908 results for Static-order-trade-off


Relevance: 100.00%

Abstract:

In systems biology, questions concerning the molecular and cellular makeup of an organism are of utmost importance, especially when trying to understand how unreliable components, such as genetic circuits, biochemical cascades, and ion channels, enable reliable and adaptive behaviour. The repertoire and speed of biological computations are limited by thermodynamic or metabolic constraints: in neurons, for example, fluctuations in biophysical states limit the information they can encode, and some 20-60% of the brain's total energy budget is used for signalling, either via action potentials or synaptic transmission. Here, we consider the imperatives for neurons to optimise computational and metabolic efficiency, wherein benefits and costs trade off against each other in the context of self-organised and adaptive behaviour. In particular, we try to link the information-theoretic (variational) and thermodynamic (Helmholtz) free-energy formulations of neuronal processing and show how they are related in a fundamental way through a complexity minimisation lemma.
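As background for the complexity argument, here is a minimal sketch of the standard variational free-energy decomposition (generic notation assumed here, not taken from the paper): the free energy of an approximate posterior q(s) over hidden states s, given observations o and a generative model p(o, s), splits into a complexity term and an accuracy term.

```latex
% Generic variational free-energy decomposition (standard notation, assumed):
F[q] \;=\; \mathbb{E}_{q(s)}\!\big[\ln q(s) - \ln p(o,s)\big]
      \;=\; \underbrace{D_{\mathrm{KL}}\!\big[q(s)\,\|\,p(s)\big]}_{\text{complexity}}
      \;-\; \underbrace{\mathbb{E}_{q(s)}\!\big[\ln p(o \mid s)\big]}_{\text{accuracy}}
```

On this reading, minimising complexity at a given accuracy is the natural point at which the information-theoretic cost of inference can be compared with a metabolic cost.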

Relevance: 100.00%

Abstract:

Identifying the determinants of neuronal energy consumption and their relationship to information coding is critical to understanding neuronal function and evolution. Three of the main determinants are cell size, ion channel density, and stimulus statistics. Here we investigate their impact on neuronal energy consumption and information coding by comparing single-compartment spiking neuron models of different sizes, with different densities of stochastic voltage-gated Na+ and K+ channels and different statistics of synaptic inputs. The largest compartments have the highest information rates but the lowest energy efficiency for a given voltage-gated ion channel density, and the highest signaling efficiency (bits per spike) for a given firing rate. For a given cell size, our models reveal that the ion channel density that maximizes energy efficiency is lower than that maximizing information rate. Low rates of small synaptic inputs improve energy efficiency, but the highest information rates occur with higher input rates and larger inputs. These relationships produce a Law of Diminishing Returns that penalizes costly excess information coding capacity, promoting the reduction of cell size, channel density, and input stimuli to the minimum possible. This suggests that the trade-off between energy and information has influenced all aspects of neuronal anatomy and physiology.
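To make the quantities concrete, the toy calculation below shows how information rate, bits per spike, and energy efficiency relate for a hypothetical neuron; the firing rate, the fraction of spike-train entropy treated as usable signal, and the energy per spike are illustrative placeholders, not values from the models in the abstract.

```python
# Toy illustration (not the paper's biophysical model): how "information rate",
# "bits per spike" and "energy efficiency" relate for a hypothetical neuron.
import numpy as np

def entropy_rate_poisson(rate_hz, dt=0.001):
    """Approximate entropy rate (bits/s) of a Poisson spike train binned at dt."""
    p = rate_hz * dt                                       # spike probability per bin
    h_bin = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))   # bits per bin
    return h_bin / dt                                      # bits per second

firing_rate = 10.0        # Hz, hypothetical
info_rate = 0.3 * entropy_rate_poisson(firing_rate)   # assume 30% of capacity is signal
energy_per_spike = 2.4e9  # ATP per spike, order-of-magnitude placeholder
energy_rate = firing_rate * energy_per_spike          # ATP per second

print(f"information rate : {info_rate:.1f} bits/s")
print(f"bits per spike   : {info_rate / firing_rate:.2f}")
print(f"energy efficiency: {info_rate / energy_rate:.2e} bits per ATP")
```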

Relevance: 100.00%

Abstract:

Subtle manipulation of mutual repulsion and polarisation effects between polar and polarisable chromophores forced into close proximity enables a major (100%) enhancement of the first hyperpolarisability together with increased transparency, breaking the well-known nonlinearity-transparency trade-off paradigm.

Relevance: 100.00%

Abstract:

A link-level reliable multicast requires a channel access protocol to resolve the collision of feedback messages sent by multicast data receivers. Several deterministic media access control protocols have been proposed that attain high reliability but incur large delay, while other protocols give only probabilistic guarantees of reliability but have the least delay. In this paper, we propose a virtual token-based channel access and feedback protocol (VTCAF) for link-level reliable multicasting. The VTCAF protocol introduces a virtual (implicit) token passing mechanism based on carrier sensing to avoid collisions between feedback messages. Delay performance is improved in VTCAF by reducing the number of feedback messages. Moreover, the VTCAF protocol is parametric in nature and can easily trade off reliability against delay as per the requirements of the underlying application. Such a cross-layer design approach would be useful for a variety of multicast applications which require reliable communication with different levels of reliability and delay performance. We have analyzed our protocol to evaluate various performance parameters at different packet loss rates and compared its performance with those of others. Our protocol has also been simulated using the Castalia network simulator to evaluate the same performance parameters. Simulation and analytical results together show that the VTCAF protocol considerably reduces average access delay while ensuring very high reliability at the same time.
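The sketch below illustrates the general idea of an implicit (virtual) token order combined with carrier sensing for collision-free feedback; it is a schematic illustration under assumed slot and ACK durations, not the actual VTCAF state machine.

```python
# Sketch of the general "implicit token via carrier sensing" idea the abstract
# describes -- NOT the actual VTCAF protocol. Each receiver defers its feedback
# by (its position in the virtual token order) x one slot, and a busy channel
# pushes it back further, so feedback messages never collide.
SLOT = 1.0  # slot duration, arbitrary time units (assumed)

def feedback_schedule(receiver_ids, ack_duration=0.4):
    """Return (receiver, start_time) pairs for collision-free feedback."""
    schedule, channel_free_at = [], 0.0
    for order, rx in enumerate(receiver_ids):   # virtual token order
        earliest = order * SLOT                 # own back-off slot
        start = max(earliest, channel_free_at)  # carrier sense: defer while busy
        schedule.append((rx, start))
        channel_free_at = start + ack_duration
    return schedule

for rx, t in feedback_schedule(["rx1", "rx2", "rx3"]):
    print(f"{rx} sends feedback at t={t:.1f}")
```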

Relevance: 100.00%

Abstract:

India needs to significantly increase its electricity consumption, in a sustainable manner, if it is to ensure rapid economic development, a goal that remains the most potent tool for delivering adaptation capacity to its poor, who will suffer the worst consequences of climate change. Resource and supply constraints on conventional energy sources, techno-economic constraints on renewable energy sources, and the bounds imposed by climate change on fossil fuel use are likely to undermine India's quest for a robust electricity system that can effectively contribute to accelerated, sustainable and inclusive economic growth. One possible way out is a transition to a sustainable electricity system, understood as a trade-off solution that takes economic, social and environmental concerns into account. As a first step toward understanding this transition, we contribute an indicator-based hierarchical multidimensional framework as an analytical tool for the sustainability assessment of electricity systems, and validate it for India's national electricity system. We evaluate the Indian electricity system using this framework by comparing it with a hypothetical benchmark sustainable electricity system constructed from the best indicator values realized across national electricity systems worldwide. This framework, we believe, can be used to examine the social, economic and environmental implications of the current Indian electricity system as well as to set targets for future development. The analysis with the indicator framework provides a deeper understanding of the system, identifies and quantifies the prevailing sustainability gaps, and generates specific targets for interventions. We use this framework to compute a national electricity system sustainability index (NESSI) for India.
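The snippet below sketches the generic shape of such an indicator-based composite index: each indicator is min-max normalised against a benchmark and aggregated with weights. The indicator names, weights and values are hypothetical and are not the NESSI data from the paper.

```python
# Illustrative sketch of a hierarchical indicator index of the kind the abstract
# describes; all names, weights and numbers below are hypothetical.
def normalise(value, worst, best):
    """Min-max normalise an indicator to [0, 1]; 1 = benchmark (best) value."""
    return (value - worst) / (best - worst)

indicators = {
    # name: (observed, worst, best, weight) -- hypothetical numbers
    "per_capita_consumption": (1200.0, 0.0, 15000.0, 0.4),
    "co2_intensity":          (0.7,    1.2, 0.1,     0.3),   # lower is better, so worst > best
    "electrification_rate":   (0.88,   0.0, 1.0,     0.3),
}

index = sum(w * normalise(v, worst, best)
            for v, worst, best, w in indicators.values())
print(f"composite sustainability index: {index:.3f}")   # 1.0 = benchmark system
```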

Relevance: 100.00%

Abstract:

Remote sensing of physiological parameters could be a cost-effective approach to improving health care, and low-power sensors are essential for remote sensing because these sensors are often energy constrained. This paper presents a power-optimized photoplethysmographic sensor interface to sense arterial oxygen saturation, a technique to dynamically trade off SNR for power during sensor operation, and a simple algorithm to choose when to acquire samples in photoplethysmography. A prototype of the proposed pulse oximeter, built using commercial off-the-shelf (COTS) components, was tested on 10 adults. The dynamic adaptation techniques described reduce power consumption considerably compared to our reference implementation, and our approach is competitive with state-of-the-art implementations. The techniques presented in this paper may be applied to low-power sensor interface designs in which acquiring samples is expensive in terms of power, as epitomized by pulse oximetry.
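As a schematic of the SNR-for-power trade, the sketch below picks the smallest amount of sample averaging that still meets an SNR target, since averaging N samples buys roughly 10·log10(N) dB of SNR at N times the acquisition energy; the energy and SNR numbers are illustrative assumptions, not measurements from the paper.

```python
# Toy sketch of trading SNR for power by choosing how many acquisitions to
# average per reading. All numbers are illustrative assumptions.
import math

E_PER_SAMPLE_UJ = 12.0   # hypothetical energy per LED-on acquisition, microjoules
BASE_SNR_DB = 18.0       # hypothetical single-sample SNR

def cheapest_averaging_factor(target_snr_db, max_n=64):
    """Smallest number of averaged samples that meets the SNR target."""
    for n in range(1, max_n + 1):
        snr_db = BASE_SNR_DB + 10.0 * math.log10(n)   # sqrt(n) amplitude gain
        if snr_db >= target_snr_db:
            return n, snr_db, n * E_PER_SAMPLE_UJ
    return max_n, BASE_SNR_DB + 10.0 * math.log10(max_n), max_n * E_PER_SAMPLE_UJ

for target in (20.0, 26.0, 30.0):
    n, snr, energy = cheapest_averaging_factor(target)
    print(f"target {target:4.1f} dB -> average {n:2d} samples, "
          f"{snr:4.1f} dB, {energy:5.1f} uJ per reading")
```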

Relevance: 100.00%

Abstract:

Cache analysis plays a very important role in obtaining precise Worst-Case Execution Time (WCET) estimates of programs for real-time systems. While Abstract Interpretation (AI) based approaches are almost universally used for cache analysis, they fail to exploit a unique feature of this setting: it is not necessary to find the guaranteed cache behavior that holds across all executions of a program; we only need the cache behavior along one particular program path, the path with the maximum execution time. In this work, we introduce the concept of cache miss paths, which allows us to use worst-case path information to improve the precision of AI-based cache analysis. We use Abstract Interpretation to determine the cache miss paths and then integrate them into the IPET formulation. An added advantage is that this further allows us to use infeasible-path information for cache analysis. Experimentally, our approach gives more precise WCETs than AI-based cache analysis alone, and we also provide techniques to trade off analysis time against precision for scalability.
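The toy example below shows why path information tightens the bound: a purely path-insensitive classification must charge an access as a miss on every path if it misses on any of them, whereas knowing on which path the miss actually occurs can lower the WCET. The control-flow graph, timings and miss penalty are made up for illustration and are unrelated to the paper's benchmarks or its IPET encoding.

```python
# Tiny illustration of the benefit of path-sensitive cache information.
MISS_PENALTY = 100                                   # cycles per cache miss (assumed)
BLOCK_TIME = {"A": 10, "B": 20, "C": 80, "D": 10}    # hit-case execution cycles (assumed)
PATHS = [("A", "B", "D"), ("A", "C", "D")]           # simple if-then-else CFG

# Block D reuses a cache line that survives along A-C-D but is evicted by B.
def path_cost(path, path_sensitive):
    cycles = sum(BLOCK_TIME[b] for b in path)
    d_misses = ("B" in path) if path_sensitive else True   # AI join: always assume a miss
    return cycles + (MISS_PENALTY if d_misses else 0)

wcet_ai_only = max(path_cost(p, path_sensitive=False) for p in PATHS)  # 200 cycles
wcet_refined = max(path_cost(p, path_sensitive=True) for p in PATHS)   # 140 cycles
print(wcet_ai_only, wcet_refined)
```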

Relevance: 100.00%

Abstract:

Multicast in wireless sensor networks (WSNs) is an efficient way to spread the same data to multiple sensor nodes. It is made more effective by the broadcast nature of the wireless link, where a message transmitted by one source is inherently received by all one-hop receivers, so there is no need to transmit the message to each receiver individually. Reliable multicast in WSNs is desirable for critical tasks such as code updates and query-based data collection. The erroneous nature of the wireless medium, coupled with the limited resources of sensor nodes, makes the design of a reliable multicast protocol a challenging task. In this work, we propose a time division multiple access (TDMA) based energy-aware media access and control (TEA-MAC) protocol for reliable multicast in WSNs. TDMA eliminates collisions, overhearing and idle listening, which are the main sources of reliability degradation and energy consumption. Furthermore, the proposed protocol is parametric in the sense that it can be used to trade off reliability against energy and delay as per the requirements of the underlying applications. The performance of TEA-MAC has been evaluated by simulating it using the Castalia network simulator. Simulation results show that TEA-MAC considerably improves the performance of multicast communication in WSNs.
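A back-of-the-envelope sketch of the kind of reliability/delay/energy knob such a parametric TDMA schedule exposes is shown below: adding retransmission slots per round raises the delivery probability but lengthens the round and costs energy. The loss rate, slot length and per-transmission energy are assumed values, not TEA-MAC parameters.

```python
# Illustrative reliability/delay/energy trade-off for a parametric TDMA round.
P_LOSS = 0.2     # per-transmission loss probability on the wireless link (assumed)
SLOT_MS = 10.0   # TDMA slot length (assumed)
E_TX_MJ = 0.5    # hypothetical energy per transmission, millijoules

def round_metrics(retx_slots):
    """Delivery probability, round delay and worst-case TX energy per packet."""
    attempts = 1 + retx_slots
    p_delivered = 1.0 - P_LOSS ** attempts
    return p_delivered, attempts * SLOT_MS, attempts * E_TX_MJ

for k in range(4):
    p, delay, energy = round_metrics(k)
    print(f"{k} retx slots: reliability {p:.4f}, delay {delay:4.0f} ms, "
          f"energy <= {energy:.1f} mJ")
```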

Relevance: 100.00%

Abstract:

Small-size actuators (8 mm x 1 mm) of IPMNC (RuO2/Nafion) and IPMNC (LbL/CNC) are studied for flapping at the frequencies of insects and compared to platinum IPMC (IPMC-Pt). Flapping-wing actuators based on IPMNC (RuO2/Nafion) are modeled at the sizes of three dragonfly species. To achieve maximum actuation performance with a Sympetrum frequens-scale actuator with optimized Young's modulus, the effect of varying the thickness of the electrode and Nafion regions of the actuator is studied. A trade-off between electrode thickness and Young's modulus for the dragonfly-size IPMNC-RuO2/Nafion actuator is essential to achieve the desired flapping performance.

Relevance: 100.00%

Abstract:

We present 3D thermo-electro-mechanical device simulations of a novel, fully CMOS-compatible MOSFET gas sensor operating on an SOI membrane. A comprehensive stress analysis of a Si-SiO2-based multilayer membrane has been performed to ensure a high degree of mechanical reliability at high operating temperatures (up to 400°C). Moreover, the layout dimensions of the SOI membrane, in particular the aspect ratio between membrane length and membrane thickness, have been optimised to find the best trade-off between minimal device power consumption and acceptable mechanical stress.


Relevance: 100.00%

Abstract:

Singular Value Decomposition (SVD) is a key linear algebraic operation in many scientific and engineering applications. In particular, many computational intelligence systems rely on machine learning methods involving high-dimensionality datasets that have to be processed quickly for real-time adaptability. In this paper we describe a practical FPGA (Field Programmable Gate Array) implementation of an SVD processor for accelerating the solution of large LSE problems. The design approach has been comprehensive, from algorithmic refinement through numerical analysis to customization for an efficient hardware realization. The processing scheme rests on an adaptive vector-rotation evaluator for error regularization that enhances convergence speed with no penalty on solution accuracy. The proposed architecture, which follows a data-transfer scheme, is scalable and based on the interconnection of simple rotation units, which allows a trade-off between occupied area and processing acceleration in the final implementation. This permits the SVD processor to be implemented on both low-cost and high-end FPGAs, according to the final application requirements.
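For orientation, the sketch below is a plain software implementation of one-sided Jacobi SVD, the classic scheme built from simple plane rotations that rotation-unit hardware designs of this kind are usually derived from; it is a generic illustration, not the paper's architecture or its adaptive rotation evaluator.

```python
# One-sided (Hestenes) Jacobi SVD built purely from plane rotations.
import numpy as np

def one_sided_jacobi_svd(A, tol=1e-12, max_sweeps=30):
    """Generic one-sided Jacobi SVD; assumes A has full column rank."""
    A = np.array(A, dtype=float)
    n = A.shape[1]
    V = np.eye(n)
    for _ in range(max_sweeps):
        converged = True
        for p in range(n - 1):
            for q in range(p + 1, n):
                app, aqq = A[:, p] @ A[:, p], A[:, q] @ A[:, q]
                apq = A[:, p] @ A[:, q]
                if abs(apq) <= tol * np.sqrt(app * aqq):
                    continue                           # columns already orthogonal
                converged = False
                # Plane rotation that orthogonalises columns p and q.
                tau = (aqq - app) / (2.0 * apq)
                t = 1.0 if tau == 0 else np.sign(tau) / (abs(tau) + np.hypot(1.0, tau))
                c = 1.0 / np.sqrt(1.0 + t * t)
                s = c * t
                rot = np.array([[c, s], [-s, c]])
                A[:, [p, q]] = A[:, [p, q]] @ rot      # rotate the data columns
                V[:, [p, q]] = V[:, [p, q]] @ rot      # accumulate right singular vectors
        if converged:
            break
    sigma = np.linalg.norm(A, axis=0)                  # singular values
    U = A / sigma                                      # left singular vectors
    return U, sigma, V

M = np.random.rand(6, 4)
U, s, V = one_sided_jacobi_svd(M)
print(np.allclose(U @ np.diag(s) @ V.T, M))            # True: M = U S V^T
```

Because every update touches only a pair of columns, the same computation maps naturally onto an array of independent rotation units, which is why area can be traded against throughput in a hardware realization.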

Relevance: 100.00%

Abstract:

Abstract to Part I

The inverse problem of seismic wave attenuation is solved by an iterative back-projection method. The seismic wave quality factor, Q, can be estimated approximately by inverting S-to-P amplitude ratios. The effects of various uncertainties in the method are tested, and attenuation tomography is shown to be useful for resolving spatial variations in attenuation structure and for estimating the effective seismic quality factor of attenuating anomalies.
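For context, such amplitude-based Q inversions rest on the standard whole-path attenuation operator t*; the general form is sketched below (generic relation, not the thesis's specific parameterisation), where G collects geometric and site terms, v is the wave speed and the integral runs along the ray.

```latex
% Spectral amplitude decay along a ray through an attenuating medium:
A(f) \;=\; A_0(f)\, G\, e^{-\pi f t^{*}},
\qquad
t^{*} \;=\; \int_{\mathrm{ray}} \frac{ds}{v(s)\, Q(s)}
```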

Back-projection attenuation tomography is applied to two cases in southern California: the Imperial Valley and the Coso-Indian Wells region. In the Coso-Indian Wells region, a highly attenuating body (S-wave quality factor Q_β ≈ 30) coincides with a slow P-wave anomaly mapped by Walck and Clayton (1987). This coincidence suggests the presence of a magmatic or hydrothermal body 3 to 5 km deep in the Indian Wells region. In the Imperial Valley, slow P-wave travel-time anomalies and highly attenuating S-wave anomalies were found in the Brawley seismic zone at a depth of 8 to 12 km. The effective S-wave quality factor is very low (Q_β ≈ 20) and the P-wave velocity is 10% slower than in the surrounding areas. These results suggest either magmatic or hydrothermal intrusions, or fractures at depth, possibly related to active shear in the Brawley seismic zone.

No-block inversion is a generalized tomographic method utilizing the continuous form of an inverse problem. The inverse problem of attenuation can be posed in a continuous form, and the no-block inversion technique is applied to the same data set used in the back-projection tomography. A relatively small data set with little redundancy enables us to apply both techniques to a similar degree of resolution. The results obtained by the two methods are very similar. By applying the two methods to the same data set, formal errors and resolution can be directly computed for the final model, and the objectivity of the final result can be enhanced.

Both methods of attenuation tomography are applied to a data set of local earthquakes in Kilauea, Hawaii, to solve for the attenuation structure under Kilauea and the East Rift Zone. The shallow Kilauea magma chamber, the East Rift Zone and the Mauna Loa magma chamber are delineated as attenuating anomalies. Detailed inversion reveals shallow secondary magma reservoirs at Mauna Ulu and Puu Oo, the present sites of volcanic eruptions. The Hilina Fault zone is highly attenuating, dominating the attenuating anomalies at shallow depths. The magma conduit system along the summit and the East Rift Zone of Kilauea shows up as a continuous supply channel extending down to a depth of approximately 6 km. The Southwest Rift Zone, on the other hand, is not delineated by attenuating anomalies, except at a depth of 8-12 km, where an attenuating anomaly is imaged west of Puu Kou. The Mauna Loa chamber is seated at a deeper level (about 6-10 km) than the Kilauea magma chamber. Resolution in the Mauna Loa area is not as good as in the Kilauea area, and there is a trade-off between the depth extent of the magma chamber imaged under Mauna Loa and the error due to poor ray coverage. The Kilauea magma chamber, on the other hand, is well resolved, according to a resolution test done at the location of the magma chamber.

Abstract to Part II

Long-period seismograms recorded at Pasadena from earthquakes occurring along a profile to the Imperial Valley are studied in terms of source phenomena (e.g., source mechanisms and depths) versus path effects. Some of the events have known source parameters, determined by teleseismic or near-field studies, and are used as master events in a forward modeling exercise to derive the Green's functions (SH displacements at Pasadena due to a pure strike-slip or dip-slip mechanism) that describe the propagation effects along the profile. Both the timing and the waveforms of the records are matched by synthetics calculated from 2-dimensional velocity models. The best 2-dimensional section begins at the Imperial Valley with a thin crust containing the basin structure and thickens towards Pasadena. The detailed nature of the transition zone at the base of the crust controls the early-arriving shorter periods (strong motions), while the edge of the basin controls the scattered longer-period surface waves. From the waveform characteristics alone, shallow events in the basin are easily distinguished from deep events, and the amount of strike-slip versus dip-slip motion is also easily determined. Events rupturing the sediments, such as the 1979 Imperial Valley earthquake, can be recognized easily by a late-arriving scattered Love wave that has been delayed by the very slow path across the shallow valley structure.

Relevance: 100.00%

Abstract:

In this paper, we investigate the grating erasure of a reduced LiNbO3:Fe crystal at different erasing wavelengths. Owing to strong absorption, the overall hologram evolution during erasure is non-exponential, contrary to the usual mono-exponential law. The hologram in the rear part of the crystal can persist for a long time during erasure, because strong absorption leaves only a weak erasing intensity there, which enlarges the erasure time constant. From the erasure experiments, a global absorption of αd ≈ 5 can be taken as the optimum for a good trade-off between sensitivity and hologram strength in the crystal.
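A generic way to see why strong absorption makes the decay non-exponential (a sketch under standard photorefractive assumptions, not the paper's fit): the erasing intensity falls off with depth, the local erasure time constant scales inversely with that intensity, and the measured diffraction efficiency sums contributions with very different time constants.

```latex
% Depth-dependent erasure under Beer-Lambert absorption (illustrative model):
I(z) = I_0\, e^{-\alpha z}, \qquad
\tau(z) = \tau_0\, \frac{I_0}{I(z)} = \tau_0\, e^{\alpha z}, \qquad
\eta(t) \;\propto\; \left| \frac{1}{d}\int_0^{d} e^{-t/\tau(z)}\, dz \right|^{2}
```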