232 results for Pseudo-Kahler metric


Relevance:

10.00%

Publisher:

Abstract:

The heat of adsorption of methane, ethane, carbon dioxide, R-507a and R-134a on several specimens of microporous activated carbons is derived from experimental adsorption data fitted to the Dubinin-Astakhov equation. These adsorption results are compared with literature data obtained from calorimetric measurements and from the pressure-temperature relation during isosteric heating/cooling. Because the adsorbed-phase volume plays an important role, its dependence on temperature and pressure needs to be assessed correctly. In addition, for supercritical gas adsorption, the evaluation of the pseudo-saturation pressure requires judicious treatment. The evaluation of carbon dioxide adsorption shows that subcritical and supercritical adsorption exhibit different temperature dependences of the isosteric heat of adsorption. The temperature and loading dependence of this property needs to be taken into account when designing practical systems. Some practical implications of these findings are enumerated.
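
For reference, the Dubinin-Astakhov isotherm and the isosteric-heat relation it feeds into are standard; the abstract does not reproduce them, so the textbook forms are given here:

```latex
% Dubinin-Astakhov isotherm: volume adsorbed W at temperature T and pressure p
W = W_0 \exp\left[-\left(\frac{A}{E}\right)^{n}\right], \qquad
A = R T \ln\frac{p_s}{p}
% W_0: limiting micropore volume, E: characteristic energy, n: heterogeneity
% exponent, p_s: saturation pressure (pseudo-saturation pressure above T_c).

% Isosteric heat at constant loading, from the Clausius-Clapeyron relation:
q_{st} = -R \left[\frac{\partial \ln p}{\partial (1/T)}\right]_{W}
```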

Relevance:

10.00%

Publisher:

Abstract:

We consider a visual search problem studied by Sripati and Olson, where the objective is to identify an oddball image embedded among multiple distractor images as quickly as possible. We model this visual search task as an active sequential hypothesis testing problem (ASHT problem). Chernoff in 1959 proposed a policy for which the expected delay to decision is asymptotically optimal as the error probabilities vanish. We first prove a stronger property on the moments of the delay until a decision, under the same asymptotics. Applying the result to the visual search problem, we then propose a "neuronal metric" on the measured neuronal responses that captures the discriminability between images. From an empirical study we obtain a remarkable correlation (r = 0.90) between the proposed neuronal metric and the speed of discrimination between the images. Although this correlation is lower than that of the L1 metric used by Sripati and Olson, the proposed metric has the advantage of being firmly grounded in formal decision theory.
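
The abstract does not give the form of the neuronal metric; the sketch below shows one decision-theoretic candidate of the kind the ASHT framework suggests, assuming Poisson firing-rate models per neuron and a symmetrized Kullback-Leibler discriminability (the function names, the Poisson assumption, and the combination rule are all illustrative):

```python
import numpy as np

def poisson_kl(r1, r2):
    """KL divergence D(Poisson(r1) || Poisson(r2)) for rates r1, r2 > 0."""
    return r2 - r1 + r1 * np.log(r1 / r2)

def neuronal_distance(rates_a, rates_b):
    """Illustrative discriminability between two images: symmetrized Poisson
    KL divergence summed over the recorded neurons (assumed form)."""
    rates_a, rates_b = np.asarray(rates_a), np.asarray(rates_b)
    return float(np.sum(poisson_kl(rates_a, rates_b)
                        + poisson_kl(rates_b, rates_a)))

# Toy usage: firing rates (spikes/s) of four neurons to two images.
print(neuronal_distance([12.0, 5.0, 20.0, 8.0], [10.0, 7.0, 15.0, 8.5]))
```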

Relevance:

10.00%

Publisher:

Abstract:

Our work is motivated by geographical forwarding of sporadic alarm packets to a base station in a wireless sensor network (WSN) whose nodes sleep-wake cycle periodically and asynchronously. We seek to develop local forwarding algorithms that can be tuned to trade off the end-to-end delay against a total cost, such as the hop count or total energy. Our approach is to solve, at each forwarding node en route to the sink, the local forwarding problem of minimizing the one-hop waiting delay subject to a lower-bound constraint on a suitable reward offered by the next-hop relay; the constraint serves to tune the tradeoff. The reward metric used for the local problem is based on the end-to-end total cost objective (for instance, when the total cost is hop count, we choose the progress toward the sink made by a relay as the reward). The forwarding node, to begin with, is uncertain about the number of relays, their wake-up times, and the reward values, but knows the probability distributions of these quantities. At each relay wake-up instant, when a relay reveals its reward value, the forwarding node must decide whether to forward the packet or to wait for further relays to wake up. In terms of the operations research literature, our work can be considered a variant of the asset selling problem. We formulate our local forwarding problem as a partially observable Markov decision process (POMDP) and obtain inner and outer bounds for the optimal policy. Motivated by the computational complexity of the policies derived from these bounds, we formulate an alternate simplified model, the optimal policy for which is a simple threshold rule. We provide simulation results comparing the performance of the inner- and outer-bound policies against the simple policy, and also against the optimal policy when the source knows the exact number of relays. Observing the good performance and the ease of implementation of the simple policy, we apply it to our motivating problem, i.e., local geographical routing of sporadic alarm packets in a large WSN. We compare the end-to-end performance (i.e., average total delay and average total cost) obtained by the simple policy, when used for local geographical forwarding, against that obtained by the globally optimal forwarding algorithm proposed by Kim et al. [1].
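
A minimal sketch of the simple threshold rule mentioned above, under illustrative assumptions: the forwarder observes (wake-up time, reward) pairs as relays wake, may hold on to the best relay seen so far, and forwards as soon as that best reward clears a threshold (in the paper the threshold comes from the optimal policy of the simplified model; here it is just a parameter):

```python
import random

def forward_with_threshold(wakeups, threshold):
    """wakeups: list of (wake_time, reward) pairs sorted by wake_time.
    Forward at the first wake-up instant where the best reward seen so far
    reaches the threshold; otherwise forward to the overall best relay
    after the last wake-up."""
    best_time, best_reward = None, float("-inf")
    for t, reward in wakeups:
        if reward > best_reward:
            best_time, best_reward = t, reward
        if best_reward >= threshold:
            return t, best_reward          # stop waiting, forward now
    return wakeups[-1][0], best_reward     # all relays seen, take the best

# Toy usage: 5 relays wake uniformly in [0, 1) with rewards in [0, 1).
wakeups = sorted((random.random(), random.random()) for _ in range(5))
print(forward_with_threshold(wakeups, threshold=0.8))
```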

Relevance:

10.00%

Publisher:

Abstract:

Data prefetchers identify and exploit any regularity present in the history/training stream to predict future references and prefetch them into the cache. The training information used is typically the primary misses seen at a particular cache level, which is a filtered version of the accesses seen by that cache. In this work we demonstrate that extending the training information to include secondary misses and hits, along with primary misses, helps improve the performance of prefetchers. In addition to empirical evaluation, we use the information-theoretic metric of entropy to quantify the regularity present in extended histories. Entropy measurements indicate that extended histories are more regular than the default primary-miss-only training stream, and they also help corroborate our empirical findings. With extended histories, further benefits can be achieved by triggering prefetches on secondary misses as well. In this paper we explore the design space of extended prefetch histories and alternative prefetch trigger points for delta-correlation prefetchers. We observe that different prefetch schemes benefit to different extents from extended histories and alternative trigger points, and the best-performing design point varies on a per-benchmark basis. To meet these requirements, we propose a simple adaptive scheme that identifies the best-performing design point for a benchmark-prefetcher combination at runtime. On SPEC2000 benchmarks, using all the L2 accesses as history for the prefetcher improves performance, in terms of both IPC and misses reduced, over techniques that use only primary misses as history. The adaptive scheme improves the performance of the CZone prefetcher over the baseline by 4.6% on average. These performance gains are accompanied by a moderate reduction in memory traffic requirements.
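
A minimal sketch of the entropy measurement described above, assuming first-order Shannon entropy over the delta stream of an access history (the paper's exact entropy formulation may differ); the toy example shows how filtering a regular stream down to primary misses only raises its entropy:

```python
from collections import Counter
from math import log2

def delta_entropy(addresses):
    """Shannon entropy (bits/symbol) of the delta stream of an access
    history; lower entropy means a more regular, more predictable stream."""
    deltas = [b - a for a, b in zip(addresses, addresses[1:])]
    counts = Counter(deltas)
    n = len(deltas)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A strided full access stream (hits + all misses) is perfectly regular;
# irregularly dropping entries, as primary-miss filtering does, is not.
full_history = list(range(0, 640, 64))                 # constant delta -> 0 bits
primary_only = [a for i, a in enumerate(full_history) if i % 3 != 1]
print(delta_entropy(full_history), delta_entropy(primary_only))
```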

Relevance:

10.00%

Publisher:

Abstract:

In the last decade, there has been tremendous interest in graphene transistors. Their greatest advantage for CMOS nanoelectronics applications is that graphene is compatible with planar CMOS technology and potentially offers excellent short-channel properties. Because of the zero bandgap, the MOSFET cannot be turned off efficiently, and hence the typical on-current to off-current ratio (Ion/Ioff) has been less than 10. Several techniques have been proposed to open a bandgap in graphene. It has been demonstrated, both theoretically and experimentally, that graphene nanoribbons (GNRs) show a bandgap that is inversely proportional to their width; GNRs about 20 nm wide have bandgaps in the range of 100 meV. But it is very difficult to obtain GNRs with well-defined edges. An alternative technique to open the bandgap is to use bilayer graphene (BLG) with an asymmetric bias applied perpendicular to its plane. Another important CMOS metric, the subthreshold slope, is also limited by the inability to turn off the transistor. However, these devices could be attractive for RF CMOS applications, although even for analog and RF applications the non-saturating behavior of the drain current can be an issue. Although some studies have reported current saturation, the mechanisms are still not very clear. In this talk we present some of our recent findings, based on simulations and experiments, and propose possible solutions to obtain a high on-current to off-current ratio. A detailed study of high-field transport in graphene transistors, relevant for analog and RF applications, will also be presented.
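
The inverse-width scaling quoted above fixes the prefactor, at least to order of magnitude, from the abstract's own numbers (about 100 meV at 20 nm; reported prefactors vary across experiments):

```latex
E_g \approx \frac{\alpha}{W}, \qquad
\alpha \approx E_g \, W \approx 0.1\ \text{eV} \times 20\ \text{nm}
       = 2\ \text{eV·nm}
% so narrowing the ribbon to 10 nm would roughly double the gap to ~200 meV.
```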

Relevance:

10.00%

Publisher:

Abstract:

Metal-ion (Ag, Co, Ni, and Pd) doped TiO2 nanocatalysts were successfully embedded on carbon-covered alumina (CCA) supports. The CCA-embedded catalysts were crystalline and had a high surface area compared to the free metal-ion doped titania nanocatalysts, while still retaining the anatase phase of the core TiO2. These catalysts were photocatalytically active under solar light irradiation. Rhodamine B was used as a model pollutant, and the reactivity followed pseudo-first-order reaction kinetics. The reaction rates of the CCA-supported catalysts followed the order Pd > Ag > Co > Ni. Among the CCA:catalyst ratios used, the 1:1 ratio gave the fastest reaction rate, followed by the 1:2 ratio, while the 2:1 ratio exhibited the lowest reaction rate. The CCA/metal-ion doped titania were found to have photocatalytic activities comparable with those of CCA-supported titania.
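
A minimal sketch of the pseudo-first-order analysis implied above, fitting ln(C0/C) = k_app * t by least squares; the concentration values below are made-up placeholders, not data from the paper:

```python
import numpy as np

def pseudo_first_order_k(t, c):
    """Fit ln(C0/C) = k*t through the origin; returns k_app (1/min)."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    y = np.log(c[0] / c)
    return float(np.sum(t * y) / np.sum(t * t))  # zero-intercept least squares

# Placeholder Rhodamine B concentrations (mg/L) over irradiation time (min).
t = [0, 10, 20, 30, 40]
c = [10.0, 7.4, 5.5, 4.1, 3.0]
print(f"k_app = {pseudo_first_order_k(t, c):.4f} 1/min")
```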

Relevance:

10.00%

Publisher:

Abstract:

Opportunistic selection is a practically appealing technique used in multi-node wireless systems to maximize throughput, implement proportional fairness, etc. However, selection is challenging because the information about a node's channel gains is often available only locally at each node and not centrally. We propose a novel multiple-access-based distributed selection scheme that generalizes the best features of the timer scheme, which requires minimal feedback but does not always guarantee successful selection, and the fast splitting scheme, which requires more feedback but guarantees successful selection. The proposed scheme's design explicitly accounts for feedback time overheads, unlike the conventional splitting scheme, and guarantees selection of the user with the highest metric, unlike the timer scheme. We analyze and minimize the average time, including feedback, that the scheme requires to select. With feedback overheads accounted for, the proposed scheme is scalable and considerably faster than several schemes proposed in the literature, and its gains increase as the feedback overhead increases.
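
For context, a minimal sketch of the timer scheme that the proposed hybrid generalizes: each node maps its metric through a decreasing function to a timer and transmits when its timer expires, so the best node transmits first, but selection fails when the two earliest timers are too close to resolve (the linear mapping, the [0, 1] metric normalization, and the vulnerability window are illustrative):

```python
import random

def timer_select(metrics, t_max=1.0, vulnerability=0.05):
    """Each node i sets timer t_i = t_max * (1 - metric_i), a decreasing map,
    so the highest metric fires first. Returns the selected node's index,
    or None on a collision (two earliest timers within the vulnerability
    window), which is why the timer scheme cannot guarantee selection."""
    timers = sorted((t_max * (1.0 - m), i) for i, m in enumerate(metrics))
    if len(timers) > 1 and timers[1][0] - timers[0][0] < vulnerability:
        return None
    return timers[0][1]

# Toy usage: normalized channel-gain metrics of 6 nodes.
print(timer_select([random.random() for _ in range(6)]))
```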

Relevance:

10.00%

Publisher:

Abstract:

This paper analyzes the error exponents in Bayesian decentralized spectrum sensing, i.e., the detection of occupancy of the primary spectrum by a cognitive radio, with the probability of error as the performance metric. At the individual sensors, the error exponents of a Central Limit Theorem (CLT) based detection scheme are analyzed. At the fusion center, a K-out-of-N rule is employed to arrive at the overall decision. It is shown that, in the presence of fading, for a fixed number of sensors, the error exponents with respect to the number of observations, both at the individual sensors and at the fusion center, are zero. This motivates the development of the error exponent with a certain probability as a novel metric for comparing different detection schemes in the presence of fading. The metric is useful, for example, in answering the question of whether to sense for a pilot tone in a narrow band (and suffer Rayleigh fading) or to sense the entire wide-band signal (and suffer log-normal shadowing), in terms of error exponent performance. The error exponents with a certain probability at both the individual sensors and the fusion center are derived, under both Rayleigh fading and log-normal shadowing. Numerical results illustrate and provide a visual feel for the theoretical expressions obtained.
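
A minimal sketch of the K-out-of-N fusion rule employed at the fusion center (the decisions shown are illustrative):

```python
def k_out_of_n(decisions, k):
    """Declare the primary present iff at least k of the n local one-bit
    decisions are 1. k=1 is the OR rule, k=n the AND rule."""
    return int(sum(decisions) >= k)

# Toy usage: 5 sensors, majority rule (k=3).
print(k_out_of_n([1, 0, 1, 1, 0], k=3))   # -> 1 (occupied)
```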

Relevance:

10.00%

Publisher:

Abstract:

This paper presents an improved hierarchical clustering algorithm for the land-cover mapping problem using a quasi-random distribution. Initially, Niche Particle Swarm Optimization (NPSO) with a pseudo/quasi-random distribution is used to split the data into a number of cluster centers satisfying the Bayesian Information Criterion (BIC). The main objective is to search for and locate the best possible number of clusters and their centers. NPSO, which depends strongly on the initial distribution of particles in the search space, has not been exploited to its full potential. In this study, we compare the more uniformly distributed quasi-random distribution against the pseudo-random distribution within NPSO for splitting the data set; the Faure method is used to generate the quasi-random distribution. The performance of previously proposed methods, namely K-means, Mean Shift Clustering (MSC), and NPSO with a pseudo-random distribution, is compared with the proposed approach, NPSO with a quasi-random (Faure) distribution. These algorithms are applied to a synthetic data set and a multi-spectral satellite image (Landsat 7 Thematic Mapper). From the results obtained, we conclude that using a quasi-random sequence with NPSO in the hierarchical clustering algorithm yields more accurate data classification.
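
A compact illustrative implementation of the Faure construction mentioned above: a prime base b >= dimension is chosen, and coordinate j of each point applies the Pascal matrix mod b to the base-b digits of the index j times (coordinate 0 is the plain van der Corput radical inverse). This is a sketch for intuition, not the paper's code:

```python
from math import comb

def faure_point(n, dim, base):
    """n-th point of the Faure sequence in [0, 1)^dim.
    base must be a prime >= dim."""
    digits = []                        # base-b digits of n, least significant first
    while n:
        digits.append(n % base)
        n //= base
    point = []
    for _ in range(dim):
        point.append(sum(d * base ** -(i + 1) for i, d in enumerate(digits)))
        # Next coordinate: multiply digit vector by the Pascal matrix mod base.
        digits = [sum(comb(k, i) * digits[k]
                      for k in range(i, len(digits))) % base
                  for i in range(len(digits))]
    return point

# First few 2-D Faure points in base 2 (smallest prime >= dimension).
print([faure_point(n, dim=2, base=2) for n in range(1, 5)])
```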

Relevance:

10.00%

Publisher:

Abstract:

This paper considers the problem of identifying the footprints of communication of multiple transmitters in a given geographical area. To do this, a number of sensors are deployed at arbitrary but known locations in the area, and their individual decisions regarding the presence or absence of the transmitters' signal are combined at a fusion center to reconstruct the spatial spectral usage map. One straightforward scheme to construct this map is to query each of the sensors in round-robin fashion and cluster the sensors that detect the primary's signal. However, exploiting the fact that a typical transmitter footprint map is a sparse image, two novel compressive sensing based schemes are proposed, which require significantly fewer transmissions than the querying scheme. A key feature of the proposed schemes is that the measurement matrix is constructed from a pseudo-random binary phase shift applied to the decision of each sensor prior to transmission. The measurement matrix is thus a binary ensemble that satisfies the restricted isometry property. The number of measurements needed for accurate footprint reconstruction is determined using compressive sampling theory. The three schemes are compared through simulations in terms of a performance measure that quantifies the accuracy of the reconstructed spatial spectral usage map. It is found that the proposed sparse-reconstruction-based schemes significantly outperform the round-robin querying scheme.
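
A minimal sketch of how the measurement ensemble described above can be formed, assuming each sensor applies an independent equiprobable +/-1 pseudo-random phase to its one-bit decision in each measurement slot (sizes, scaling, and the recovery step are illustrative; any standard l1 solver can perform the reconstruction):

```python
import numpy as np

rng = np.random.default_rng(0)

n_sensors, n_measurements = 256, 64
decisions = np.zeros(n_sensors)        # sparse map: few sensors detect the signal
decisions[rng.choice(n_sensors, size=8, replace=False)] = 1.0

# Pseudo-random binary phase shifts: Phi[m, s] in {-1, +1}. Such Bernoulli
# ensembles satisfy the restricted isometry property with high probability.
Phi = rng.choice([-1.0, 1.0],
                 size=(n_measurements, n_sensors)) / np.sqrt(n_measurements)

y = Phi @ decisions    # fusion center receives one coherent sum per slot
# Recover `decisions` from (y, Phi) with any sparse solver, e.g. basis pursuit:
#   minimize ||x||_1  subject to  Phi @ x = y
print(y[:4])
```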

Relevance:

10.00%

Publisher:

Abstract:

Chronic recording of neural signals is indispensable for designing efficient brain-machine interfaces and for elucidating human neurophysiology. The advent of multichannel microelectrode arrays has driven the need for electronics that can record neural signals from many neurons. The dynamic range of the system is limited by the background system noise, which varies over time. We propose a neural amplifier in UMC 130 nm, 2P8M CMOS technology. It can be biased adaptively from 200 nA to 2 µA, modulating the input-referred noise from 9.92 µV to 3.9 µV. We also describe a low-noise design technique that minimizes the noise contribution of the load circuitry. The amplifier passes signals from 5 Hz to 7 kHz while rejecting input DC offsets at the electrode-electrolyte interface. The bandwidth of the amplifier can be tuned by the pseudo-resistor for selectively recording local field potentials (LFP) or extracellular action potentials (EAP). The amplifier achieves a mid-band voltage gain of 37 dB and minimizes the attenuation of the signal from the neuron to the gate of the input transistor. It is used in a fully differential configuration to reject the noise of the bias circuitry and to achieve high PSRR.
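
As a quick check on scale, the quoted mid-band gain converts from decibels to a linear voltage gain as:

```latex
A_v = 10^{37/20} \approx 70.8\ \text{V/V}
```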

Relevance:

10.00%

Publisher:

Abstract:

This paper presents methodologies for incorporating phasor measurements into a conventional state estimator. The angle measurements obtained from Phasor Measurement Units (PMUs) are handled as angle-difference measurements rather than being incorporated directly; handling them in this manner overcomes the problems arising from the choice of reference bus. Current measurements obtained from PMUs are treated as equivalent pseudo-voltage measurements at the neighboring buses. Two solution approaches, namely a normal-equations approach and a linear-programming approach, are presented to show how the PMU measurements can be handled, and a comparative evaluation of the two approaches is also presented. Test results on the IEEE 14-bus system validate both approaches.
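
A minimal sketch of the pseudo-voltage conversion described above, assuming a simple series-impedance branch model with shunt elements neglected (all numerical values are placeholders):

```python
import cmath

def pseudo_voltage(v_i, i_ij, z_ij):
    """Convert a PMU branch-current phasor into an equivalent voltage
    measurement at the neighboring bus j: V_j = V_i - Z_ij * I_ij
    (series-impedance branch model, shunts neglected)."""
    return v_i - z_ij * i_ij

# Placeholder per-unit phasors: V_i = 1.02 at 0 deg, I_ij = 0.5 at -20 deg.
v_i = cmath.rect(1.02, 0.0)
i_ij = cmath.rect(0.5, -20 * cmath.pi / 180)
z_ij = complex(0.02, 0.06)
v_j = pseudo_voltage(v_i, i_ij, z_ij)
print(abs(v_j), cmath.phase(v_j) * 180 / cmath.pi)
```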

Relevance:

10.00%

Publisher:

Abstract:

We propose a set of metrics that evaluate the uniformity, sharpness, continuity, noise, stroke-width variance, pulse-width ratio, transient pixel density, entropy, and variance of components to quantify the quality of a document image. The measures are intended to be used in any optical character recognition (OCR) engine to estimate a priori the expected performance of the OCR. The suggested measures have been evaluated on many document images in different scripts. The quality of each document image is manually annotated by users to create a ground truth, and the idea is to correlate the values of the measures with the user-annotated data: if a calculated measure matches the annotated description, the metric is accepted; otherwise it is rejected. Of the metrics proposed, some are accepted and the rest are rejected. We have defined metrics that are easily estimable. The metrics proposed in this paper are based on feedback from home-grown OCR engines for Indic (Tamil and Kannada) languages. The metrics are independent of the script, and depend only on the quality and age of the paper and the printing. Experiments and results for each proposed metric are discussed. Actual recognition of the printed text is not performed to evaluate the proposed metrics. Occasionally, a document image containing broken characters is rated as good by the evaluated metrics; this remains an unsolved challenge. The proposed measures work on grayscale document images and fail to provide reliable information on binarized document images.
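
A minimal sketch of one of the listed measures, the entropy of a document image, computed here over the gray-level histogram (the paper's precise definition is not given in the abstract; this is the usual histogram form):

```python
import numpy as np

def gray_entropy(image):
    """Shannon entropy (bits) of the gray-level histogram of an 8-bit image;
    one of several cues for predicting OCR performance on a page."""
    hist = np.bincount(np.asarray(image, dtype=np.uint8).ravel(),
                       minlength=256)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

# Toy usage: a clean bi-level page has low entropy; a noisy scan, higher.
clean = np.full((64, 64), 255, dtype=np.uint8)
clean[8:16, :] = 0                                     # a band of "text" pixels
noisy = np.clip(clean + np.random.default_rng(0).normal(0, 25, clean.shape),
                0, 255)
print(gray_entropy(clean), gray_entropy(noisy))
```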

Relevance:

10.00%

Publisher:

Abstract:

The component- and system-reliability-based design of bridge abutments under earthquake loading is presented in this paper. A planar failure surface is used in conjunction with the pseudo-dynamic approach to compute seismic active earth pressures on an abutment. The pseudo-dynamic method considers the effect of the phase difference in shear waves and soil amplification, along with the horizontal seismic accelerations, strain localization in the backfill soil, and the associated post-peak reduction in shear resistance from peak to residual values along a previously formed failure plane. Four failure modes, viz. sliding, overturning, eccentricity, and bearing capacity of the foundation soil, are considered in the analysis. The series system reliability is computed under an assumption of independent failure modes. The lower and upper bounds of the system reliability are also computed by taking into account the correlations between the four failure modes, evaluated using the direction cosines of the tangent planes at the most probable points of failure.
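
Under the independence assumption stated above, the series-system failure probability combines the four modal probabilities as below; the elementary bounds shown hold regardless of correlation (the paper's correlation-based bounds via direction cosines are tighter):

```latex
P_{f,\mathrm{sys}} = 1 - \prod_{i=1}^{4} \left(1 - P_{f,i}\right),
\qquad
\max_{i} P_{f,i} \;\le\; P_{f,\mathrm{sys}}
\;\le\; \min\left(1,\; \sum_{i=1}^{4} P_{f,i}\right)
% i indexes the four modes: sliding, overturning, eccentricity, bearing.
```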

Relevance:

10.00%

Publisher:

Abstract:

We address the problem of speech enhancement using a risk-estimation approach. In particular, we propose the use of Stein's unbiased risk estimator (SURE) for solving the problem. The need for a suitable finite-sample risk estimator arises because the actual risks invariably depend on the unknown ground truth. We consider the popular mean-squared error (MSE) criterion first, and then compare it against the perceptually motivated Itakura-Saito (IS) distortion, by deriving unbiased estimators of the corresponding risks. We use a generalized SURE (GSURE) development, recently proposed by Eldar for the MSE. We consider dependent observation models from the exponential family with an additive noise model, and derive an unbiased estimator for the risk corresponding to the IS distortion, which is non-quadratic; this serves to address the speech enhancement problem in a more general setting. Experimental results illustrate that the IS metric is efficient in suppressing musical noise, which affects the MSE-enhanced speech; however, in terms of global signal-to-noise ratio (SNR), the minimum-MSE solution gives better results.
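
For reference, the classical SURE for the MSE criterion under additive white Gaussian noise, which the GSURE development generalizes (this is the standard Stein form, not the paper's exponential-family expression): for an observation y = x + n with n ~ N(0, sigma^2 I_N) and a denoiser f,

```latex
\widehat{R}(f) = \lVert f(y) - y \rVert^{2} - N\sigma^{2}
               + 2\sigma^{2} \, \nabla \cdot f(y),
\qquad
\mathbb{E}\big[\widehat{R}(f)\big]
  = \mathbb{E}\big[\lVert f(y) - x \rVert^{2}\big]
```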