53 results for Gender in Performance

at Indian Institute of Science - Bangalore - India


Relevance:

90.00%

Publisher:

Abstract:

This study investigates the potential of a Relevance Vector Machine (RVM)-based approach to predict the ultimate capacity of laterally loaded piles in clay. RVM is a sparse approximate Bayesian kernel method that can be seen as a probabilistic version of the support vector machine. It provides much sparser regressors without compromising performance, and its kernel bases give a small but worthwhile improvement in performance. The RVM model outperforms the two other models considered on the root-mean-square-error (RMSE) and mean-absolute-error (MAE) criteria, and it also estimates the prediction variance. The results presented in this paper clearly highlight that the RVM is a robust tool for predicting the ultimate capacity of laterally loaded piles in clay.
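
The abstract does not include the dataset or the RVM formulation, so the sketch below only illustrates the two performance criteria it cites (RMSE and MAE) and the idea of a predictive variance, using scikit-learn's BayesianRidge as a stand-in for a sparse Bayesian kernel regressor; the features and targets are synthetic placeholders.

```python
# Illustrative only: BayesianRidge stands in for the paper's RVM, and the
# data below are synthetic placeholders, not the pile-load dataset.
import numpy as np
from sklearn.linear_model import BayesianRidge
from sklearn.metrics import mean_squared_error, mean_absolute_error

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(100, 3))                            # hypothetical pile/soil features
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.1, 100)    # hypothetical capacity

model = BayesianRidge().fit(X[:80], y[:80])
y_pred, y_std = model.predict(X[80:], return_std=True)          # mean and predictive std

rmse = np.sqrt(mean_squared_error(y[80:], y_pred))              # root-mean-square error
mae = mean_absolute_error(y[80:], y_pred)                       # mean absolute error
print(f"RMSE = {rmse:.3f}, MAE = {mae:.3f}, mean predictive std = {y_std.mean():.3f}")
```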

Relevance:

90.00%

Publisher:

Abstract:

Various intrusion detection systems (IDSs) reported in the literature have shown distinct preferences for detecting a certain class of attack with improved accuracy, while performing moderately on the other classes. In view of the enormous computing power available in present-day processors, deploying multiple IDSs in the same network to obtain best-of-breed solutions has been attempted earlier. The paper presented here addresses the problem of optimizing the performance of IDSs using sensor fusion with multiple sensors. The trade-off between the detection rate and false alarms with multiple sensors is highlighted, and it is illustrated that the performance of the detector is better when the fusion threshold is determined according to the Chebyshev inequality. In the proposed data-dependent decision (DD) fusion method, the performance optimization of the individual IDSs is addressed first. A neural network supervised learner is designed to determine the weights of the individual IDSs according to their reliability in detecting a certain attack. The final stage of this DD fusion architecture is a sensor fusion unit which performs the weighted aggregation in order to make an appropriate decision. The paper theoretically models the fusion of IDSs to demonstrate the improvement in performance, supplemented with empirical evaluation.
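
The abstract does not give the exact fusion rule, so the following is only a generic sketch of the two ingredients it names: reliability-based weighting of individual IDS scores, and a fusion threshold chosen from the Chebyshev inequality to bound the false-alarm probability. All numbers (weights, benign-traffic statistics, the false-alarm bound) are illustrative assumptions.

```python
import numpy as np

def chebyshev_threshold(mu, sigma, alpha):
    """Threshold t such that P(fused score >= t) <= alpha under benign traffic,
    using Chebyshev: P(|S - mu| >= k*sigma) <= 1/k**2, so k = 1/sqrt(alpha)."""
    return mu + sigma / np.sqrt(alpha)

def fuse(scores, weights, threshold):
    """Weighted aggregation of individual IDS scores followed by thresholding."""
    s = float(np.dot(weights, scores))
    return s >= threshold, s

# Hypothetical numbers: three IDSs, reliability-based weights (e.g. produced by
# a trained supervised learner), and benign-traffic fused-score statistics.
weights = np.array([0.5, 0.3, 0.2])
mu_benign, sigma_benign = 0.2, 0.05
t = chebyshev_threshold(mu_benign, sigma_benign, alpha=0.01)   # <= 1% false alarms
alarm, fused = fuse(np.array([0.9, 0.7, 0.4]), weights, t)
print(f"fused score = {fused:.2f}, threshold = {t:.2f}, alarm = {alarm}")
```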

Relevance:

90.00%

Publisher:

Abstract:

Fallibility is inherent in human cognition, so a system that monitors performance is indispensable. While behavioral evidence for such a system derives from the finding that subjects slow down after trials that are likely to produce errors, the neural and behavioral characterization that enables such control is incomplete. Here, we report a specific role for dopamine/basal ganglia in response conflict by assessing deficits in performance monitoring in patients with Parkinson's disease. To characterize this deficit, we used a modification of the oculomotor countermanding task to show that slowing of responses that generate robust response conflict, and not post-error slowing per se, is deficient in Parkinson's disease patients. Poor performance adjustment could be due either to an impaired ability to slow reaction times after conflicts or to impaired recognition of response conflict. If the latter hypothesis were true, then PD subjects should show evidence of impaired error detection/correction, which was found to be the case. These results make a strong case for impaired performance monitoring in Parkinson's patients.

Relevance:

90.00%

Publisher:

Abstract:

In this paper, we introduce an analytical technique based on queueing networks and Petri nets for the performance analysis of dataflow computations executed on the Manchester machine. The technique is also applicable to the analysis of parallel computations on multiprocessors. We characterize the parallelism in dataflow computations through a four-parameter characterization: the minimum parallelism, the maximum parallelism, the average parallelism, and the variance in parallelism. We observe through detailed investigation of our analytical models that the average parallelism is a good characterization of a dataflow computation only as long as the variance in parallelism is small; significant differences in the performance measures result when the variance in parallelism is comparable to or larger than the average parallelism.
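
A minimal sketch of the four-parameter characterization itself, computed from a parallelism profile (number of concurrently executable operations over time); the profile below is a made-up placeholder, not a measured dataflow trace.

```python
import numpy as np

profile = np.array([1, 4, 9, 16, 12, 7, 3, 1])   # hypothetical parallelism at each step

p_min = profile.min()
p_max = profile.max()
p_avg = profile.mean()
p_var = profile.var()

print(f"min = {p_min}, max = {p_max}, average = {p_avg:.2f}, variance = {p_var:.2f}")
# The abstract's observation: p_avg summarizes the computation well only
# when p_var is small relative to p_avg.
```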

Relevance:

90.00%

Publisher:

Abstract:

We consider a wireless sensor network whose main function is to detect certain infrequent alarm events and to forward alarm packets to a base station using geographical forwarding. The nodes know their locations, and they sleep-wake cycle, waking up periodically but not synchronously. In this situation, when a node has a packet to forward to the sink, there is a trade-off between how long the node waits for a suitable neighbor to wake up and the progress the packet makes towards the sink once it is forwarded to that neighbor. Hence, in choosing a relay node, we consider the problem of minimizing average delay subject to a constraint on the average progress. By constraint relaxation, we formulate this next-hop relay selection problem as a Markov decision process (MDP). The exact optimal solution, BF (Best Forward), can be found but is computationally intensive. Next, we consider a mathematically simplified model for which the optimal policy, SF (Simplified Forward), turns out to be a simple one-step-look-ahead rule. Simulations show that SF is very close in performance to BF, even for reasonably small node density. We then study the end-to-end performance of SF in comparison with two extremal policies, Max Forward (MF) and First Forward (FF), and an end-to-end delay minimizing policy proposed by Kim et al. [1]. We find that, with an appropriate choice of the one-hop average progress constraint, SF can be tuned to provide a favorable trade-off between end-to-end packet delay and the number of hops in the forwarding path.
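
The paper's SF rule comes out of an MDP, which the abstract does not spell out; the sketch below only illustrates the general shape of a one-step-look-ahead test: forward to the best neighbor seen so far if that is no worse than the expected outcome of waiting for exactly one more wake-up. The value function, parameter names and numbers are illustrative assumptions, not the paper's policy.

```python
def should_forward(best_progress, delay_so_far, expected_gain_next_wakeup,
                   mean_wait_next_wakeup, eta):
    """eta trades off delay against progress (from the relaxed constraint).
    Forward now if the immediate value >= the expected value of one more wait."""
    value_now = best_progress - eta * delay_so_far
    value_wait = (best_progress + expected_gain_next_wakeup
                  - eta * (delay_so_far + mean_wait_next_wakeup))
    return value_now >= value_wait

# Illustrative use: forward once the expected extra progress from one more
# wake-up no longer justifies the expected extra waiting time.
print(should_forward(best_progress=40.0, delay_so_far=12.0,
                     expected_gain_next_wakeup=3.0,
                     mean_wait_next_wakeup=5.0, eta=1.0))
```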

Relevance:

90.00%

Publisher:

Abstract:

One of the main disturbances in EEG signals is EMG artefacts generated by muscle movements. In the paper, the use of a linear-phase FIR digital low-pass filter with finite wordlength precision coefficients, designed using the compensation procedure, is proposed to minimise EMG artefacts in contaminated EEG signals. To make the filtering more effective, different structures are used, i.e. cascading, twicing and sharpening (apart from simple low-pass filtering) of the designed FIR filter. Modifications are proposed to the twicing and sharpening structures to regain the linear-phase characteristics that are lost in conventional twicing and sharpening operations. The efficacy of all these transformed filters in minimising EMG artefacts is studied, using SNR improvement as a performance measure for simulated signals; time plots of the signals are also compared. Studies show that the modified sharpening structure is superior in performance to all the other proposed methods. These algorithms have also been applied to a real recorded EMG-contaminated EEG signal. Comparison of the time plots, and also of the output SNR, shows that the proposed modified sharpened structure works better in minimising EMG artefacts than the other methods considered.
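
As background, a small sketch of how twicing and sharpening reshape a low-pass FIR amplitude response: A becomes 2A - A^2 (twicing) and 3A^2 - 2A^3 (sharpening). Only the amplitude effect is shown here; the delay compensation the paper adds to preserve linear phase in the cascaded structures, and its compensation design procedure, are not reproduced, and the filter below is a generic example.

```python
import numpy as np
from scipy import signal

h = signal.firwin(numtaps=51, cutoff=0.1)        # example linear-phase low-pass FIR
w, H = signal.freqz(h, worN=512)
A = np.abs(H)                                    # amplitude response

A_twice = 2 * A - A**2                           # twicing: flattens the passband
A_sharp = 3 * A**2 - 2 * A**3                    # sharpening: improves both bands

i_pass = np.argmin(np.abs(w - 0.02 * np.pi))     # a frequency well inside the passband
i_stop = np.argmin(np.abs(w - 0.50 * np.pi))     # a frequency well inside the stopband
for name, a in [("plain", A), ("twicing", A_twice), ("sharpening", A_sharp)]:
    print(f"{name:10s}: passband gain {a[i_pass]:.5f}, stopband gain {a[i_stop]:.2e}")
```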

Relevance:

90.00%

Publisher:

Abstract:

Longevity remains one of the central issues in the successful commercialization of polymer electrolyte membrane fuel cells (PEMFCs) and primarily hinges on the durability of the cathode. Incorporating gold (Au) into platinum (Pt) is known to improve both the electrocatalytic activity and the stability of the cathode relative to the pristine Pt cathodes currently used in PEMFCs. In this study, an accelerated stress test (AST) is conducted to simulate prolonged fuel-cell operating conditions by potential cycling the carbon-supported Pt-Au (Pt-Au/C) cathode. The loss in performance of the PEMFC with the Pt-Au/C cathode is found to be ~10% after 7000 accelerated potential cycles, as against ~60% for the Pt/C cathode under similar conditions. These data are in conformity with the electrochemical surface-area values. The PEMFC with the Pt-Au/C cathode can withstand more than 10,000 potential cycles with very little effect on its performance. X-ray diffraction and transmission electron microscopy studies on the catalyst before and after the AST suggest that incorporating Au with Pt helps mitigate aggregation of Pt particles during prolonged fuel-cell operation, while X-ray photoelectron spectroscopy shows that the metallic nature of Pt is retained in the Pt-Au catalyst during the AST, in contrast to Pt/C, in which a major portion of the Pt is present as oxidic platinum. Field-emission scanning electron microscopy conducted on the membrane electrode assembly before and after the AST suggests that incorporating Au with Pt also helps mitigate deformation of the catalyst layer.

Relevance:

90.00%

Publisher:

Abstract:

Earlier studies have exploited the statistical multiplexing of flows in the core of the Internet to reduce the buffer requirement in routers. Reducing the memory requirement of routers is important because it improves performance while reducing cost. In this paper, we observe that links in the core of the Internet are typically over-provisioned, and that this can be exploited to reduce the buffering requirement in routers. The small on-chip memory of a network processor (NP) can be used effectively to buffer packets during most traffic regimes. We propose a dynamic buffering strategy which buffers packets in the receive and transmit buffers of the NP when the memory requirement is low; when the buffer requirement increases due to bursts in the traffic, memory is allocated to packets in the off-chip DRAM. This scheme effectively mitigates the DRAM access bottleneck, as only a part of the traffic is stored in the DRAM. We build a Petri net model and evaluate the proposed scheme with core-Internet-like traffic. At 77% link utilization, the dynamic buffering scheme has a drop rate of just 0.65%, whereas traditional DRAM buffering has a 4.64% packet drop rate. Even at a high link utilization of 90%, which rarely happens in the core, our dynamic buffering results in a packet drop rate of only 2.17%, while supporting a throughput of 7.39 Gbps. We study the proposed scheme under different conditions to understand the provisioning of processing threads and to determine the queue length at which packets must be buffered in the DRAM. We show that the proposed dynamic buffering strategy drastically reduces the buffering requirement while still maintaining low packet drop rates.
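
A hedged sketch of the dynamic buffering idea: keep packets in the NP's small on-chip receive/transmit buffers while occupancy is low, and spill to off-chip DRAM only when a burst pushes the queue past a threshold. The threshold below and the omission of drop handling are illustrative simplifications, not the paper's provisioning values or Petri net model.

```python
from collections import deque

DRAM_SPILL_THRESHOLD = 192     # queue length at which DRAM buffering starts (assumed)

onchip, dram = deque(), deque()

def enqueue(pkt):
    if len(onchip) < DRAM_SPILL_THRESHOLD:
        onchip.append(pkt)            # common case: cheap on-chip buffering
    else:
        dram.append(pkt)              # burst: absorb the overflow in off-chip DRAM

def dequeue():
    if onchip:
        pkt = onchip.popleft()
    elif dram:
        pkt = dram.popleft()          # drain DRAM once the on-chip queue empties
    else:
        return None
    # pull waiting packets back on-chip as space frees up
    while dram and len(onchip) < DRAM_SPILL_THRESHOLD:
        onchip.append(dram.popleft())
    return pkt
```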

Relevance:

90.00%

Publisher:

Abstract:

The conventional metal oxide semiconductor field effect transistor (MOSFET) may not be suitable for future low standby power (LSTP) applications because of its high off-state current, its sub-threshold swing being theoretically limited to 60 mV/decade. The tunnel field effect transistor (TFET), based on gate-controlled band-to-band tunneling, has attracted attention for such applications owing to its extremely small sub-threshold swing (much less than 60 mV/decade). This paper takes a simulation approach to gain insight into its electrostatics and carrier transport mechanism. Using 2D device simulations, a thorough study and analysis of the electrical parameters of the planar double-gate TFET is performed. Owing to its excellent sub-threshold characteristics and reverse-biased structure, it offers orders of magnitude less leakage current than the conventional MOSFET. In this work, it is shown that the device can be scaled down to channel lengths as small as 30 nm without affecting its performance. It is also observed that the bulk region of the device plays a major role in determining the sub-threshold characteristics, and that considerable improvement in performance (in terms of the I_ON/I_OFF ratio) can be achieved if the thickness of the device is reduced. An I_ON/I_OFF ratio of 2 x 10^12 and a minimum point sub-threshold swing of 22 mV/decade are obtained.
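
A minimal sketch of how the two quoted figures of merit, the on/off current ratio and the minimum point sub-threshold swing SS = dV_G / d(log10 I_D), are extracted from a transfer characteristic. The I-V points below are synthetic placeholders, not TCAD output.

```python
import numpy as np

vg = np.linspace(0.0, 1.0, 101)                    # gate voltage sweep (V)
i_d = 1e-17 * 10 ** (vg / 0.03)                    # toy exponential turn-on (~30 mV/decade)
i_d = np.minimum(i_d, 1e-5)                        # crude on-current saturation (A)

ion_ioff = i_d.max() / i_d.min()                   # on/off current ratio

mask = i_d < 1e-8                                  # restrict to the sub-threshold region
ss = np.diff(vg[mask]) / np.diff(np.log10(i_d[mask]))   # V per decade, pointwise
ss_min_mv = 1e3 * ss.min()                         # minimum point swing in mV/decade

print(f"I_ON/I_OFF = {ion_ioff:.2e}, minimum SS = {ss_min_mv:.1f} mV/decade")
```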

Relevance:

90.00%

Publisher:

Abstract:

We consider several WLAN stations associated at rates r_1, r_2, ..., r_k with an Access Point. Each station (STA) is downloading a long file, using TCP, from a local server located on the LAN to which the Access Point (AP) is attached. We assume that a TCP ACK is produced after the reception of d packets at an STA. We model these simultaneous TCP-controlled transfers using a semi-Markov process. Our analytical approach leads to a procedure to compute the aggregate download throughput, as well as per-STA throughputs, numerically, and the results match simulations very well. © 2012 Elsevier B.V. All rights reserved.

Relevance:

90.00%

Publisher:

Abstract:

In this paper, we estimate the solution of the electromigration diffusion equation (EMDE) in isotopically pure and impure metallic single-walled carbon nanotubes (SWCNTs) by considering self-heating. The EMDE for an SWCNT is solved not only by invoking the dependence of the electromigration flux on the usual static electric field applied across its two ends, but also by considering a temperature-dependent thermal conductivity (κ), which results in a variable temperature distribution T along the length of the tube due to self-heating. By changing the length and the isotopic impurity, we demonstrate that the electromigration performance of the SWCNT deviates significantly; however, if κ is assumed to be temperature independent, the solution may lead to serious errors in the performance estimate. We further exhibit a trade-off between the length and the impurity effect on the electromigration performance. It is suggested that, to reduce the vacancy concentration in longer interconnects of a few micrometers, one should opt for an isotopically impure SWCNT at the cost of lower κ, whereas for comparatively short interconnects a pure SWCNT should be used. The trade-off presented here can be treated as a way of obtaining a fairly good estimate of the vacancy concentration and the mean time to failure in bundles of CNT-based interconnects. © 2012 IEEE.
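
The abstract does not reproduce the EMDE or its material parameters, so the following is only a hedged sketch of the variable-temperature ingredient it emphasises: an explicit finite-difference solution of a 1-D vacancy transport equation dC/dt = d/dx(D(x) dC/dx - v(x) C), where the diffusivity D and the field-driven drift velocity v both depend on an assumed parabolic self-heating profile T(x). The constants, boundary conditions and profile are illustrative placeholders, not the paper's model.

```python
import numpy as np

L, N = 2e-6, 200                                   # tube length (m), grid points
x = np.linspace(0, L, N)
dx = x[1] - x[0]

T = 300 + 150 * 4 * (x / L) * (1 - x / L)          # assumed parabolic self-heating profile (K)
kB, Ea, D0 = 8.617e-5, 0.8, 1e-6                   # eV/K, activation energy (eV), prefactor (m^2/s)
D = D0 * np.exp(-Ea / (kB * T))                    # temperature-dependent vacancy diffusivity
E, Z = 1e5, 1.0                                    # applied field (V/m), effective charge (assumed)
v = D * Z * E / (kB * T)                           # drift velocity from the electron wind

C = np.ones(N)                                     # normalized vacancy concentration
dt = 0.2 * dx**2 / D.max()                         # explicit-scheme stability limit (with margin)
for _ in range(20000):
    # fluxes at the N-1 cell interfaces; contacts treated as blocking (zero flux)
    Di, vi = 0.5 * (D[:-1] + D[1:]), 0.5 * (v[:-1] + v[1:])
    flux = -Di * np.diff(C) / dx + vi * 0.5 * (C[:-1] + C[1:])
    dC = np.empty(N)
    dC[0], dC[-1] = -flux[0] / dx, flux[-1] / dx
    dC[1:-1] = -np.diff(flux) / dx
    C += dt * dC

print(f"vacancy concentration range along the tube: {C.min():.3f} - {C.max():.3f}")
```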

Relevance:

90.00%

Publisher:

Abstract:

How do we perform rapid visual categorization? It is widely thought that categorization involves evaluating the similarity of an object to other category items, but the underlying features and similarity relations remain unknown. Here, we hypothesized that categorization performance is based on perceived similarity relations between items within and outside the category. To this end, we measured the categorization performance of human subjects on three diverse visual categories (animals, vehicles, and tools) and across three hierarchical levels (superordinate, basic, and subordinate levels among animals). For the same subjects, we measured their perceived pairwise similarities between objects using a visual search task. Regardless of category and hierarchical level, we found that the time taken to categorize an object could be predicted from its similarity to members within and outside its category. We were able to account for several classic categorization phenomena, such as (a) the longer times required to reject category membership, (b) the longer times to categorize atypical objects, and (c) differences in performance across tasks and across hierarchical levels. These categorization times were also accounted for by a model that extracts coarse structure from an image. The striking agreement observed between categorization and visual search suggests that these two disparate tasks depend on a shared coarse object representation.

Relevance:

90.00%

Publisher:

Abstract:

Solar photovoltaic power plants are ideally located in regions with high insolation levels. Photovoltaic performance is affected by high cell temperatures, soiling, mismatch and other balance-of-system losses, and it is crucial to understand the significance of each of these losses for system performance. Soiling, which depends strongly on installation conditions, is a complex performance issue to quantify accurately. The settlement of dust on panel surfaces may or may not be uniform depending on the local terrain and environmental factors such as ambient temperature, wind and rainfall. It is essential to investigate the influence of dust settlement on the operating characteristics of photovoltaic systems to better understand the losses in performance attributable to soiling. The current-voltage (I-V) characteristics of photovoltaic panels reveal extensive information to support degradation analysis of the panels. This paper attempts to understand performance losses due to dust through a dynamic study of the I-V characteristics of panels under varying soiling conditions in an outdoor experimental test-bed. Further, the results of an indoor study simulating the performance of photovoltaic panels under different dust deposition regimes are discussed. © 2014 Monto Mani. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).
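
A minimal sketch of the kind of I-V curve analysis the abstract refers to: extracting short-circuit current, open-circuit voltage, maximum power and fill factor from a measured curve, so clean and soiled panels can be compared. The sample curve below is a synthetic placeholder, not measured data from the study.

```python
import numpy as np

V = np.linspace(0, 21.0, 200)                        # panel voltage sweep (V)
I = 5.0 * (1 - np.exp((V - 21.0) / 1.2))             # toy I-V curve (A), not measured data
I = np.clip(I, 0, None)

P = V * I                                            # power at each operating point
k = np.argmax(P)                                     # maximum power point
Isc = I[0]                                           # short-circuit current
Voc = V[np.nonzero(I)[0][-1]]                        # (approximate) open-circuit voltage
FF = P[k] / (Isc * Voc)                              # fill factor

print(f"Pmax = {P[k]:.1f} W at {V[k]:.1f} V, Isc = {Isc:.2f} A, "
      f"Voc ~ {Voc:.1f} V, FF = {FF:.2f}")
```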

Relevance:

90.00%

Publisher:

Abstract:

The first objective of this paper is to show that a single-stage adsorption-based cooling-cum-desalination system cannot be used if air-cooled heat rejection is employed under tropical conditions. This objective is achieved by operating a silica gel + water adsorption chiller first in a single-stage mode and then in a two-stage mode, with two beds per stage in each case. The second objective is to improve upon the simulation results obtained earlier by empirically describing the thermal wave phenomena during switching of the beds between adsorption and desorption and vice versa. Performance indicators, namely the cooling capacity, the coefficient of performance and the desalinated water output, are extracted for various evaporator pressures and half cycle times. The improved simulation model is found to interpret the experimental results more closely than the earlier one, and the reasons for the decline in the performance indicators between the theoretical and actual scenarios are appraised. © 2015 Elsevier Ltd and IIR. All rights reserved.
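
A minimal sketch of the three performance indicators named in the abstract, computed from assumed heat flows over one half cycle; the numbers are placeholders, not the paper's measurements.

```python
half_cycle_s = 600.0          # half cycle time (s), assumed
Q_evap_kJ = 9000.0            # heat extracted in the evaporator (kJ), assumed
Q_des_kJ = 18000.0            # driving heat supplied for desorption (kJ), assumed
m_condensate_kg = 3.5         # water condensed per half cycle (kg), assumed

cooling_capacity_kW = Q_evap_kJ / half_cycle_s           # cooling capacity
cop = Q_evap_kJ / Q_des_kJ                                # coefficient of performance
water_output_kg_per_h = m_condensate_kg * 3600.0 / half_cycle_s

print(f"cooling capacity = {cooling_capacity_kW:.1f} kW, "
      f"COP = {cop:.2f}, desalinated water = {water_output_kg_per_h:.1f} kg/h")
```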

Relevance:

80.00%

Publisher:

Abstract:

The random early detection (RED) technique has seen a lot of research over the years. However, the functional relationship between RED performance and its parameters, viz., the queue weight (omega_q), marking probability (max_p), minimum threshold (min_th) and maximum threshold (max_th), is not analytically available. In this paper, we formulate a probabilistic constrained optimization problem by assuming a nonlinear relationship between the RED average queue length and its parameters. This problem involves all the RED parameters as the variables of the optimization problem, and we use the barrier and penalty function approaches for its solution. However (as above), the exact functional relationship between the barrier and penalty objective functions and the optimization variables is not known, but noisy samples of these are available for different parameter values. Thus, for obtaining the gradient and Hessian of the objective, we use certain recently developed simultaneous perturbation stochastic approximation (SPSA) based estimates of these quantities. We propose two four-timescale stochastic approximation algorithms based on certain modified second-order SPSA updates for finding the optimum RED parameters. We present the results of detailed simulation experiments conducted over different network topologies and network/traffic conditions/settings, comparing the performance of our algorithms with variants of RED and a few other well-known active queue management (AQM) techniques discussed in the literature.
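
For reference, a hedged sketch of the basic first-order SPSA gradient estimate this line of work builds on: two noisy evaluations of the objective at randomly perturbed parameter vectors yield an estimate of the full gradient. The objective, step sizes and starting point below are stand-ins; the paper's four-timescale, modified second-order SPSA updates and the barrier/penalty constraint handling are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_objective(theta):
    """Placeholder for a simulated/measured RED cost at parameters theta
    (e.g. [w_q, max_p, min_th, max_th]); only noisy samples are available."""
    return np.sum((theta - np.array([0.002, 0.1, 5.0, 15.0]))**2) + rng.normal(0, 0.01)

def spsa_gradient(theta, c=0.05):
    delta = rng.choice([-1.0, 1.0], size=theta.shape)       # Bernoulli +/-1 perturbations
    y_plus = noisy_objective(theta + c * delta)
    y_minus = noisy_objective(theta - c * delta)
    return (y_plus - y_minus) / (2 * c * delta)              # simultaneous perturbation estimate

theta = np.array([0.01, 0.5, 10.0, 30.0])                    # illustrative initial RED parameters
for k in range(1, 2001):
    a_k = 0.1 / k**0.602                                     # standard SPSA step-size decay
    theta -= a_k * spsa_gradient(theta)
print("estimated parameters:", np.round(theta, 3))
```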