53 results for Masculinity in performance


Relevance:

80.00%

Publisher:

Abstract:

The potential benefits of providing a geocell-reinforced sand mattress over a clay subgrade with a void have been investigated through a series of laboratory-scale model tests. The parameters varied in the test programme include the thickness of the unreinforced sand layer above the clay bed, the width and height of the geocell mattress, the relative density of the sand fill in the geocells, and the influence of an additional layer of planar geogrid placed at the base of the geocell mattress. The test results indicate that substantial improvement in performance can be obtained with the provision of a geocell mattress of adequate size over the clay subgrade with a void. In order to have a beneficial effect, the geocell mattress must extend beyond the void by at least a distance equal to the diameter of the void. The influence of the void on the performance of the footing reduces for geocell mattress heights greater than 1.8 times the diameter of the footing. Better improvement in performance is obtained for geocells filled with dense soil. (C) 2008 Elsevier Ltd. All rights reserved.

Relevance:

80.00%

Publisher:

Abstract:

This paper investigates the problem of designing reverse channel training sequences for a TDD-MIMO spatial-multiplexing system. Assuming perfect channel state information at the receiver and spatial multiplexing at the transmitter with equal power allocation to the dominant modes of the estimated channel, the pilot is designed to ensure an estimate of the channel that improves the forward link capacity. Using perturbation techniques, a lower bound on the forward link capacity is derived, with respect to which the training sequence is optimized. Thus, the reverse channel training sequence makes use of the channel knowledge at the receiver. The performance of an orthogonal training sequence with MMSE estimation at the transmitter is compared with that of the proposed training sequence. Simulation results show a significant improvement in performance.
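
The equal-power allocation over the dominant modes of an estimated channel, which the pilot design above targets, can be illustrated with a short numerical sketch. This is not the optimization procedure of the paper; the channel dimensions, SNR and number of dominant modes below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    nt, nr, r = 4, 4, 2           # transmit/receive antennas, dominant modes (assumed)
    snr = 10.0                    # total transmit SNR (assumed)

    H_hat = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)

    # Precode along the r dominant right-singular vectors of the channel estimate,
    # splitting the transmit power equally among them.
    _, _, Vh = np.linalg.svd(H_hat)
    V_r = Vh.conj().T[:, :r]
    H_eff = H_hat @ V_r

    # Forward-link rate achieved by this equal-power spatial multiplexing.
    rate = np.log2(np.linalg.det(np.eye(nr) + (snr / r) * H_eff @ H_eff.conj().T)).real
    print(f"forward-link rate: {rate:.2f} bits/s/Hz")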

Relevance:

80.00%

Publisher:

Abstract:

A new procedure for the design of sensitivity-reduced control for linear regulators is described. The control is easily computable and implementable, since it requires neither the solution of an increased-order augmented system nor the generation and feedback of a trajectory sensitivity vector. The method provides a trade-off between the reduction in the sensitivity measure and the increase in the performance index.

Relevance:

80.00%

Publisher:

Abstract:

Further improvement in performance, towards near-transparent-quality LSF quantization, is shown to be possible by using a higher order two-dimensional (2-D) prediction in the coefficient domain. The prediction is performed in a closed-loop manner so that the LSF reconstruction error is the same as the quantization error of the prediction residual. We show that an optimum 2-D predictor, exploiting both inter-frame and intra-frame correlations, performs better than existing predictive methods. A computationally efficient split vector quantization technique is used to implement the proposed 2-D prediction based method. We show further improvement in performance by using a weighted Euclidean distance.
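
The closed-loop property described above, that the reconstruction error equals the quantization error of the prediction residual, can be seen in a minimal first-order DPCM-style sketch. The predictor coefficient, the uniform quantizer and the toy signal below are illustrative assumptions, not the paper's 2-D predictor or split VQ.

    import numpy as np

    def quantize(x, step=0.05):
        """Toy uniform scalar quantizer standing in for the residual codebook."""
        return step * np.round(x / step)

    rng = np.random.default_rng(1)
    a = 0.8                                        # assumed first-order predictor coefficient
    x = np.cumsum(0.1 * rng.standard_normal(50))   # toy "LSF trajectory"

    x_hat = np.zeros_like(x)
    prev = 0.0
    for n in range(len(x)):
        pred = a * prev                  # predict from the *reconstructed* past (closed loop)
        resid = x[n] - pred
        resid_q = quantize(resid)        # quantize the prediction residual
        x_hat[n] = pred + resid_q        # reconstruction
        prev = x_hat[n]
        # reconstruction error equals the residual quantization error
        assert np.isclose(x[n] - x_hat[n], resid - resid_q)

    print("max reconstruction error:", np.max(np.abs(x - x_hat)))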

Relevance:

80.00%

Publisher:

Abstract:

This paper deals with low maximum-likelihood (ML)-decoding complexity, full-rate and full-diversity space-time block codes (STBCs), which also offer large coding gain, for the 2 transmit antenna, 2 receive antenna (2 x 2) and the 4 transmit antenna, 2 receive antenna (4 x 2) MIMO systems. Presently, the best known STBC for the 2 x 2 system is the Golden code and that for the 4 x 2 system is the DjABBA code. Following the approach by Biglieri, Hong, and Viterbo, a new STBC is presented in this paper for the 2 x 2 system. This code matches the Golden code in performance and ML-decoding complexity for square QAM constellations, while it has lower ML-decoding complexity with the same performance for non-rectangular QAM constellations. This code is also shown to be information-lossless and diversity-multiplexing gain (DMG) tradeoff optimal. This design procedure is then extended to the 4 x 2 system and a code, which outperforms the DjABBA code for QAM constellations with lower ML-decoding complexity, is presented. So far, the Golden code has been reported to have an ML-decoding complexity of the order of M^4 for square QAM of size M. In this paper, a scheme that reduces its ML-decoding complexity to M^2*sqrt(M) is presented.
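
For reference, the Golden code mentioned above encodes four QAM symbols a, b, c, d per codeword; its standard codeword matrix, as reported in the literature, is

    X = \frac{1}{\sqrt{5}}
    \begin{pmatrix}
    \alpha (a + b\theta) & \alpha (c + d\theta) \\
    i\,\bar{\alpha} (c + d\bar{\theta}) & \bar{\alpha} (a + b\bar{\theta})
    \end{pmatrix},
    \qquad
    \theta = \frac{1 + \sqrt{5}}{2}, \quad
    \bar{\theta} = \frac{1 - \sqrt{5}}{2}, \quad
    \alpha = 1 + i(1 - \theta), \quad
    \bar{\alpha} = 1 + i(1 - \bar{\theta}),

where the rows correspond to two channel uses and the columns to the two transmit antennas.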

Relevance:

80.00%

Publisher:

Abstract:

We consider the transmission of correlated Gaussian sources over orthogonal Gaussian channels. It is shown that the Amplify and Forward (AF) scheme, which simplifies the design of the encoders and the decoder, performs close to the optimal scheme even at high SNR. It also outperforms a recently proposed scalar quantizer scheme in both performance and complexity. We also study AF when there is side information at the encoders and decoder.
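
A minimal simulation sketch of the Amplify-and-Forward idea for correlated Gaussian sources over orthogonal Gaussian channels is given below; the source correlation, power and noise levels, and the joint LMMSE decoder are illustrative assumptions rather than the exact setup of the paper.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 100_000
    rho, P, N0 = 0.9, 1.0, 0.1        # source correlation, tx power, noise power (assumed)

    # Correlated unit-variance Gaussian sources observed at the two encoders.
    C_s = np.array([[1.0, rho], [rho, 1.0]])
    S = rng.multivariate_normal([0.0, 0.0], C_s, size=n).T           # shape (2, n)

    # Amplify-and-Forward: each encoder simply scales its source to meet its power budget.
    g = np.sqrt(P)                                                   # sources already unit variance
    Y = g * S + np.sqrt(N0) * rng.standard_normal((2, n))            # orthogonal AWGN channels

    # Joint LMMSE decoder: S_hat = C_sy C_y^{-1} Y
    C_sy = g * C_s
    C_y = g * g * C_s + N0 * np.eye(2)
    S_hat = C_sy @ np.linalg.solve(C_y, Y)

    mse = np.mean((S - S_hat) ** 2)
    print(f"per-source distortion: {mse:.4f}")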

Relevance:

80.00%

Publisher:

Abstract:

Pricing is an effective tool to control congestion and achieve quality of service (QoS) provisioning for multiple differentiated levels of service. In this paper, we consider the problem of pricing for congestion control in the case of a network of nodes under a single service class and multiple queues, and present a multi-layered pricing scheme. We propose an algorithm for finding the optimal state-dependent price levels for individual queues at each node. The pricing policy used depends on a weighted average queue length at each node. This helps in reducing frequent price variations and is in the spirit of the random early detection (RED) mechanism used in TCP/IP networks. In our numerical results, we observe a considerable improvement in performance using our scheme over that of a recently proposed related scheme, in terms of both throughput and delay. In particular, our approach exhibits a throughput improvement in the range of 34 to 69 percent over the above scheme in all cases studied (over all routes).
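
As a rough illustration of pricing driven by a weighted average queue length in the spirit of RED, the sketch below maintains an exponentially weighted average of the instantaneous queue length and maps it onto a few discrete price levels; the smoothing weight, thresholds, price values and toy arrival process are illustrative assumptions, not the optimized state-dependent prices of the paper.

    import random

    WEIGHT = 0.02                                   # EWMA weight (assumed, RED-style)
    THRESHOLDS = [5.0, 15.0, 30.0]                  # average-queue-length breakpoints (assumed)
    PRICES = [0.0, 0.5, 1.0, 2.0]                   # price per packet for each congestion level (assumed)

    def price_for(avg_q):
        """Map the weighted average queue length to a discrete price level."""
        level = sum(avg_q >= t for t in THRESHOLDS)
        return PRICES[level]

    random.seed(3)
    avg_q, q = 0.0, 0
    for t in range(1000):
        arrivals = random.randint(0, 3)             # toy arrival process
        departures = 1                              # unit service rate
        q = max(0, q + arrivals - departures)
        avg_q = (1 - WEIGHT) * avg_q + WEIGHT * q   # weighted average queue length
        p = price_for(avg_q)
    print(f"final average queue length {avg_q:.1f}, current price {p:.2f}")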

Relevance:

80.00%

Publisher:

Abstract:

The motivation behind the fusion of Intrusion Detection Systems was the realization that, with increasing traffic and increasing complexity of attacks, none of the present-day stand-alone Intrusion Detection Systems can meet the demand for a very high detection rate together with an extremely low false positive rate. Multi-sensor fusion can be used to meet these requirements by refining the combined response of different Intrusion Detection Systems. In this paper, we present a design technique for sensor fusion that best utilizes the useful responses from multiple sensors by an appropriate adjustment of the fusion threshold. The threshold is generally chosen according to past experience or by an expert system. In this paper, we show that choosing the threshold bounds according to the Chebyshev inequality performs better. This approach also helps to solve the problem of scalability and has the advantage of failsafe capability. This paper theoretically models the fusion of Intrusion Detection Systems for the purpose of proving the improvement in performance, supplemented with an empirical evaluation. The combination of complementary sensors is shown to detect more attacks than the individual components. Since the individual sensors chosen detect sufficiently different attacks, their results can be merged for improved performance. The combination is done in different ways: (i) taking all the alarms from each system and avoiding duplications, (ii) taking alarms from each system by fixing threshold bounds, and (iii) rule-based fusion with a priori knowledge of the individual sensor performance. A number of evaluation metrics are used, and the results indicate that there is an overall enhancement in the performance of the combined detector using sensor fusion incorporating the threshold bounds, and significantly better performance using simple rule-based fusion.
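
A minimal sketch of the Chebyshev-based threshold bound mentioned above: given the mean and variance of the fused sensor score under normal traffic, Chebyshev's inequality P(|X - mu| >= k*sigma) <= 1/k^2 bounds the false-positive rate without any distributional assumption. The score statistics and the target rate below are illustrative assumptions.

    import math

    # Assumed statistics of the fused alert score under normal (attack-free) traffic.
    mu, sigma = 2.0, 0.8
    target_fp_rate = 0.01            # desired bound on the false positive rate (assumed)

    # Chebyshev: P(X - mu >= k*sigma) <= P(|X - mu| >= k*sigma) <= 1/k^2.
    # Choosing 1/k^2 = target_fp_rate gives the threshold bound below.
    k = 1.0 / math.sqrt(target_fp_rate)
    threshold = mu + k * sigma
    print(f"raise an alarm when the fused score exceeds {threshold:.2f} (k = {k:.1f})")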

Relevance:

80.00%

Publisher:

Abstract:

Durability is central to the commercialization of polymer electrolyte fuel cells (PEFCs). The incorporation of TiO2 with platinum (Pt) improves both the stability and the catalytic activity of cathodes in relation to the pristine Pt cathodes currently used in PEFCs. PEFC cathodes comprising carbon-supported Pt-TiO2 (Pt-TiO2/C) exhibit higher durability in relation to Pt/C cathodes, as evidenced by cell polarization, impedance, and cyclic voltammetry data. The degradation in performance of the Pt-TiO2/C cathodes is 10% after 5000 test cycles as against 28% for Pt/C cathodes. These data are in conformity with the electrochemical surface area and impedance values. Pt-TiO2/C cathodes can withstand even 10,000 test cycles with nominal effect on their performance. X-ray diffraction, transmission electron microscopy, and cross-sectional field-emission scanning electron microscopy studies on the catalytic electrodes reveal that incorporating TiO2 with Pt helps in mitigating the aggregation of Pt particles and protects the Nafion membrane against peroxide radicals formed during the cathodic reduction of oxygen. (C) 2010 The Electrochemical Society. [DOI: 10.1149/1.3421970] All rights reserved.

Relevance:

80.00%

Publisher:

Abstract:

The overall performance of random early detection (RED) routers in the Internet is determined by the settings of their associated parameters. The non-availability of a functional relationship between RED performance and its parameters makes it difficult to apply optimization techniques directly to optimize the RED parameters. In this paper, we formulate a generic optimization framework using a stochastically bounded delay metric to dynamically adapt the RED parameters. The constrained optimization problem thus formulated is solved using traditional nonlinear programming techniques; here, we implement the barrier and penalty function approaches. We adopt a second-order nonlinear optimization framework and propose a novel four-timescale stochastic approximation algorithm to estimate the gradient and Hessian of the barrier and penalty objectives and to update the RED parameters. A convergence analysis of the proposed algorithm is briefly sketched. We perform simulations to evaluate the performance of our algorithm with both barrier and penalty objectives and compare these with RED and a variant of it from the literature. We observe an improvement in performance using our proposed algorithm over both RED and the above variant.
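
As a rough illustration of tuning parameters against a penalized objective with a simultaneous-perturbation gradient estimate (a much simpler, single-timescale, first-order relative of the four-timescale second-order scheme described above), consider the sketch below; the toy delay surrogate, delay bound, penalty weight and step sizes are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(4)

    def delay(theta):
        """Noisy toy surrogate for the mean delay under (normalized) RED parameters theta."""
        return 0.4 + 0.5 * np.sum((theta - np.array([0.6, 0.1])) ** 2) + 0.01 * rng.standard_normal()

    def penalized_cost(theta, bound=0.5, penalty=4.0):
        """Penalty-function form of the delay-constrained tuning problem."""
        d = delay(theta)
        return d + penalty * max(0.0, d - bound)

    theta = np.array([0.2, 0.8])     # initial normalized RED parameters (assumed)
    a, c = 0.02, 0.05                # constant step size and perturbation size (assumed)
    for _ in range(500):
        delta = rng.choice([-1.0, 1.0], size=2)                        # SPSA perturbation
        g_hat = (penalized_cost(theta + c * delta) -
                 penalized_cost(theta - c * delta)) / (2 * c) / delta  # gradient estimate
        theta = theta - a * g_hat                                      # stochastic approximation update
    print("tuned parameters:", np.round(theta, 2))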

Relevance:

80.00%

Publisher:

Abstract:

For biological experiments requiring manipulations under a microscope, it is necessary to have remote control of the manipulator. Available systems offer the required accuracy at a high cost. Passive micromanipulators are economical but are deficient in performance, the most serious defects being the inability to attenuate operator-induced vibrations and the lack of speed control. The manipulator described in this paper provides versatile remote control and may be constructed economically.

Relevance:

80.00%

Publisher:

Abstract:

Long-term deterioration in the performance of PEFCs is attributed largely to a reduction in the active area of the platinum catalyst at the cathode, usually caused by carbon-support corrosion. It is found that the use of graphitic carbon as the cathode-catalyst support enhances its long-term stability in relation to non-graphitic carbon. This is because graphitic-carbon-supported Pt (Pt/GrC) cathodes exhibit higher resistance to carbon corrosion in relation to non-graphitic-carbon-supported Pt (Pt/Non-GrC) cathodes in PEFCs during an accelerated stress test (AST), as evidenced by chronoamperometry and carbon dioxide studies. The corresponding changes in electrochemical surface area (ESA), cell performance and charge-transfer resistance are monitored through cyclic voltammetry (CV), cell polarisation and impedance measurements, respectively. The degradation in performance of the PEFC with the Pt/GrC cathode is found to be around 10% after 70 h of AST as against 77% for the Pt/Non-GrC cathode. It is noteworthy that Pt/GrC cathodes can withstand even up to 100 h of AST with nominal effect on their performance. X-ray diffraction (XRD), Raman spectroscopy, transmission electron microscopy and cross-sectional field-emission scanning electron microscopy (FE-SEM) studies before and after AST suggest less deformation in the catalyst layer and catalyst particles for Pt/GrC cathodes in relation to Pt/Non-GrC cathodes, reflecting that the graphitic carbon support resists carbon corrosion and helps in mitigating the aggregation of Pt particles.

Relevance:

80.00%

Publisher:

Abstract:

In this paper, a new parallel algorithm for nonlinear transient dynamic analysis of large structures is presented. An unconditionally stable Newmark-beta method (constant average acceleration technique) is employed for time integration. The proposed parallel algorithm has been devised within the broad framework of domain decomposition techniques. However, unlike most of the existing parallel algorithms devised for structural dynamic applications, which are basically derived using non-overlapped domains, the proposed algorithm uses overlapped domains. The parallel overlapped domain decomposition algorithm proposed in this paper has been formulated by splitting the mass, damping and stiffness matrices arising out of the finite element discretisation of a given structure. A predictor-corrector scheme has been formulated for iteratively improving the solution in each step. A computer program based on the proposed algorithm has been developed and implemented with the message passing interface as the software development environment. The PARAM-10000 MIMD parallel computer has been used to evaluate the performance. Numerical experiments have been conducted to validate as well as to evaluate the performance of the proposed parallel algorithm. Comparisons have been made with conventional non-overlapped domain decomposition algorithms. Numerical studies indicate that the proposed algorithm is superior in performance to the conventional domain decomposition algorithms. (C) 2003 Elsevier Ltd. All rights reserved.
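
The constant-average-acceleration Newmark-beta update used for time integration above can be illustrated on a single-degree-of-freedom linear oscillator; the mass, damping, stiffness, load and step size below are illustrative assumptions, and the parallel overlapped domain decomposition and predictor-corrector iterations of the paper are not reproduced here.

    # Assumed single-degree-of-freedom system: m*u'' + c*u' + k*u = f(t)
    m, c, k = 1.0, 0.1, 4.0
    dt, nsteps = 0.01, 1000
    f = lambda t: 1.0 if t < 0.5 else 0.0          # assumed step load

    beta, gamma = 0.25, 0.5                        # constant average acceleration scheme
    u, v = 0.0, 0.0
    a = (f(0.0) - c * v - k * u) / m               # initial acceleration from the equation of motion

    k_eff = k + gamma / (beta * dt) * c + m / (beta * dt**2)
    for n in range(nsteps):
        t1 = (n + 1) * dt
        rhs = (f(t1)
               + m * (u / (beta * dt**2) + v / (beta * dt) + (0.5 / beta - 1.0) * a)
               + c * (gamma / (beta * dt) * u + (gamma / beta - 1.0) * v
                      + dt * (0.5 * gamma / beta - 1.0) * a))
        u_new = rhs / k_eff                        # solve the effective stiffness equation
        a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (0.5 / beta - 1.0) * a
        v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        u, v, a = u_new, v_new, a_new
    print(f"displacement at t = {nsteps * dt:.1f} s: {u:.4f}")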

Relevance:

80.00%

Publisher:

Abstract:

Very Long Instruction Word (VLIW) architectures exploit instruction level parallelism (ILP) with the help of the compiler to achieve higher instruction throughput with minimal hardware. However, control and data dependencies between operations limit the available ILP, which not only hinders the scalability of VLIW architectures but also results in code size expansion. Although speculation and predicated execution mitigate ILP limitations due to control dependencies to a certain extent, they increase hardware cost and exacerbate code size expansion. Simultaneous multistreaming (SMS) can significantly improve operation throughput by allowing interleaved execution of operations from multiple instruction streams. In this paper we study SMS for VLIW architectures and quantify the benefits associated with it using a case study of the MPEG-2 video decoder. We also propose the notion of virtual resources for VLIW architectures, which decouple architectural resources (resources exposed to the compiler) from microarchitectural resources, to limit code size expansion. Our results for a VLIW architecture demonstrate that: (1) SMS delivers much higher throughput than that achieved by speculation and predicated execution; (2) the increase in performance due to the addition of speculation and predicated execution support over SMS averages around 12%, and this minor increase might not warrant the additional hardware complexity involved; and (3) the notion of virtual resources is very effective in reducing no-operations (NOPs) and consequently reduces code size with little or no impact on performance.