165 results for DYNAMIC PORTFOLIO SELECTION
Abstract:
A generalized technique is proposed for modeling the effects of process variations on dynamic power by directly relating variations in process parameters to variations in the dynamic power of a digital circuit. The dynamic power of a 2-input NAND gate is characterized by mixed-mode simulations, to be used as a library element for a 65 nm gate-length technology. The proposed methodology is demonstrated on a multiplier circuit built from the NAND gate library, by characterizing its dynamic power through Monte Carlo analysis. The statistical technique of Response Surface Methodology (RSM), using Design of Experiments (DOE) and the Least Squares Method (LSM), is employed to generate a "hybrid model" for gate power that accounts for simultaneous variations in multiple process parameters. We demonstrate that our hybrid-model-based statistical design approach results in considerable savings in the power budget of low-power CMOS designs with an error of less than 1%, and reduces uncertainty by at least 6X on a normalized basis compared with worst-case design.
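The response-surface step lends itself to a compact illustration. Below is a minimal sketch, not the paper's model: it assumes a hypothetical two-parameter deviation space (gate length and threshold voltage) and a stand-in simulate_power function in place of the mixed-mode simulator, runs a full-factorial DOE, fits a second-order surface by least squares, and performs Monte Carlo on the fitted surface instead of on the circuit simulator. None of the coefficients or ranges come from the paper.

```python
# Minimal sketch (hypothetical parameters and power model): DOE + least-squares
# response surface for gate dynamic power, then Monte Carlo on the surface.
import numpy as np

rng = np.random.default_rng(0)

def simulate_power(l_gate, v_th):
    # Placeholder for a mixed-mode simulation of NAND-gate dynamic power.
    return 1.0 + 0.8 * l_gate + 0.5 * v_th + 0.3 * l_gate * v_th + 0.2 * l_gate**2

# Full-factorial DOE over normalized parameter deviations (e.g. +/- 3 sigma).
levels = np.array([-1.0, 0.0, 1.0])
doe = np.array([(l, v) for l in levels for v in levels])
y = np.array([simulate_power(l, v) for l, v in doe])

# Second-order polynomial basis: 1, L, V, L*V, L^2, V^2.
A = np.column_stack([np.ones(len(doe)), doe[:, 0], doe[:, 1],
                     doe[:, 0] * doe[:, 1], doe[:, 0]**2, doe[:, 1]**2])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

# Monte Carlo on the fitted surface.
samples = rng.normal(0.0, 1.0 / 3.0, size=(20_000, 2))
L, V = samples[:, 0], samples[:, 1]
design = np.column_stack([np.ones_like(L), L, V, L * V, L**2, V**2])
power = design @ coeffs
print(f"mean power {power.mean():.4f}, std {power.std():.4f}")
```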
Abstract:
A methodology is presented for reliability-based optimum design of reinforced soil structures subjected to horizontal and vertical sinusoidal excitation, based on the pseudo-dynamic approach. The tensile strength of reinforcement required to maintain stability is computed using a logarithmic-spiral failure mechanism. The backfill soil properties and the geometric and strength properties of the reinforcement are treated as random variables. The effects of parameters such as soil friction angle, horizontal and vertical seismic accelerations, shear and primary wave velocities, and amplification factors for seismic acceleration on the component and system probabilities of failure, in relation to the tension and pullout capacities of the reinforcement, are discussed. To assess the validity of the present formulation, static and seismic reinforcement force coefficients computed by the present method are compared with those given by other authors. The importance of the shear wave velocity in estimating the reliability of the structure is highlighted. Ditlevsen's bounds on the system probability of failure are also computed by taking into account the correlations between three failure modes, which are evaluated using the direction cosines of the tangent planes at the most probable points of failure.
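The Ditlevsen bounds mentioned at the end admit a short numerical sketch. The snippet below assumes hypothetical reliability indices and mode correlations (in the paper these come from the pseudo-dynamic analysis and the direction cosines at the most probable failure points) and evaluates the second-order bounds under a bivariate normal assumption for each pair of failure modes.

```python
# Minimal sketch (illustrative numbers, not the paper's formulation):
# Ditlevsen's second-order bounds on the system probability of failure.
import numpy as np
from scipy.stats import norm, multivariate_normal

beta = np.array([2.5, 2.8, 3.1])           # hypothetical reliability indices
rho = np.array([[1.0, 0.6, 0.4],
                [0.6, 1.0, 0.5],
                [0.4, 0.5, 1.0]])           # hypothetical mode correlations

p = norm.cdf(-beta)                         # individual failure probabilities

def joint_failure(i, j):
    # P(mode i fails AND mode j fails) under a bivariate normal assumption.
    cov = [[1.0, rho[i, j]], [rho[i, j], 1.0]]
    return multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf([-beta[i], -beta[j]])

n = len(beta)
pij = np.array([[joint_failure(i, j) for j in range(n)] for i in range(n)])

lower = p[0] + sum(max(p[i] - pij[i, :i].sum(), 0.0) for i in range(1, n))
upper = p.sum() - sum(pij[i, :i].max() for i in range(1, n))
print(f"Ditlevsen bounds on system failure probability: [{lower:.2e}, {upper:.2e}]")
```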
Abstract:
Folded Dynamic Programming (FDP) is adopted for developing optimal reservoir operation policies for flood control. It is applied to a case study of Hirakud Reservoir in the Mahanadi basin, India, with the objective of deriving an optimal policy for flood control. The river flows down to Naraj, the head of the delta where a major city is located, and finally joins the Bay of Bengal. As Hirakud reservoir lies upstream of the delta area in the basin, it plays an important role in alleviating the severity of floods in this area. Data from 68 floods, such as the peak of the inflow hydrograph, the peak outflow from the reservoir during each flood, the peak of the flow hydrograph at Naraj, and the downstream (d/s) catchment contribution, are utilized. The combinations of 51, 54, and 57 thousand cumecs as peak inflow into the reservoir with 25.5, 20, and 14 thousand cumecs, respectively, as peak d/s catchment contribution form the critical combinations for the flood situation. It is observed that the combination of 57 thousand cumecs of inflow into the reservoir and 14 thousand cumecs of d/s catchment contribution is the most critical among the critical combinations of the flow series. The proposed method can be extended to similar situations for deriving reservoir operating policies for flood control.
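As an illustration of the underlying dynamic-programming idea (a plain backward DP, not the folded variant itself), the sketch below discretizes the reservoir storage, and at each period chooses releases that minimize the peak flow at the downstream control point (release plus d/s catchment contribution). All flow and storage numbers are hypothetical placeholders.

```python
# Minimal sketch (illustrative only): backward DP over discretized storage
# states minimizing the peak downstream flow over a short flood hydrograph.
import numpy as np

inflow = np.array([20.0, 35.0, 57.0, 45.0, 30.0])   # reservoir inflow (10^3 cumecs)
ds_flow = np.array([5.0, 9.0, 14.0, 10.0, 6.0])     # d/s catchment contribution
storages = np.linspace(0.0, 60.0, 61)               # discretized storage states
releases = np.linspace(0.0, 40.0, 41)               # candidate releases per period

T, S = len(inflow), len(storages)
cost = np.zeros(S)                                   # cost-to-go at horizon end
for t in reversed(range(T)):
    new_cost = np.full(S, np.inf)
    for si, s in enumerate(storages):
        for r in releases:
            s_next = s + inflow[t] - r
            if not (storages[0] <= s_next <= storages[-1]):
                continue                              # storage bound violated
            ni = int(round(s_next))                   # nearest storage state
            peak = max(r + ds_flow[t], cost[ni])      # running peak at the control point
            new_cost[si] = min(new_cost[si], peak)
    cost = new_cost

print(f"minimum achievable peak d/s flow from a half-full start: {cost[30]:.1f}")
```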
Abstract:
Relay selection for cooperative communications promises significant performance improvements and is therefore attracting considerable attention. While several criteria have been proposed for selecting one or more relays, distributed mechanisms that perform the selection have received relatively less attention. In this paper, we develop a novel, yet simple, asymptotic analysis of a splitting-based multiple-access selection algorithm to find the single best relay. The analysis leads to simpler and alternate expressions for the average number of slots required to find the best user. By introducing a new 'contention load' parameter, the analysis shows that the parameter settings used in the existing literature can be improved upon. New and simple bounds are also derived. Furthermore, we propose a new algorithm that addresses the general problem of selecting the best Q >= 1 relays, and analyze and optimize it. Even for a large number of relays, the scalable algorithm selects the best two relays within 4.406 slots and the best three within 6.491 slots, on average. We also propose a new and simple scheme for the practically relevant case of discrete metrics. Altogether, our results develop a unifying perspective on the general problem of distributed selection in cooperative systems and several other multi-node systems.
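A basic splitting-based selection of the single best relay can be sketched as follows; the feedback rules follow the standard idle/success/collision splitting structure, and the contention load parameter and metric distribution are illustrative rather than the paper's optimized settings. The 4.406- and 6.491-slot figures quoted above refer to the paper's multi-relay algorithm, which this single-relay sketch does not reproduce.

```python
# Minimal sketch (illustrative): splitting-based contention to find the relay
# with the largest metric; metrics are mapped to Uniform(0,1).
import numpy as np

rng = np.random.default_rng(1)

def splitting_select(metrics, load=1.0, max_slots=100):
    n = len(metrics)
    lo, hi = 1.0 - load / n, 1.0      # initial interval: ~`load` expected contenders
    h_min, collided = 0.0, False      # best metric is known to exceed h_min
    for slot in range(1, max_slots + 1):
        contenders = np.sum((metrics > lo) & (metrics <= hi))
        if contenders == 1:
            return slot               # single transmission: best relay resolved
        if contenders == 0:           # idle slot: move the interval down
            if collided:
                hi, lo = lo, (h_min + lo) / 2.0
            else:
                hi, lo = lo, max(lo - load / n, 0.0)
        else:                         # collision: split the interval
            collided, h_min = True, lo
            lo = (lo + hi) / 2.0
    return max_slots

slots = [splitting_select(rng.uniform(size=50)) for _ in range(2000)]
print(f"average slots to select the best of 50 relays: {np.mean(slots):.2f}")
```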
Abstract:
REDEFINE is a reconfigurable SoC architecture that provides a unique platform for high-performance, low-power computing by exploiting the synergistic interaction between a coarse-grain dynamic dataflow model of computation (to expose abundant parallelism in applications) and runtime composition of efficient compute structures (on the reconfigurable computation resources). We propose and study the throttling of execution in REDEFINE to maximize architectural efficiency. A feature-specific, fast hybrid (mixed-level) simulation framework for early design-phase studies is developed and implemented to make the huge design space exploration practical. We carry out performance modeling in terms of selecting important performance criteria and ranking the explored throttling schemes, and we investigate the effectiveness of the design space exploration using statistical hypothesis testing. We find throttling schemes that simultaneously give an appreciable (24.8%) overall performance gain in the architecture and a 37% resource usage gain in the throttling unit.
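The hypothesis-testing step can be illustrated with a small sketch: rank candidate throttling schemes by mean gain across simulation runs and test whether the top scheme's advantage over the runner-up is statistically significant. The scheme names, numbers, and the use of Welch's t-test below are placeholders; the paper's actual schemes, metrics, and test may differ.

```python
# Minimal sketch (made-up data): ranking throttling schemes and checking
# significance of the top scheme's advantage with Welch's t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
runs = {                                    # performance gain (%) per simulation run
    "scheme_A": rng.normal(24.8, 2.0, size=20),
    "scheme_B": rng.normal(21.5, 2.5, size=20),
    "scheme_C": rng.normal(18.0, 3.0, size=20),
}

ranked = sorted(runs, key=lambda k: runs[k].mean(), reverse=True)
best, second = ranked[0], ranked[1]
t_stat, p_value = stats.ttest_ind(runs[best], runs[second], equal_var=False)
print(f"best scheme: {best}; advantage over {second} significant? p = {p_value:.3g}")
```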
Abstract:
The motivation behind the fusion of Intrusion Detection Systems is the realization that, with increasing traffic and increasingly complex attacks, no present-day stand-alone Intrusion Detection System can meet the demand for a very high detection rate together with an extremely low false positive rate. Multi-sensor fusion can meet these requirements by refining the combined response of different Intrusion Detection Systems. In this paper, we show how to design sensor fusion to best utilize the useful responses from multiple sensors by an appropriate adjustment of the fusion threshold. The threshold is generally chosen based on past experience or by an expert system. Here we show that choosing the threshold bounds according to the Chebyshev inequality performs better. This approach also helps to solve the problem of scalability and has the advantage of fail-safe capability. The paper theoretically models the fusion of Intrusion Detection Systems to prove the improvement in performance, supplemented with empirical evaluation. The combination of complementary sensors is shown to detect more attacks than the individual components. Since the individual sensors chosen detect sufficiently different attacks, their results can be merged for improved performance. The combination is done in different ways: (i) taking all the alarms from each system and removing duplicates, (ii) taking alarms from each system by fixing threshold bounds, and (iii) rule-based fusion with a priori knowledge of the individual sensor performance. A number of evaluation metrics are used, and the results indicate an overall enhancement in the performance of the combined detector using sensor fusion with the threshold bounds, and significantly better performance using simple rule-based fusion.
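The Chebyshev-based choice of threshold admits a short numerical sketch. Assuming a fused score formed as a weighted sum of per-sensor alarm scores (the weights and score distributions below are placeholders), the threshold is set so that the distribution-free Chebyshev bound caps the benign-traffic exceedance probability at a target false-positive rate; the bound is conservative, so the observed rate is typically well below the target.

```python
# Minimal sketch (illustrative): Chebyshev-based fusion threshold.
# P(score >= mu + k*sigma) <= 1/k**2 for any distribution, so a target
# false-positive bound alpha gives k = 1/sqrt(alpha).
import numpy as np

rng = np.random.default_rng(3)

weights = np.array([0.5, 0.3, 0.2])                  # hypothetical sensor weights
benign_scores = rng.beta(2, 8, size=(10_000, 3))     # stand-in sensor outputs on benign traffic
fused = benign_scores @ weights

mu, sigma = fused.mean(), fused.std()
alpha = 0.01                                         # tolerated false-positive rate
threshold = mu + sigma / np.sqrt(alpha)

observed_fp = np.mean(fused >= threshold)
print(f"Chebyshev threshold {threshold:.3f}; benign exceedance {observed_fp:.4f} <= {alpha}")
```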
Abstract:
In this paper, we propose a self-adaptive migration model for Genetic Algorithms, in which the population size, the number of crossover points, and the mutation rate for each population are set adaptively. Further, the migration of individuals between populations is decided dynamically. The paper gives a mathematical schema analysis of the method, showing that the algorithm exploits previously discovered knowledge for a more focused and concentrated search of heuristically high-yielding regions while simultaneously performing a highly explorative search in the other regions of the search space. The effective performance of the algorithm is then shown on standard testbed functions, in comparison with the Island Model GA (IGA) and the Simple GA (SGA).
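A minimal two-population sketch of the adaptive-migration idea is given below: the mutation rate is adapted from population diversity, and the best individual migrates when a population stagnates. This is not the paper's model (which also adapts the population size and the number of crossover points); the testbed function and all constants are placeholders.

```python
# Minimal sketch (illustrative): two populations with adaptive mutation and
# stagnation-triggered migration, minimizing the sphere function.
import numpy as np

rng = np.random.default_rng(4)
DIM = 10

def fitness(x):                      # sphere function: maximum 0 at the origin
    return -np.sum(x**2)

def evolve(pop, mut_rate):
    fit = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(fit)[-len(pop) // 2:]]           # truncation selection
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, DIM)                             # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child += rng.normal(0, mut_rate, DIM) * (rng.random(DIM) < 0.2)
        children.append(child)
    return np.array(children)

pops = [rng.uniform(-5, 5, size=(30, DIM)) for _ in range(2)]
best_prev = [-np.inf, -np.inf]
for gen in range(200):
    for i in range(2):
        diversity = pops[i].std(axis=0).mean()
        mut_rate = max(0.01, 0.5 * diversity)                  # adaptive mutation rate
        pops[i] = evolve(pops[i], mut_rate)
        best = max(fitness(ind) for ind in pops[i])
        if best <= best_prev[i] + 1e-9:                        # stagnation: migrate
            donor = pops[1 - i]
            elite = donor[np.argmax([fitness(ind) for ind in donor])]
            pops[i][rng.integers(len(pops[i]))] = elite
        best_prev[i] = best

print("best fitness found:", max(max(fitness(ind) for ind in p) for p in pops))
```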
Abstract:
Guo and Nixon proposed a feature selection method based on maximizing I(x; Y), the multidimensional mutual information between the feature vector x and the class variable Y. Because computing I(x; Y) can be difficult in practice, Guo and Nixon proposed an approximation of I(x; Y) as the criterion for feature selection. We show that Guo and Nixon's criterion originates from approximating the joint probability distributions in I(x; Y) by second-order product distributions. We remark on the limitations of the approximation and discuss computationally attractive alternatives for computing I(x; Y).
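For discrete features, I(x; Y) can be estimated directly from joint frequency counts; this is the quantity that the second-order approximation targets, and the direct estimate is exactly what becomes impractical as the number of features grows. The sketch below computes it on synthetic three-feature data.

```python
# Minimal sketch: I(x; Y) = H(x) + H(Y) - H(x, Y) from empirical counts on
# discrete-valued features (synthetic data).
import numpy as np
from collections import Counter

rng = np.random.default_rng(5)

def joint_entropy(labels):
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def mutual_information(X, y):
    x_keys = [tuple(row) for row in X]
    xy_keys = [key + (int(lab),) for key, lab in zip(x_keys, y)]
    return joint_entropy(x_keys) + joint_entropy(list(y)) - joint_entropy(xy_keys)

# Synthetic example: the class is the parity of the first two binary features,
# so I(x; Y) should be close to 1 bit.
X = rng.integers(0, 2, size=(5000, 3))
y = X[:, 0] ^ X[:, 1]
print(f"I(x; Y) ~ {mutual_information(X, y):.3f} bits")
```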
Abstract:
The viscosity of five binary gas mixtures - namely, oxygen-hydrogen, oxygen-nitrogen, oxygen-carbon dioxide, carbon dioxide-nitrogen, carbon dioxide-hydrogen - and two ternary mixtures - oxygen-nitrogen-carbon dioxide and oxygen-hydrogen-carbon dioxide - were determined at ambient temperature and pressure using an oscillating disk viscometer. The theoretical expressions of several investigators were in good agreement with the experimental results obtained with this viscometer. In the case of the ternary gas mixture oxygen-carbon dioxide-nitrogen, as long as the volumetric ratio of oxygen to carbon dioxide in the mixture was maintained at 11 to 8, the viscosity of the ternary mixture at ambient temperature and pressure remained constant irrespective of the percentage of nitrogen present in the mixture.
Abstract:
We develop an alternate characterization of the statistical distribution of the inter-cell interference power observed in the uplink of CDMA systems. We show that the lognormal distribution better matches the cumulative distribution and complementary cumulative distribution functions of the uplink interference than the conventionally assumed Gaussian distribution and variants based on it. This is in spite of the fact that many users together contribute to uplink interference, with the number of users and their locations both being random. Our observations hold even in the presence of power control and cell selection, which have hitherto been used to justify the Gaussian distribution approximation. The parameters of the lognormal are obtained by matching moments, for which detailed analytical expressions that incorporate wireless propagation, cellular layout, power control, and cell selection parameters are developed. The moment-matched lognormal model, while not perfect, is an order of magnitude better in modeling the interference power distribution.
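The moment-matching step has a closed form: given the mean m and variance v of the interference power, the lognormal parameters are sigma^2 = ln(1 + v/m^2) and mu = ln(m) - sigma^2/2. A minimal sketch, using simulated stand-in interference samples in place of the paper's analytical moment expressions:

```python
# Minimal sketch: moment matching of a lognormal to the first two moments of
# an aggregate interference power (stand-in samples, not the paper's model).
import numpy as np

rng = np.random.default_rng(6)

# Aggregate interference: sum of per-user lognormal powers over a random user count.
samples = np.array([rng.lognormal(0.0, 1.0, rng.integers(5, 15)).sum()
                    for _ in range(20_000)])
m, v = samples.mean(), samples.var()

# Lognormal LN(mu, sigma^2) with matching mean and variance.
sigma2 = np.log(1.0 + v / m**2)
mu = np.log(m) - 0.5 * sigma2
print(f"moment-matched lognormal: mu = {mu:.3f}, sigma = {np.sqrt(sigma2):.3f}")
```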
Abstract:
We propose and demonstrate a dynamic point spread function (PSF) for single- and multiphoton fluorescence microscopy. The goal is to generate a PSF whose shape and size can be tuned from highly localized to elongated, thereby allowing shallow-to-deep excitation during active imaging. The PSF is obtained by using a specially designed spatial filter and dynamically altering the filter parameters. We anticipate potential applications in nanobioimaging and fluorescence microscopy.
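A scalar, paraxial sketch of the localized-to-elongated behaviour is given below, using an annular pupil filter as a stand-in for the specially designed spatial filter: the on-axis intensity is computed from a Debye-type integral, and increasing the inner radius of the annulus stretches the axial extent of the PSF. The actual filter design and imaging conditions in the paper may differ.

```python
# Minimal sketch (scalar, paraxial): on-axis PSF intensity for an annular pupil,
# I(u) proportional to |integral_eps^1 exp(i*u*rho^2/2) * rho d rho|^2.
import numpy as np

def on_axis_intensity(u, eps):
    rho = np.linspace(eps, 1.0, 2000)
    integrand = np.exp(1j * u * rho**2 / 2.0) * rho
    field = np.sum(integrand) * (rho[1] - rho[0])    # simple Riemann sum
    return np.abs(field) ** 2

u = np.linspace(-150.0, 150.0, 3001)                 # normalized axial coordinate
for eps in (0.0, 0.6, 0.9):                          # filter parameter: annulus inner radius
    profile = np.array([on_axis_intensity(ui, eps) for ui in u])
    profile /= profile.max()
    half = u[profile >= 0.5]
    fwhm = half.max() - half.min()                   # axial full width at half maximum
    print(f"inner radius {eps:.1f}: axial FWHM ~ {fwhm:.1f} (normalized units)")
```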
Abstract:
In this work, a single-edge notched tension (SEN(T)) plate subjected to a tensile stress pulse is analysed using a 2D plane-strain dynamic finite element procedure. The interaction of the notch with a pre-nucleated hole ahead of it is examined. The background material is modelled by the Gurson constitutive law, and ductile failure by microvoid coalescence in the ligament connecting the notch and the hole is simulated. Both rate-independent and rate-dependent material behaviour are considered. The notch-tip region is subjected to a range of loading rates by varying the peak value and the rise time of the applied stress pulse. The results obtained from these simulations are compared with those for a three-point bend (TPB) specimen subjected to impact loading analysed in an earlier work [3]. The variation of J at fracture initiation, J(c), with the average loading rate is obtained from the finite element simulations. It is found that the functional relationship between J(c) and the loading rate is fairly independent of the specimen geometry and depends only on the material behaviour.
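The yield condition underlying the constitutive model can be written out explicitly. The sketch below evaluates the common Gurson-Tvergaard-Needleman form, Phi = (sigma_eq/sigma_y)^2 + 2*q1*f*cosh(1.5*q2*sigma_m/sigma_y) - 1 - (q1*f)^2, for a hypothetical triaxial stress state to show how the void volume fraction f degrades the effective yield surface; the parameter values are placeholders, not those used in the simulations.

```python
# Minimal sketch (illustrative parameters): Gurson-Tvergaard-Needleman yield
# function; Phi >= 0 indicates plastic flow of the porous material.
import numpy as np

def gtn_yield(sigma_eq, sigma_m, sigma_y, f, q1=1.5, q2=1.0):
    return ((sigma_eq / sigma_y) ** 2
            + 2.0 * q1 * f * np.cosh(1.5 * q2 * sigma_m / sigma_y)
            - 1.0 - (q1 * f) ** 2)

sigma_y = 400.0                      # MPa, hypothetical matrix yield stress
sigma_m = 400.0                      # MPa, hypothetical hydrostatic stress near the notch
for f in (0.0, 0.02, 0.05, 0.10):    # increasing void volume fraction
    seq = np.linspace(0.0, sigma_y, 4001)
    phi = gtn_yield(seq, sigma_m, sigma_y, f)
    yield_eq = seq[np.argmin(np.abs(phi))]   # equivalent stress where Phi crosses zero
    print(f"f = {f:.2f}: effective yield at sigma_eq ~ {yield_eq:.0f} MPa")
```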
Abstract:
The growth of characteristic length scales associated with dynamic heterogeneity in glass-forming liquids is investigated in an extensive computational study of a four-point, time-dependent structure factor defined from spatial correlations of mobility, for a model liquid with system sizes extending up to 351 232 particles, in constant-energy and constant-temperature ensembles. Our estimates for dynamic correlation lengths and susceptibilities are consistent with previous results from finite-size scaling. We find scaling exponents that are inconsistent with the predictions of inhomogeneous mode-coupling theory and with a recent simulation study that reported confirmation of these predictions.
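The q -> 0 limit of the four-point structure factor, the dynamic susceptibility chi_4(t) = N (<Q(t)^2> - <Q(t)>^2) with Q(t) the fraction of particles whose displacement over time t stays below a cutoff, can be sketched on synthetic mobility data as below. The "correlation level" knob is only a crude stand-in for growing dynamic heterogeneity, not the model liquid studied in the paper.

```python
# Minimal sketch (synthetic data): chi_4 from fluctuations of the overlap Q
# across independent realizations.
import numpy as np

rng = np.random.default_rng(7)
N, runs, a = 1000, 200, 0.3            # particles per config, realizations, overlap cutoff

for corr in (0.0, 0.5, 0.9):           # crude knob for dynamic correlation strength
    Q = []
    for _ in range(runs):
        # Correlated mobility: a realization-wide "slow bias" plus per-particle noise.
        slow_bias = rng.normal(0.0, corr * 0.1)
        sigma = max(0.05, 0.25 + slow_bias)
        dr = rng.normal(0.0, sigma, size=(N, 3))           # displacements over time t
        Q.append(np.mean(np.linalg.norm(dr, axis=1) < a))  # overlap: fraction of slow particles
    Q = np.asarray(Q)
    chi4 = N * (np.mean(Q**2) - np.mean(Q)**2)
    print(f"correlation level {corr:.1f}: chi_4 ~ {chi4:.1f}")
```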