27 results for Agosti, Orlando Ramón

at Indian Institute of Science - Bangalore - India


Relevance: 10.00%

Abstract:

This paper proposes a multilevel inverter that produces a hexagonal voltage space vector structure in the lower modulation region and a 12-sided polygonal space vector structure in the over-modulation region. A conventional multilevel inverter produces 6n +/- 1 (n odd) harmonics in the phase voltage during over-modulation and in the extreme square-wave mode of operation. The proposed inverter, in contrast, produces 12-sided polygonal space vector locations, eliminating the 6n +/- 1 (n odd) harmonics in the over-modulation region all the way to the final 12-step mode of operation. The inverter consists of three conventional cascaded two-level inverters with asymmetric dc bus voltages. The switching frequency of the individual inverters is kept low over the entire modulation range. A hexagonal space-phasor-based PWM scheme is used in the low-speed region, and the 12-sided polygonal voltage space vector structure is used in the higher modulation region. Experimental results presented in this paper show that the proposed converter is suitable for high-power applications because of its low harmonic distortion and low switching losses.
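As a quick numerical illustration of the harmonic structure described above (not taken from the paper), the sketch below compares idealized 6-step and 12-step staircase waveforms, each a sine sampled and held N times per cycle; the construction and amplitudes are assumptions for illustration, but the harmonic orders match the claim: the 6-step waveform carries 5th and 7th harmonics (6n +/- 1, n odd), while the 12-step waveform retains only 12n +/- 1 orders.

```python
import numpy as np

def staircase(steps, samples=4800):
    """One period of a sine sampled and held `steps` times per cycle."""
    t = np.arange(samples) / samples
    return np.sin(2 * np.pi * (np.floor(t * steps) + 0.5) / steps)

def harmonic_amplitudes(wave, orders):
    """Magnitudes of selected harmonics, normalized to the fundamental."""
    spectrum = np.abs(np.fft.rfft(wave))
    return {k: round(float(spectrum[k] / spectrum[1]), 3) for k in orders}

orders = (5, 7, 11, 13)
print("6-step :", harmonic_amplitudes(staircase(6), orders))
print("12-step:", harmonic_amplitudes(staircase(12), orders))
# Expected: 5th and 7th harmonics present in the 6-step waveform but
# essentially zero in the 12-step waveform, which keeps only 12n +/- 1 orders.
```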

Relevance: 10.00%

Abstract:

In this paper we propose a novel family of kernels for multivariate time-series classification problems. Each time series is approximated by a linear combination of piecewise polynomial functions in a Reproducing Kernel Hilbert Space using a novel kernel interpolation technique. Using the associated kernel function, a large-margin classification formulation is proposed that can discriminate between two classes. The formulation leads to kernels between two multivariate time series that can be computed efficiently. The kernels have been successfully applied to writer-independent handwritten character recognition.
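The following is a minimal sketch of the general idea, not the paper's kernel-interpolation construction: each multivariate time series is summarized by per-segment polynomial least-squares coefficients, a Gaussian kernel is computed on those representations, and the precomputed kernel is fed to a large-margin classifier. The segment count, polynomial degree, kernel width and the synthetic two-class data are all assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVC

def piecewise_poly_features(series, segments=4, degree=3):
    """Fit each dimension of a (T, d) series with per-segment polynomials
    and return the stacked coefficients as a fixed-length feature vector."""
    T, d = series.shape
    bounds = np.linspace(0, T, segments + 1, dtype=int)
    coeffs = []
    for a, b in zip(bounds[:-1], bounds[1:]):
        t = np.linspace(0.0, 1.0, b - a)
        for j in range(d):
            coeffs.append(np.polyfit(t, series[a:b, j], degree))
    return np.concatenate(coeffs)

def series_kernel(X, Y, gamma=0.1):
    """Gaussian kernel between the piecewise-polynomial representations."""
    FX = np.array([piecewise_poly_features(x) for x in X])
    FY = np.array([piecewise_poly_features(y) for y in Y])
    d2 = ((FX[:, None, :] - FY[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Toy usage with synthetic two-class data: class 0 = sine pair, class 1 = ramp pair.
rng = np.random.default_rng(0)
def sample(label):
    t = np.linspace(0, 1, 64)
    base = np.sin(2 * np.pi * t) if label == 0 else t
    return np.stack([base, base ** 2], axis=1) + 0.05 * rng.standard_normal((64, 2))

X = [sample(l) for l in (0, 1) * 20]
y = [0, 1] * 20
clf = SVC(kernel="precomputed").fit(series_kernel(X, X), y)
print("train accuracy:", clf.score(series_kernel(X, X), y))
```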

Relevance: 10.00%

Abstract:

The MIT Lincoln Laboratory IDS evaluation methodology is a practical solution for evaluating the performance of Intrusion Detection Systems, and it has contributed tremendously to research progress in that field. The DARPA IDS evaluation dataset has been criticized and is considered by many to be a very outdated dataset, unable to accommodate the latest trends in attacks. The question then naturally arises as to whether detection systems have improved beyond detecting these older attacks; if not, is it right to consider this dataset obsolete? The paper presented here tries to provide supporting facts for the use of the DARPA IDS evaluation dataset. Two commonly used signature-based IDSs, Snort and Cisco IDS, and two anomaly detectors, PHAD and ALAD, are used for this evaluation, and the results support the usefulness of the DARPA dataset for IDS evaluation.
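For reference, a minimal sketch (not the paper's evaluation code) of the two headline metrics such an evaluation reports for each detector, assuming alarms and ground-truth attacks are keyed by a common event identifier:

```python
def ids_metrics(alarms, attacks, total_benign_events):
    """Detection rate and false-positive rate for one detector on a labeled corpus.
    `alarms` and `attacks` are assumed to be sets of hashable event identifiers."""
    true_positives = len(alarms & attacks)
    false_positives = len(alarms - attacks)
    detection_rate = true_positives / len(attacks) if attacks else 0.0
    false_positive_rate = false_positives / total_benign_events
    return detection_rate, false_positive_rate

# Example with made-up identifiers and counts:
alarms = {"evt-01", "evt-07", "evt-19"}
attacks = {"evt-01", "evt-19", "evt-42"}
print(ids_metrics(alarms, attacks, total_benign_events=10_000))
```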

Relevance: 10.00%

Abstract:

The motivation behind the fusion of Intrusion Detection Systems was the realization that, with increasing traffic and increasingly complex attacks, none of the present-day stand-alone Intrusion Detection Systems can meet the demand for a very high detection rate together with an extremely low false-positive rate. Multi-sensor fusion can meet these requirements by refining the combined response of different Intrusion Detection Systems. In this paper, we show a sensor-fusion design technique that makes the best use of the responses from multiple sensors by appropriately adjusting the fusion threshold. The threshold is generally chosen from past experience or by an expert system. We show that choosing the threshold bounds according to the Chebyshev inequality performs better. This approach also helps to solve the problem of scalability and has the advantage of fail-safe capability. The paper theoretically models the fusion of Intrusion Detection Systems to prove the improvement in performance, supplemented with an empirical evaluation. The combination of complementary sensors is shown to detect more attacks than the individual components. Since the individual sensors chosen detect sufficiently different attacks, their results can be merged for improved performance. The combination is done in different ways: (i) taking all the alarms from each system and removing duplicates, (ii) taking alarms from each system subject to fixed threshold bounds, and (iii) rule-based fusion with a priori knowledge of the individual sensors' performance. A number of evaluation metrics are used, and the results indicate an overall enhancement in the performance of the combined detector when sensor fusion incorporates the threshold bounds, and significantly better performance with simple rule-based fusion.
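A minimal sketch of the threshold idea, under assumptions not stated in the abstract: treat the fused alarm score on benign traffic as a random variable with sample mean mu and standard deviation sigma; by Chebyshev's inequality, P(|X - mu| >= k*sigma) <= 1/k^2, so a threshold at mu + k*sigma with k = 1/sqrt(alpha) bounds the false-alarm probability by alpha. The weighted-sum fusion rule, the synthetic score distribution and the weights are illustrative assumptions.

```python
import numpy as np

def chebyshev_threshold(benign_scores, max_false_alarm_rate):
    """Threshold mu + k*sigma chosen so that Chebyshev's inequality
    P(|X - mu| >= k*sigma) <= 1/k**2 bounds the benign-traffic alarm
    probability by `max_false_alarm_rate`."""
    mu, sigma = np.mean(benign_scores), np.std(benign_scores)
    k = 1.0 / np.sqrt(max_false_alarm_rate)
    return mu + k * sigma

def fuse(sensor_scores, weights, threshold):
    """Weighted-sum fusion of per-sensor anomaly scores followed by thresholding."""
    fused = sum(w * s for w, s in zip(weights, sensor_scores))
    return fused >= threshold

# Toy usage: synthetic benign fused scores and one suspicious event seen by two sensors.
rng = np.random.default_rng(1)
benign = rng.exponential(scale=0.05, size=5000)
thr = chebyshev_threshold(benign, max_false_alarm_rate=0.01)
print("fusion threshold:", round(float(thr), 3))
print("alarm raised:", fuse([0.95, 0.88], [0.6, 0.4], thr))
```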

Relevance: 10.00%

Abstract:

CMPs enable the simultaneous execution of multiple applications on the same platform, sharing cache resources. Diversity in the cache access patterns of these simultaneously executing applications can trigger inter-application interference, leading to cache pollution. While a large cache can ameliorate this problem, the larger power consumption that comes with increasing cache size, amplified at sub-100 nm technologies, makes this solution prohibitive. To address power-aware cache performance, we propose a caching structure that provides the following:

1. Application-specific cache partitions defined as aggregations of caching units (molecules). The parameters of each molecule, namely size, associativity and line size, are chosen so that its power consumption and access time are optimal for the given technology.

2. Application-specific resizing of cache partitions, with variable and adaptive associativity per cache line, way size and line size.

3. A replacement policy that is transparent to the partition in terms of size and heterogeneity in associativity and line size.

Through simulation studies we establish the superiority of the molecular cache (a cache built as an aggregation of molecules), which offers a 29% power advantage over an equivalently performing traditional cache.
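The molecule/partition idea can be pictured with a small data-structure sketch; it is an illustration of the aggregation concept only, not the authors' design, and the class names, sizes and the line-interleaved routing policy are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Molecule:
    """One caching unit; size in bytes, associativity in ways, line size in bytes."""
    size: int
    associativity: int
    line_size: int

    @property
    def sets(self):
        return self.size // (self.associativity * self.line_size)

class Partition:
    """An application-specific partition built as an aggregation of molecules."""
    def __init__(self, molecules):
        self.molecules = molecules

    @property
    def capacity(self):
        return sum(m.size for m in self.molecules)

    def route(self, address):
        """Pick the molecule (and set index within it) that serves `address`.
        Line-granularity interleaving is a placeholder routing policy."""
        i = (address // self.molecules[0].line_size) % len(self.molecules)
        m = self.molecules[i]
        return i, (address // m.line_size) % m.sets

# A partition mixing a small low-latency molecule with a larger one (illustrative sizes).
p = Partition([Molecule(8 * 1024, 2, 32), Molecule(32 * 1024, 4, 64)])
print(p.capacity, p.route(0x1F2A40))
```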

Relevance: 10.00%

Abstract:

This paper presents a glowworm-metaphor-based distributed algorithm that enables a collection of minimalist mobile robots to split into subgroups, exhibit simultaneous taxis behavior towards, and rendezvous at, multiple radiation sources such as nuclear/hazardous chemical spills and fire origins in a fire calamity. The algorithm is based on a glowworm swarm optimization (GSO) technique that finds multiple optima of multimodal functions. The algorithm is in the same spirit as ant-colony optimization (ACO) algorithms, but with several significant differences. The agents in the glowworm algorithm carry a luminescence quantity called luciferin with them. Agents are thought of as glowworms that emit light whose intensity is proportional to the associated luciferin. The key feature responsible for the working of the algorithm is the use of an adaptive local-decision domain, which we use effectively to detect the multiple source locations of interest. The glowworms have a finite sensor range, which defines a hard limit on the local-decision domain used to compute their movements. Extensive simulations validate the feasibility of applying the glowworm algorithm to the problem of multiple source localization. We build a set of four wheeled robots, called glowworms, to conduct our experiments. We use a preliminary experiment to demonstrate the basic behavioral primitives that enable each glowworm to exhibit taxis behavior towards source locations, and later demonstrate a sound localization task using the set of four glowworms.
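A compact sketch of the GSO loop, luciferin update, neighbor selection within the adaptive local-decision range, and range update, applied to a multimodal stand-in for the radiation-intensity profile. The parameter values are commonly used choices from the GSO literature rather than this paper's settings, and the test function is an assumption.

```python
import numpy as np

# Multimodal test function standing in for the radiation-intensity profile;
# the peaks play the role of the sources the swarm should split over.
def J(p):
    peaks = np.array([[1.5, 1.5], [-1.5, -1.5], [1.5, -1.5]])
    return sum(np.exp(-4.0 * np.sum((p - pk) ** 2)) for pk in peaks)

def gso(n=50, iters=200, rs=1.0, rho=0.4, gamma=0.6, beta=0.08,
        nt=5, step=0.03, l0=5.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-3, 3, size=(n, 2))       # glowworm positions
    ell = np.full(n, l0)                      # luciferin levels
    rd = np.full(n, rs)                       # local-decision ranges (<= sensor range rs)
    for _ in range(iters):
        # luciferin update
        ell = (1 - rho) * ell + gamma * np.array([J(p) for p in x])
        new_x = x.copy()
        for i in range(n):
            d = np.linalg.norm(x - x[i], axis=1)
            nbrs = np.where((d < rd[i]) & (ell > ell[i]))[0]
            if len(nbrs):
                # move a small step towards a probabilistically chosen brighter neighbor
                w = ell[nbrs] - ell[i]
                j = rng.choice(nbrs, p=w / w.sum())
                new_x[i] = x[i] + step * (x[j] - x[i]) / np.linalg.norm(x[j] - x[i])
            # adaptive local-decision range update
            rd[i] = min(rs, max(0.0, rd[i] + beta * (nt - len(nbrs))))
        x = new_x
    return x

print(np.round(gso(), 2))   # final positions cluster around the three peaks
```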

Relevance: 10.00%

Abstract:

A novel detection technique to estimate the amount of chirp in fiber Bragg gratings (FBGs) is proposed. The method is based on the fact that the reflectivity at the central wavelength of the FBG reflection changes with the strain/temperature gradient (linear chirp) applied to it. A transfer matrix approach was used to vary different grating parameters (length, strength and apodization) in order to optimize the variation of reflectivity with linear chirp. The analysis is carried out for different sets of 'FBG length-refractive index strength' combinations for which the reflectivity varies linearly with linear chirp over a reasonable measurement range. This article acts as a guideline for choosing appropriate grating parameters when designing sensing apparatus based on the change in reflectivity at the central wavelength of the FBG reflection.
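A hedged sketch of the piecewise-uniform transfer-matrix calculation that such an analysis rests on: the grating is split into short uniform sections with a linearly varying local Bragg wavelength, the 2x2 coupled-mode matrices are multiplied, and the reflectivity at the central wavelength is read off as the chirp grows. The grating parameters, the neglect of the dc index change and apodization, and the chirp values are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def fbg_reflectivity(lam, L=0.01, n_eff=1.447, dn=1e-4, lam0=1550e-9,
                     chirp=0.0, sections=200):
    """Reflectivity of a (linearly chirped) FBG at probe wavelength `lam`,
    using a piecewise-uniform transfer-matrix model of coupled-mode theory."""
    dz = L / sections
    z = (np.arange(sections) + 0.5) * dz - L / 2
    lam_local = lam0 + chirp * z                 # local Bragg wavelength along z
    kappa = np.pi * dn / lam                     # ac coupling coefficient
    F = np.eye(2, dtype=complex)
    for lb in lam_local:
        sigma = 2 * np.pi * n_eff * (1 / lam - 1 / lb)      # detuning (dc term ignored)
        gamma = np.sqrt(complex(kappa ** 2 - sigma ** 2))
        s, c = np.sinh(gamma * dz), np.cosh(gamma * dz)
        F = F @ np.array([[c - 1j * (sigma / gamma) * s, -1j * (kappa / gamma) * s],
                          [1j * (kappa / gamma) * s,      c + 1j * (sigma / gamma) * s]])
    rho = F[1, 0] / F[0, 0]                      # amplitude reflection coefficient
    return abs(rho) ** 2

# Reflectivity at the central (design) wavelength as the linear chirp grows:
for chirp_nm_per_cm in (0.0, 0.5, 1.0, 2.0):
    chirp = chirp_nm_per_cm * 1e-9 / 1e-2        # convert nm/cm to a dimensionless gradient
    print(chirp_nm_per_cm, "nm/cm ->", round(fbg_reflectivity(1550e-9, chirp=chirp), 4))
```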

Relevance: 10.00%

Abstract:

A novel, highly sensitive fiber Bragg grating (FBG) strain-sensing technique using lasers locked to a relative frequency reference is proposed and analyzed theoretically. Static strain on the FBG, independent of temperature, can be measured by locking the frequency of a diode laser to the mid-reflection frequency of a matched reference FBG, which responds to temperature in the same way as the sensor FBG but is immune to the strain applied to it. The difference between the light intensities reflected from the sensor and reference FBGs (proportional to the difference between the respective pass-band gains at the diode laser frequency) is not only proportional to the relative strain between the sensor and reference FBGs but is also independent of residual servo frequency errors. The use of a relative frequency reference avoids the complexities involved in using an absolute frequency reference, making the system simple and economical. The theoretical limits for dynamic and static strain sensitivity, considering all major noise contributions, are of the order of 25 pε/√Hz and 1.2 nε/√Hz, respectively.
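A small illustrative model of the readout (not the paper's analysis): both gratings are given identical, assumed-Gaussian reflection bands, the laser sits at the mid-reflection point of the reference, and strain shifts the sensor's Bragg frequency by the typical ~0.78 x strain fractional amount; the signal is the difference of the two reflected intensities at the laser frequency. Bandwidth, peak reflectivity and line shape are assumptions.

```python
import numpy as np

C = 3e8                     # speed of light, m/s
LAM0 = 1550e-9              # Bragg wavelength of both gratings
NU0 = C / LAM0              # corresponding optical frequency
FWHM = 25e9                 # assumed reflection bandwidth, Hz (~0.2 nm)

def reflectance(nu, nu_bragg, peak=0.9):
    """Idealized Gaussian reflection band of an FBG (line shape is an assumption)."""
    sig = FWHM / 2.355
    return peak * np.exp(-0.5 * ((nu - nu_bragg) / sig) ** 2)

# Laser locked to the mid-reflection (half-maximum) point of the reference FBG.
nu_laser = NU0 - FWHM / 2

def readout(strain):
    """Intensity difference between sensor and reference reflections at the laser line.
    Positive strain lowers the sensor's Bragg frequency by ~0.78 * strain * nu."""
    nu_sensor = NU0 * (1 - 0.78 * strain)
    return reflectance(nu_laser, nu_sensor) - reflectance(nu_laser, NU0)

for eps in (0.0, 1e-6, 5e-6, 10e-6):          # strain (1e-6 = 1 microstrain)
    print(f"{eps*1e6:5.1f} ustrain -> signal {readout(eps):+.4f}")
```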

Relevance: 10.00%

Abstract:

A connectionist approach to global optimization is proposed. The approach is tested on the standard set of benchmark functions. The results obtained for large-scale problems indicate excellent scalability of the proposed approach.

Relevance: 10.00%

Abstract:

Designing and optimizing high-performance microprocessors is an increasingly difficult task due to the size and complexity of the processor design space, the high cost of detailed simulation, and the several constraints that a processor design must satisfy. In this paper, we propose the use of empirical non-linear modeling techniques to assist processor architects in making design decisions and resolving complex trade-offs. We propose a procedure for building accurate non-linear models that consists of the following steps: (i) selection of a small set of representative design points spread across the processor design space using Latin hypercube sampling, (ii) obtaining performance measures at the selected design points using detailed simulation, (iii) building non-linear models for performance using the function-approximation capabilities of radial basis function networks, and (iv) validating the models using an independently and randomly generated set of design points. We evaluate our model-building procedure by constructing non-linear performance models for programs from the SPEC CPU2000 benchmark suite with a microarchitectural design space that consists of 9 key parameters. Our results show that the models, built using a relatively small number of simulations, achieve high prediction accuracy (only 2.8% error in CPI estimates on average) across a large processor design space. Our models can potentially replace detailed simulation for common tasks such as the analysis of key microarchitectural trends or searches for optimal processor design points.
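A compact sketch of the model-building loop described above: Latin hypercube sampling of the design space, fitting a Gaussian radial basis function network, and validating on independently generated random points. The two-parameter analytic stand-in for "CPI from detailed simulation", the basis width and the sample counts are assumptions for illustration, not the paper's setup.

```python
import numpy as np

def latin_hypercube(n_points, bounds, seed=0):
    """Latin hypercube sample: one point per equal-probability slice of each parameter."""
    rng = np.random.default_rng(seed)
    d = len(bounds)
    u = (rng.permuted(np.tile(np.arange(n_points), (d, 1)), axis=1).T
         + rng.random((n_points, d))) / n_points
    lo, hi = np.array(bounds).T
    return lo + u * (hi - lo)

def fit_rbf(X, y, width=0.3):
    """Gaussian radial basis function network with centers at the training points."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2 * width ** 2))
    w = np.linalg.solve(Phi + 1e-8 * np.eye(len(X)), y)
    return lambda Xq: np.exp(-((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
                             / (2 * width ** 2)) @ w

# Analytic stand-in for "CPI from detailed simulation" over two normalized parameters.
cpi = lambda X: 1.0 + 0.5 * X[:, 0] ** 2 + 0.3 * np.sin(3 * X[:, 1])

bounds = [(0, 1), (0, 1)]                            # e.g. normalized cache size, issue width
X_train = latin_hypercube(40, bounds)                # step (i): design-point selection
model = fit_rbf(X_train, cpi(X_train))               # steps (ii)-(iii): simulate and fit
X_val = np.random.default_rng(1).random((200, 2))    # step (iv): random validation points
err = np.abs(model(X_val) - cpi(X_val)) / cpi(X_val)
print(f"mean relative error: {err.mean():.3%}")
```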