448 results for tp-Kadec Norm
Abstract:
Image fusion is a formal framework that provides means and tools for combining multisensor, multitemporal, and multiresolution data. Multisource data vary in spectral, spatial, and temporal resolution, necessitating advanced analytical or numerical techniques for enhanced interpretation capabilities. This paper reviews seven pixel-based image fusion techniques: intensity-hue-saturation, Brovey, high-pass filter (HPF), high-pass modulation (HPM), principal component analysis, Fourier transform, and correspondence analysis. Validation of these techniques on IKONOS data (panchromatic band at 1 m spatial resolution and four multispectral bands at 4 m spatial resolution) reveals that the HPF and HPM methods synthesize the images closest to those the corresponding multisensors would observe at the high-resolution level.
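As a concrete illustration of the two best-performing methods, the sketch below implements additive HPF and multiplicative HPM fusion in Python with NumPy/SciPy; the box-filter size, bilinear resampling, and the 1:4 resolution ratio are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch of HPF and HPM pan-sharpening; parameters are illustrative.
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def hpf_fuse(ms, pan, ratio=4, size=5):
    """Additive HPF: inject the pan image's high frequencies into each band."""
    detail = pan - uniform_filter(pan, size=size)      # high-frequency component
    return np.stack([zoom(b, ratio, order=1) + detail for b in ms])

def hpm_fuse(ms, pan, ratio=4, size=5):
    """Multiplicative HPM: modulate each band by pan / low-pass(pan)."""
    gain = pan / np.maximum(uniform_filter(pan, size=size), 1e-6)
    return np.stack([zoom(b, ratio, order=1) * gain for b in ms])
```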
Abstract:
We present a case study of the formal verification of a full-wave rectifier for analog and mixed-signal designs. We used the Checkmate tool from CMU [1], which is a public-domain formal verification tool for hybrid systems. Due to the restrictions imposed by Checkmate, changes to the Checkmate implementation were necessary in order to handle this complex and non-linear system. The full-wave rectifier was implemented using Checkmate custom blocks and Simulink blocks from MathWorks MATLAB. After establishing the required changes in the Checkmate implementation, we are able to efficiently verify the safety properties of the full-wave rectifier.
Abstract:
In this paper, the effects of energy quantization on different single-electron transistor (SET) circuits (logic inverter, current-biased circuits, and hybrid MOS-SET circuits) are analyzed through analytical modeling and Monte Carlo simulations. It is shown that energy quantization mainly increases the Coulomb blockade area and the Coulomb blockade oscillation periodicity, and thus affects SET circuit performance. A new model for the noise margin of the SET inverter is proposed that includes the energy quantization effects. Using the noise margin as a metric, the robustness of the SET inverter is studied against the effects of energy quantization. An analytical expression is developed that explicitly defines the maximum energy quantization (termed the "quantization threshold") that an SET inverter can withstand before its noise margin falls below a specified tolerance level. The effects of energy quantization are further studied for the current-biased negative differential resistance (NDR) circuit and the hybrid SET-MOS circuit. A new model for the conductance of the NDR characteristics is also formulated that explains the energy quantization effects.
Abstract:
In this paper an attempt is made to accurately study the field distribution for various types of porcelain/ceramic insulators used for high-voltage transmission. The surface charge simulation method is employed for the field computation. Novel field-reduction electrodes are developed to reduce the maximum field around the pin region. In order to experimentally scrutinize the performance of discs with field-reduction electrodes, a special artificial-pollution test facility was built and utilized. The experimental results show a marked improvement in the pollution flashover performance of string insulators.
Abstract:
The Ball-Larus path-profiling algorithm is an efficient technique for collecting acyclic path frequencies of a program. However, longer paths - those extending across loop iterations - describe the runtime behaviour of programs better. We generalize the Ball-Larus profiling algorithm to profile k-iteration paths - paths that can span up to k iterations of a loop. We show that it is possible to number such k-iteration paths perfectly, thus allowing for an efficient profiling algorithm for such longer paths. We also describe a scheme for mixed-mode profiling: profiling different parts of a procedure with different path lengths. Experimental results show that k-iteration profiling is practical.
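For background, here is a minimal Python sketch of the classic Ball-Larus numbering on an acyclic control-flow graph, the scheme this paper generalizes to k-iteration paths; the `succ` dictionary encoding and the node names are illustrative choices.

```python
# Hedged sketch of Ball-Larus acyclic path numbering on a DAG.
def ball_larus_number(succ, entry, exit_node):
    """succ: node -> ordered successor list. Summing edge_val along any
    entry-to-exit path yields a unique id in [0, num_paths[entry))."""
    num_paths, edge_val = {}, {}
    def count(v):
        if v in num_paths:
            return num_paths[v]
        if v == exit_node:
            num_paths[v] = 1
        else:
            total = 0
            for w in succ.get(v, []):
                edge_val[(v, w)] = total   # offset = paths via earlier successors
                total += count(w)
            num_paths[v] = total
        return num_paths[v]
    count(entry)
    return num_paths, edge_val

# Example: a diamond CFG has two paths, numbered 0 (via B) and 1 (via C).
n, vals = ball_larus_number({"A": ["B", "C"], "B": ["D"], "C": ["D"]}, "A", "D")
```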
Abstract:
The StreamIt programming model has been proposed to exploit parallelism in streaming applications on general-purpose multi-core architectures. This model allows programmers to specify the structure of a program as a set of filters that act upon data and a set of communication channels between them. StreamIt graphs describe task, data, and pipeline parallelism, which can be exploited on modern Graphics Processing Units (GPUs), as they support abundant parallelism in hardware. In this paper, we describe the challenges in mapping StreamIt to GPUs and propose an efficient technique to software-pipeline the execution of stream programs on GPUs. We formulate this problem - both scheduling and assignment of filters to processors - as an efficient Integer Linear Program (ILP), which is then solved using ILP solvers. We also describe a novel buffer layout technique for GPUs which facilitates exploiting the high memory bandwidth available on GPUs. The proposed scheduling utilizes both the scalar units in the GPU, to exploit data parallelism, and the multiprocessors, to exploit task and pipeline parallelism. Further, it takes into consideration the synchronization and bandwidth limitations of GPUs, and yields speedups between 1.87X and 36.83X over a single-threaded CPU.
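As a toy illustration of the ILP framing, the sketch below assigns filters to processors while minimizing the maximum per-processor load, using the PuLP solver; the filter costs and the load-balancing objective are hypothetical stand-ins for the paper's full software-pipelining formulation, which additionally models scheduling, synchronization, and bandwidth constraints.

```python
# Hedged sketch: filter-to-processor assignment as an ILP (toy objective).
import pulp

work = {"src": 4, "fir": 9, "sink": 2}            # hypothetical filter costs
procs = range(2)

prob = pulp.LpProblem("filter_assignment", pulp.LpMinimize)
x = {(f, p): pulp.LpVariable(f"x_{f}_{p}", cat="Binary")
     for f in work for p in procs}
makespan = pulp.LpVariable("makespan", lowBound=0)

for f in work:                                     # each filter goes to one processor
    prob += pulp.lpSum(x[f, p] for p in procs) == 1
for p in procs:                                    # makespan bounds every processor load
    prob += pulp.lpSum(work[f] * x[f, p] for f in work) <= makespan

prob += makespan                                   # objective: minimize the max load
prob.solve()
```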
Abstract:
Apart from their intrinsic physical interest, spin-polarized many-body effects are expected to be important to the working of spintronic devices. A vast literature exists on the effects of a spin-unpolarized electron-hole plasma on the optical properties of a semiconductor. Here, we include the spin degree of freedom to model optical absorption of circularly polarized light by spin-polarized bulk GaAs. Our model is easy to implement and does not require elaborate numerics, since it is based on the closed-form analytical pair-equation formula that is valid in 3d. The efficacy of our approach is demonstrated by a comparison with recent experimental data.
Abstract:
This paper proposes the use of empirical modeling techniques for building microarchitecture-sensitive models for compiler optimizations. The models we build relate program performance to settings of compiler optimization flags, associated heuristics, and key microarchitectural parameters. Unlike traditional analytical modeling methods, this relationship is learned entirely from data obtained by measuring performance at a small number of carefully selected compiler/microarchitecture configurations. We evaluate three different learning techniques in this context, viz. linear regression, adaptive regression splines, and radial basis function networks. We use the generated models to a) predict program performance at arbitrary compiler/microarchitecture configurations, b) quantify the significance of complex interactions between optimizations and the microarchitecture, and c) efficiently search for 'optimal' settings of optimization flags and heuristics for any given microarchitectural configuration. Our evaluation using benchmarks from the SPEC CPU2000 suite suggests that accurate models (< 5% average error in prediction) can be generated using a reasonable number of simulations. We also find that using compiler settings prescribed by a model-based search can improve program performance by as much as 19% (with an average of 9.5%) over highly optimized binaries.
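A minimal sketch of the empirical-modeling idea, assuming scikit-learn and the simplest of the three techniques (plain linear regression): fit a model from flag and microarchitecture settings to measured performance, then query it at an unseen configuration. The feature encoding and all data points are hypothetical.

```python
# Hedged sketch: learn a performance model from measured configurations.
import numpy as np
from sklearn.linear_model import LinearRegression

# Rows: binary optimization flags + a microarchitectural parameter (cache KB).
X = np.array([[1, 0, 1, 32], [0, 1, 1, 64], [1, 1, 0, 32], [0, 0, 1, 64]])
y = np.array([1.82, 1.41, 1.95, 1.60])            # measured runtimes (hypothetical)

model = LinearRegression().fit(X, y)
pred = model.predict([[1, 1, 1, 64]])             # performance at an unseen config
```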
Abstract:
Relay selection for cooperative communications promises significant performance improvements, and is, therefore, attracting considerable attention. While several criteria have been proposed for selecting one or more relays, distributed mechanisms that perform the selection have received relatively less attention. In this paper, we develop a novel, yet simple, asymptotic analysis of a splitting-based multiple access selection algorithm to find the single best relay. The analysis leads to simpler and alternate expressions for the average number of slots required to find the best user. By introducing a new 'contention load' parameter, the analysis shows that the parameter settings used in the existing literature can be improved upon. New and simple bounds are also derived. Furthermore, we propose a new algorithm that addresses the general problem of selecting the best Q >= 1 relays, and analyze and optimize it. Even for a large number of relays, the scalable algorithm selects the best two relays within 4.406 slots and the best three within 6.491 slots, on average. We also propose a new and simple scheme for the practically relevant case of discrete metrics. Altogether, our results develop a unifying perspective about the general problem of distributed selection in cooperative systems and several other multi-node systems.
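To make the splitting idea concrete, the simulation sketch below finds the single best relay by interval halving on metrics assumed uniform on [0, 1), driven by ternary idle/success/collision feedback; this halving rule is a simple textbook variant, not the paper's optimized contention-load setting.

```python
# Hedged sketch of splitting-based selection of the relay with the largest metric.
import random

def split_select(metrics, max_slots=50):
    lo, hi, floor = 0.5, 1.0, 0.0                  # transmit window and known floor
    for _ in range(max_slots):
        txers = [i for i, m in enumerate(metrics) if lo <= m < hi]
        if len(txers) == 1:                        # success: the unique transmitter wins
            return txers[0]
        if len(txers) > 1:                         # collision: keep only the upper half
            floor, lo = lo, (lo + hi) / 2
        else:                                      # idle: reopen the half below
            hi, lo = lo, (lo + floor) / 2
    return None

best = split_select([random.random() for _ in range(20)])   # index of the best relay
```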
Abstract:
A torsional MEMS varactor with wide dynamic range, low actuation voltage, and isolation between the actuation voltage and the signal voltage was proposed in C. Venkatesh et al. (2005). In this paper we address the effects of pull-in, residual stress, and continuous cycling on the performance of the torsional MEMS varactor.
Abstract:
Modern database systems incorporate a query optimizer to identify the most efficient "query execution plan" for executing the declarative SQL queries submitted by users. A dynamic-programming-based approach is used to exhaustively enumerate the combinatorially large search space of plan alternatives and, using a cost model, to identify the optimal choice. While dynamic programming (DP) works very well for moderately complex queries with up to around a dozen base relations, it usually fails to scale beyond this stage due to its inherent exponential space and time complexity. Therefore, DP becomes practically infeasible for complex queries with a large number of base relations, such as those found in current decision-support and enterprise management applications. To address the above problem, a variety of approaches have been proposed in the literature. Some completely jettison the DP approach and resort to alternative techniques such as randomized algorithms, whereas others have retained DP by using heuristics to prune the search space to computationally manageable levels. In the latter class, a well-known strategy is "iterative dynamic programming" (IDP), wherein DP is employed bottom-up until it hits its feasibility limit, and then iteratively restarted with a significantly reduced subset of the execution plans currently under consideration. The experimental evaluation of IDP indicated that, by appropriate choice of algorithmic parameters, it was possible to almost always obtain "good" (within a factor of two of the optimal) plans, in the few remaining cases mostly "acceptable" (within an order of magnitude of the optimal) plans, and only rarely a "bad" plan. While IDP is certainly an innovative and powerful approach, we have found that there are a variety of common query frameworks wherein it can fail to consistently produce good plans, let alone the optimal choice. This is especially so when star or clique components are present, increasing the complexity of the join graphs. Worse, this shortcoming is exacerbated when the number of relations participating in the query is scaled upwards.
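For reference, the sketch below shows the exhaustive bottom-up DP over join orders that IDP truncates: it considers every partition of every relation subset, which is exactly what makes plain DP exponential in the number of relations. The cardinality-product cost model is a toy stand-in for a real optimizer's cost model.

```python
# Hedged sketch of exhaustive DP join-order enumeration (toy cost model).
from itertools import combinations
from math import prod

def dp_join_order(cards):
    """cards: relation -> cardinality. Returns (cost, plan) for the full join."""
    best = {frozenset([r]): (0, r) for r in cards}     # base relations cost nothing
    rels = list(cards)
    for size in range(2, len(rels) + 1):
        for subset in map(frozenset, combinations(rels, size)):
            out = prod(cards[r] for r in subset)        # toy output cardinality
            for k in range(1, size):
                for left in map(frozenset, combinations(subset, k)):
                    right = subset - left
                    (lc, lp), (rc, rp) = best[left], best[right]
                    cost = lc + rc + out                # inputs' work + join output
                    if subset not in best or cost < best[subset][0]:
                        best[subset] = (cost, (lp, rp))
    return best[frozenset(rels)]

cost, plan = dp_join_order({"A": 100, "B": 50, "C": 2000})
```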
Abstract:
This paper proposes a new approach wherein multiple populations are evolved on different landscapes. The problem statement is broken down to describe discrete characteristics. Each landscape, described by its fitness function, is used to optimize or amplify a certain characteristic or set of characteristics. Individuals from each of these populations are kept geographically isolated from each other, and each population is evolved individually. After a predetermined number of evolutions, the system of populations is analysed against a normalized fitness function. Depending on this score and a predefined merging scheme, the populations are merged, one at a time, while continuing evolution. Merging continues until only one final population remains. This population is then evolved, following which the resulting population will contain the optimal solution. The final resulting population will contain individuals which have been optimized against all characteristics as desired by the problem statement. Each individual population is optimized towards a local maximum; thus, when populations are merged, the effect is to produce a new population which is closer to the global maximum.
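A minimal sketch of the scheme just described, assuming real-valued individuals and mutation-only evolution; the per-characteristic fitness functions, the merge order (ascending best normalized score), and all parameters are illustrative placeholders.

```python
# Hedged sketch of isolated evolution followed by one-at-a-time merging.
import random

def evolve(pop, fitness, gens=30, mut=0.1):
    """Gaussian mutation plus truncation selection on lists of floats."""
    for _ in range(gens):
        children = [[g + random.gauss(0, mut) for g in ind] for ind in pop]
        pop = sorted(pop + children, key=fitness, reverse=True)[:len(pop)]
    return pop

def merged_optimization(populations, char_fitnesses, global_fitness):
    # Stage 1: evolve each geographically isolated population on its landscape.
    pops = [evolve(p, f) for p, f in zip(populations, char_fitnesses)]
    # Stage 2: score against the normalized global fitness, merge one at a time.
    pops.sort(key=lambda p: max(global_fitness(i) for i in p))
    merged = pops[0]
    for p in pops[1:]:
        merged = evolve(merged + p, global_fitness)     # evolution continues
    return max(merged, key=global_fitness)              # best combined individual
```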
Abstract:
Micromachined antennas are receiving great interest as carrier frequencies move higher into the frequency spectrum, due to their superior performance and amenability to integration with active devices. However, their design is cumbersome owing to the complexity of the structure. To overcome this, in this paper an iterative procedure is suggested to facilitate the fast design of micromachined patch antennas, based on a simulation study. A microstrip line on a micromachined silicon substrate is simulated in a full-wave simulator by solving for the ports only. From the obtained propagation constant, the effective dielectric constant for the micromachined substrate is estimated. The process is repeated for a number of values of the microstrip width, and a plot is made of the variation of the effective dielectric constant with the microstrip width. Then an iterative method, in combination with the extrapolated permittivity which includes the effect of cavity extensions in all directions, is used to obtain the width and the corresponding effective dielectric constant. This method has been verified to be quite accurate by comparison with full-wave simulations, and hence it can serve as a good starting point for designers of micromachined antennas.
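The iteration described above can be sketched as a simple fixed-point loop: alternate between a width estimate from a resonance formula and the effective permittivity read off the simulated eps_eff-versus-width curve. The curve data, the target frequency, and the half-wavelength design formula are hypothetical placeholders for actual full-wave simulation results.

```python
# Hedged sketch of the iterative width / effective-permittivity design loop.
import numpy as np

C0 = 3e8                                          # speed of light, m/s

widths = np.array([0.5e-3, 1e-3, 2e-3, 4e-3])     # simulated microstrip widths, m
eps_eff = np.array([2.1, 2.4, 2.8, 3.1])          # eps_eff from port-only runs (hypothetical)

f0, w = 30e9, 2e-3                                # target frequency, initial width guess
for _ in range(20):
    eps = np.interp(w, widths, eps_eff)           # read the curve at the current width
    w_new = C0 / (2 * f0 * np.sqrt(eps))          # half-wavelength resonance estimate
    if abs(w_new - w) < 1e-7:                     # converged to a consistent pair
        break
    w = w_new
```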
Abstract:
Orthogonal Frequency Division Multiplexing (OFDM) is a form of multi-carrier modulation in which the data stream is transmitted over a number of carriers that are orthogonal to each other, i.e., the carrier spacing is selected such that each carrier is located at the zeroes of all other carriers in the spectral domain. This paper proposes a novel iterative frequency-offset estimation algorithm for an OFDM system, in order to receive the OFDM data symbols error-free over the noisy channel at the receiver and to achieve frequency synchronization between the transmitter and the receiver. The performance of this algorithm has been successfully studied in AWGN, ADSL, and SUI channels.
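For context, here is a sketch of a standard cyclic-prefix correlation estimator for the fractional carrier-frequency offset, a common baseline for this problem; it is not the paper's proposed iterative algorithm, and the symbol parameters are illustrative.

```python
# Hedged sketch: fractional CFO estimation from cyclic-prefix correlation.
import numpy as np

N, CP = 64, 16                                    # subcarriers, cyclic-prefix length
eps_true = 0.12                                   # offset in subcarrier spacings

sym = np.fft.ifft(np.exp(2j * np.pi * np.random.rand(N))) * np.sqrt(N)
tx = np.concatenate([sym[-CP:], sym])             # prepend the cyclic prefix
n = np.arange(tx.size)
rx = tx * np.exp(2j * np.pi * eps_true * n / N)   # channel applies the offset

# The CP and the symbol tail are identical up to a phase of 2*pi*eps.
corr = np.vdot(rx[:CP], rx[N:N + CP])             # sum of conj(cp) * tail
eps_hat = np.angle(corr) / (2 * np.pi)            # recovered fractional offset
```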