912 results for interpretative flexibility
Abstract:
This paper formulates the automatic generation control (AGC) problem as a stochastic multistage decision problem. A strategy for solving this new AGC problem formulation is presented using a reinforcement learning (RL) approach. This method of obtaining an AGC controller does not depend on any knowledge of the system model and, more importantly, admits considerable flexibility in defining the control objective. Two specific RL-based AGC algorithms are presented. The first algorithm uses the traditional control objective of limiting area control error (ACE) excursions, whereas in the second algorithm the controller restores the load-generation balance by monitoring only deviations in tie-line flows and system frequency; it does not need to know or estimate the composite ACE signal, as is done by all current approaches. The effectiveness and versatility of the approaches have been demonstrated using a two-area AGC model. (C) 2002 Elsevier Science B.V. All rights reserved.
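The abstract does not give the controllers themselves, but the tabular Q-learning update that RL approaches of this kind typically build on can be sketched as follows. All names, the state/action encoding and the reward shape are illustrative assumptions (e.g. the first algorithm might penalize |ACE|), not the paper's actual design:

```python
import random

def q_learning_step(Q, state, actions, reward_fn, next_state_fn,
                    alpha=0.1, gamma=0.95, epsilon=0.1):
    """One tabular Q-learning update: choose an action epsilon-greedily,
    observe reward and next state, and move Q toward the TD target."""
    if random.random() < epsilon:
        action = random.choice(actions)            # explore
    else:
        action = max(actions, key=lambda a: Q.get((state, a), 0.0))
    reward = reward_fn(state, action)              # e.g. -|ACE| as a control objective
    nxt = next_state_fn(state, action)
    best_next = max(Q.get((nxt, a), 0.0) for a in actions)
    Q[(state, action)] = Q.get((state, action), 0.0) + alpha * (
        reward + gamma * best_next - Q.get((state, action), 0.0))
    return action, nxt
```

The flexibility the paper highlights corresponds to the choice of `reward_fn`: the second algorithm would reward restoring load-generation balance from tie-line flow and frequency deviations alone.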
Abstract:
Full Paper: Copolyperoxides of indene with methyl acrylate, ethyl acrylate and butyl acrylate, of various compositions, have been synthesized by free-radical-initiated oxidative copolymerization. The compositions of the copolyperoxides, obtained from ¹H and ¹³C NMR spectra, have been used to determine the reactivity ratios of the monomers. The copolyperoxides contain a greater proportion of the indene units in random placement. The NMR studies have shown irregularities in the copolyperoxide chain due to cleavage reactions of the propagating peroxide radical. Thermal analysis by differential scanning calorimetry suggests alternating peroxide units in the copolyperoxide chain. From the activation energy for thermal degradation, it was inferred that degradation occurs via dissociation of the peroxide (O-O) bonds of the copolyperoxide chain. The flexibility of the polyperoxides in terms of the glass transition temperature (Tg) has also been examined.
Abstract:
The copolyperoxides of indene with methyl methacrylate and methacrylonitrile have been synthesized by the free-radical-initiated oxidative copolymerization of indene and the monomers. The compositions of copolyperoxides, obtained from ¹H and ¹³C NMR spectra, have been utilized to determine the reactivity ratios. The reactivity ratios indicate that the copolyperoxides contain a large proportion of the indene units in random placement. Thermal degradation studies of the copolyperoxides by differential scanning calorimetry and electron-impact mass spectroscopy support alternating peroxide units in the copolyperoxide chain. The energy of activation for thermal degradation suggests that the degradation is controlled by the dissociation of the peroxide (-O-O-) bonds in the copolyperoxide chain. The flexibility of copolyperoxide in terms of glass transition temperature (Tg) has also been examined. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
Fracture toughness and fracture mechanisms in Al₂O₃/Al composites are described. The unique flexibility offered by pressureless infiltration of molten Al alloys into porous alumina preforms was utilized to investigate the effect of microstructural scale and matrix properties on the fracture toughness and the shape of the crack resistance curves (R-curves). The results indicate that the observed increment in toughness is due to crack bridging by intact matrix ligaments behind the crack tip. The deformation behavior of the matrix, which is shown to be dependent on the microstructural constraints, is the key parameter that influences both the steady-state toughness and the shape of the R-curves. Previously proposed models based on crack bridging by intact ductile particles in a ceramic matrix have been modified by the inclusion of an experimentally determined plastic constraint factor (P) that determines the deformation of the ductile phase and are shown to be adequate in predicting the toughness increment in the composites. Micromechanical models to predict the crack tip profile and the bridge lengths (L) correlate well with the observed behavior and indicate that the composites can be classified as (i) short-range toughened and (ii) long-range toughened on the basis of their microstructural characteristics.
Abstract:
In data mining, an important goal is to generate an abstraction of the data. Such an abstraction helps in reducing the space and search time requirements of the overall decision-making process. Further, it is important that the abstraction is generated from the data with a small number of disk scans. We propose a novel data structure, the pattern count tree (PC-tree), that can be built by scanning the database only once. The PC-tree is a minimal-size, complete representation of the data, and it can be used to represent dynamic databases with the help of knowledge that is either static or changing. We show that further compactness can be achieved by constructing the PC-tree on segmented patterns. We exploit the flexibility offered by rough sets to realize a rough PC-tree and use it for efficient and effective rough classification. To be consistent with the sizes of the branches of the PC-tree, we use upper and lower approximations of feature sets in a manner different from conventional rough set theory. We conducted experiments using the proposed classification scheme on a large-scale handwritten digit data set, and use the experimental results to establish the efficacy of the proposed approach. (C) 2002 Elsevier Science B.V. All rights reserved.
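As a rough illustration (not the authors' exact structure), a count-annotated prefix tree that is built in a single scan of the database might look like this; shared prefixes share nodes, which is what makes the representation compact:

```python
class PCNode:
    """A node of a pattern count tree: a count plus child links."""
    __slots__ = ("count", "children")

    def __init__(self):
        self.count = 0
        self.children = {}

def build_pc_tree(transactions):
    """Build the tree in one pass: each transaction (an ordered sequence
    of items) is inserted as a path, and every node on that path
    accumulates a count of the patterns passing through it."""
    root = PCNode()
    for txn in transactions:              # the single database scan
        node = root
        for item in txn:
            node = node.children.setdefault(item, PCNode())
            node.count += 1
    return root
```

For example, inserting ("a", "b"), ("a", "c") and ("a", "b") yields a single "a" node with count 3, so the common prefix is stored only once.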
Abstract:
We present insightful results on the kinetics of photodarkening (PD) in GeₓAs₄₅₋ₓSe₅₅ glasses at ambient and liquid-helium temperatures, as the network rigidity is increased by varying x from 0 to 16. We observe a manyfold change in PD and its kinetics with decreasing network flexibility and temperature. Moreover, the temporal evolution of PD shows a dramatic change with increasing x. (C) 2011 Optical Society of America
Abstract:
Conventional hardware implementation techniques for FIR filters require the computation of filter coefficients in software and their storage in memory. This approach is static in the sense that any further fine-tuning of the filter requires computing new coefficients in software. In this paper, we propose an alternative technique for implementing FIR filters in hardware. We store a considerably large number of impulse response coefficients of the ideal filter (having a box-type frequency response) in memory. We then perform the windowing process on these coefficients in hardware, using integer sequences as window functions. The integer sequences are also generated in hardware. This approach offers flexibility in fine-tuning the filter, such as varying the transition bandwidth around a particular cutoff frequency.
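The scheme can be sketched in software terms. The paper's specific integer sequences are not given in the abstract, so a simple triangular integer window stands in below as a hypothetical example; the ideal "box" response corresponds to a sampled sinc:

```python
import math

def ideal_lowpass(num_taps, cutoff):
    """Impulse response of the ideal (box frequency response) lowpass
    filter: a sampled sinc centered at the middle tap. cutoff is the
    normalized cutoff frequency in (0, 0.5)."""
    mid = (num_taps - 1) / 2
    h = []
    for n in range(num_taps):
        t = n - mid
        h.append(2 * cutoff if t == 0 else
                 math.sin(2 * math.pi * cutoff * t) / (math.pi * t))
    return h

def apply_integer_window(h, window):
    """Multiply the stored ideal coefficients by an integer window
    sequence, as the hardware would; scaling can be absorbed later."""
    return [c * w for c, w in zip(h, window)]

# A triangular integer window (1,2,...,5,...,2,1) as a stand-in
# for the paper's hardware-generated integer sequences.
taps = 9
tri = [min(n + 1, taps - n) for n in range(taps)]
hw = apply_integer_window(ideal_lowpass(taps, 0.25), tri)
```

Changing the window sequence (or the stored cutoff) re-tunes the filter without recomputing coefficients in software, which is the flexibility the paper targets.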
Abstract:
Multiple Clock Domain (MCD) processors provide an attractive solution to the increasingly challenging problems of clock distribution and power dissipation. They allow a chip to be partitioned into different clock domains, with each domain's frequency (voltage) independently configured. This flexibility adds new dimensions to the Dynamic Voltage and Frequency Scaling (DVFS) problem, while providing better scope for saving energy and meeting performance demands. In this paper, we propose a compiler-directed approach for MCD-DVFS. We build a formal Petri-net-based program performance model, parameterized by the settings of microarchitectural components and resource configurations, and integrate it with our compiler passes for frequency selection. Our model estimates the performance impact of a frequency setting, unlike the existing best techniques, which rely on weaker indicators of domain performance such as queue occupancies (used by online methods) and slack manifestation for a particular frequency setting (software-based methods). We evaluate our method with subsets of the SPECFP2000, MediaBench and MiBench benchmarks. Our mean energy savings is 60.39% (versus 33.91% for the best software technique) in a memory-constrained system for cache-miss-dominated benchmarks, and we meet the performance demands. Our ED² improves by 22.11% (versus 18.34%) for the other benchmarks. For a CPU with restricted frequency settings, our energy consumption is within 4.69% of the optimal.
Abstract:
Over the past few years, studies of cultured neuronal networks have opened up avenues for understanding the ion channels, receptor molecules, and synaptic plasticity that may form the basis of learning and memory. Hippocampal neurons from rats are dissociated and cultured on a surface containing a grid of 64 electrodes. The signals from these 64 electrodes are acquired using a fast data acquisition system, MED64 (Alpha MED Sciences, Japan), at a sampling rate of 20 K samples per second with a precision of 16 bits per sample. A few minutes of acquired data runs into a few hundred megabytes. The data processing for the neural analysis is highly compute-intensive because the volume of data is huge. The major processing requirements are noise removal, pattern recovery, pattern matching, clustering and so on. In order to interface a neuronal colony to the physical world, these computations need to be performed in real time. A single processor such as a desktop computer may not be adequate to meet these computational requirements. Parallel computing is a method used to satisfy the real-time computational requirements of a neuronal system that interacts with an external world, while increasing the flexibility and scalability of the application. In this work, we developed a parallel neuronal system using a multi-node digital signal processing system. With 8 processors, the system is able to compute and map incoming signals, segmented over a period of 200 ms, into an action in a trained cluster system in real time.
Abstract:
In this paper we explore an implementation of a high-throughput streaming application on REDEFINE-v2, an enhancement of REDEFINE. REDEFINE is a polymorphic ASIC combining the flexibility of a programmable solution with the execution speed of an ASIC. In REDEFINE, Compute Elements are arranged in an 8×8 grid connected via a Network on Chip (NoC) called RECONNECT, to realize the various macrofunctional blocks of an equivalent ASIC. For a 1024-point FFT we carry out an application-architecture design space exploration by examining various characterizations of the Compute Elements in terms of the size of the instruction store. We further study the impact of using application-specific, vectorized FUs. By setting up different partitions of the FFT algorithm for persistent execution on REDEFINE-v2, we derive the benefits of pipelined execution for higher performance. The impact of the REDEFINE-v2 micro-architecture for an arbitrary N-point FFT (N > 4096) is also analyzed. We report the various algorithm-architecture trade-offs in terms of area and execution speed against those of an ASIC implementation. In addition, we compare the performance gain with respect to a GPP.
Abstract:
An elementary combinatorial Tanner graph construction for a family of near-regular low-density parity-check (LDPC) codes achieving high girth is presented. These codes are near-regular in the sense that the degree of a left/right vertex is allowed to differ by at most one from the average. The construction yields, in quadratic time, an asymptotic code family with provable lower bounds on the rate and the girth for a given choice of block length and average degree. The construction gives flexibility in the choice of the code's design parameters, such as rate, girth and average degree. Performance simulations of an iterative decoding algorithm on the AWGN channel, for codes designed using this method, demonstrate that these codes perform better than regular PEG codes and MacKay codes of similar length for all values of signal-to-noise ratio.
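Girth here is the length of the shortest cycle in the Tanner graph (always even for a bipartite graph). A simple, illustrative way to verify the girth of a constructed graph — not the paper's method — is breadth-first search from every vertex:

```python
from collections import deque

def girth(adj):
    """Length of the shortest cycle in an undirected graph given as an
    adjacency dict {vertex: [neighbors]}; float('inf') if acyclic."""
    best = float("inf")
    for src in adj:
        dist = {src: 0}
        parent = {src: None}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    parent[v] = u
                    q.append(v)
                elif parent[u] != v:
                    # Non-tree edge closes a cycle; minimizing over all
                    # source vertices makes this bound exact.
                    best = min(best, dist[u] + dist[v] + 1)
    return best
```

For a Tanner graph, `adj` would contain both check and variable vertices; a girth of at least 6 rules out the 4-cycles that hurt iterative decoding.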
Abstract:
Packet forwarding is a memory-intensive application requiring multiple accesses through a trie structure. The efficiency of a cache for this application critically depends on the placement function to reduce conflict misses. Traditional placement functions use a one-level mapping that naively partitions trie nodes into cache sets. However, as a significant percentage of trie nodes are not useful, these schemes suffer from a non-uniform distribution of useful nodes across sets. This in turn results in increased conflict misses. Newer organizations such as variable-associativity caches achieve flexibility in placement at the expense of increased hit latency, which makes them unsuitable for L1 caches. We propose a novel two-level mapping framework that retains the hit latency of one-level mapping yet incurs fewer conflict misses. This is achieved by introducing a second-level mapping which reorganizes the nodes in the naive initial partitions into refined partitions with a near-uniform distribution of nodes. Further, as this remapping is accomplished by simply adapting the index bits to a given routing table, the hit latency is not affected. We propose three new schemes which result in up to a 16% reduction in the number of misses and a 13% speedup in memory access time. In comparison, an XOR-based placement scheme, known to perform extremely well for general-purpose architectures, obtains up to a 2% speedup in memory access time.
Abstract:
The problem of finding optimal parameterized feedback policies for dynamic bandwidth allocation in communication networks is studied. We consider a queueing model with two queues to which traffic from different competing flows arrives. The queue length at the buffers is observed every T instants of time, on the basis of which a decision on the amount of bandwidth to be allocated to each buffer for the next T instants is made. We consider two different classes of multilevel closed-loop feedback policies for the system and use a two-timescale simultaneous perturbation stochastic approximation (SPSA) algorithm to find optimal policies within each prescribed class. We study the performance of the proposed algorithm in a numerical setting and show performance comparisons of the two optimal multilevel closed-loop policies with optimal open-loop policies. We observe that closed-loop policies of Class B, which tune parameters for both queues and do not have the constraint that the entire bandwidth be used at each instant, exhibit the best results overall, as they offer greater flexibility in parameter tuning. Index Terms — Resource allocation, dynamic bandwidth allocation in communication networks, two-timescale SPSA algorithm, optimal parameterized policies.
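The core of SPSA — which the paper extends to a two-timescale variant — is that one gradient estimate for all policy parameters costs only two evaluations of the objective, regardless of dimension. A minimal single-timescale sketch with fixed gains (the actual algorithm uses decreasing gain sequences on two timescales) might look like:

```python
import random

def spsa_step(theta, loss, a=0.05, c=0.1):
    """One SPSA iteration: perturb every parameter simultaneously with
    a random +/-1 (Bernoulli) direction, estimate the gradient from just
    two loss evaluations, and take a gradient-descent step."""
    delta = [random.choice((-1.0, 1.0)) for _ in theta]
    plus = [t + c * d for t, d in zip(theta, delta)]
    minus = [t - c * d for t, d in zip(theta, delta)]
    diff = loss(plus) - loss(minus)        # only two evaluations total
    return [t - a * diff / (2 * c * d) for t, d in zip(theta, delta)]
```

In the paper's setting, `loss` would be a simulated long-run cost of the queueing system under the parameterized bandwidth policy, which is exactly where the two-evaluation economy of SPSA pays off.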
Abstract:
Dimeric banana lectin and calsepa, tetrameric artocarpin and octameric heltuba are mannose-specific beta-prism I fold lectins of nearly the same tertiary structure. MD simulations on individual subunits and on the oligomers provide insights into the changes brought about in the protomers on oligomerization, including swapping of the N-terminal stretch in one instance. The regions that undergo changes also tend to exhibit dynamic flexibility during the MD simulations. The internal symmetries of the individual oligomers are substantially retained during the calculations. Energy minimization and simulations were also carried out on models of all possible oligomers constructed from the four different protomers. The unique dimerization pattern observed in calsepa could be traced to unique substitutions in a peptide stretch involved in dimerization. The impossibility of a specific mode of oligomerization involving a particular protomer is often expressed in terms of unacceptable steric contacts or dissociation of the oligomer during simulations. The calculations also led to a rationale for the observation of a heltuba tetramer in solution, although the lectin exists as an octamer in the crystal, in addition to providing insights into the relations among evolution, oligomerization and ligand binding.
Abstract:
The crystal structure of Rv0098, a long-chain fatty acyl-CoA thioesterase from Mycobacterium tuberculosis, with dodecanoic acid bound at the active site, provided insights into the mode of substrate binding but did not reveal the structural basis of substrate specificities of varying chain length. Molecular dynamics studies demonstrated that certain residues of the substrate-binding tunnel are flexible and thus modulate the length of the tunnel. The flexibility of the loop at the base of the tunnel was also found to be important in determining the length of the tunnel for accommodating appropriate substrates. A combination of crystallographic and molecular dynamics studies thus explained the structural basis of the accommodation of long-chain substrates by Rv0098 of M. tuberculosis.