140 results for Adaptive mesh refinements
Abstract:
Due to the importance of collective communications in scientific parallel applications, many strategies have been devised for optimizing collective communications for different kinds of parallel environments. There has been increasing interest in developing efficient broadcast algorithms for computational grids. In this paper, we present application-oriented adaptive techniques that take into account resource characteristics as well as the application's usage of broadcasts for deriving efficient broadcast trees. In particular, we consider two broadcast parameters used in the application, namely, the broadcast message sizes and the time interval between broadcasts. The results indicate that our adaptive strategies can provide a 20% average improvement in performance over the popular MPICH-G2 MPI_Bcast implementation under loaded network conditions.
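As a concrete illustration of the idea in the abstract above, the following minimal Python sketch picks a broadcast tree shape from the two application-level parameters it mentions (message size and inter-broadcast interval). The thresholds, the two candidate tree shapes, and the function names are illustrative assumptions, not the authors' algorithm.

```python
# Illustrative sketch (not the paper's algorithm): choose a broadcast tree shape
# from two application-level hints -- message size and time between broadcasts.
# The thresholds and the two candidate trees below are hypothetical.

def binomial_tree(ranks):
    """Send schedule of a binomial broadcast tree rooted at ranks[0]."""
    n = len(ranks)
    edges, step = [], 1
    while step < n:
        step *= 2
    step //= 2
    while step >= 1:
        for src in range(0, n, 2 * step):
            dst = src + step
            if dst < n:
                edges.append((ranks[src], ranks[dst]))
        step //= 2
    return edges

def flat_tree(ranks):
    """Root sends directly to every other rank."""
    return [(ranks[0], r) for r in ranks[1:]]

def choose_broadcast_tree(ranks, msg_bytes, interval_s, latency_s, bandwidth_Bps):
    # Small, frequent broadcasts are latency-bound: a binomial tree costs O(log p)
    # start-ups.  Large, infrequent broadcasts are bandwidth-bound: a flat tree
    # avoids forwarding the payload through intermediate ranks.
    if msg_bytes / bandwidth_Bps > latency_s and interval_s > 1.0:
        return flat_tree(ranks)
    return binomial_tree(ranks)

if __name__ == "__main__":
    print(choose_broadcast_tree(list(range(8)), msg_bytes=64,
                                interval_s=0.01, latency_s=1e-4, bandwidth_Bps=1e8))
```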
Abstract:
We propose, for the first time, two reinforcement learning algorithms with function approximation for average cost adaptive control of traffic lights. One of these algorithms is a version of Q-learning with function approximation, while the other is a policy gradient actor-critic algorithm that incorporates multi-timescale stochastic approximation. We compare the performance of these algorithms on various network settings against a range of fixed timing algorithms, as well as against a Q-learning algorithm with full state representation that we also implement. We observe that, whereas (as expected) the full state representation algorithm shows the best results on a two-junction corridor, it is not implementable on larger road networks. The algorithm PG-AC-TLC that we propose is seen to show the best overall performance.
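For readers unfamiliar with the first of the two methods, here is a minimal, hedged sketch of Q-learning with linear function approximation applied to signal control. The environment interface, the feature vector, and the reward (e.g., negative total queue length) are assumptions for illustration; the paper's PG-AC-TLC actor-critic algorithm is not reproduced here.

```python
# Hedged sketch of Q-learning with linear function approximation for traffic
# signal control (not the authors' exact algorithm).  The env interface,
# features, and reward are illustrative assumptions.
import numpy as np

def q_learning_fa(env, feature_fn, n_actions, n_features,
                  episodes=100, alpha=0.01, gamma=0.95, eps=0.1):
    theta = np.zeros((n_actions, n_features))    # one weight vector per action
    rng = np.random.default_rng(0)
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            phi = feature_fn(s)                  # e.g. normalized queue lengths per lane
            q = theta @ phi
            a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(q))
            s2, r, done = env.step(a)            # r: e.g. negative sum of queue lengths
            target = r + (0.0 if done else gamma * np.max(theta @ feature_fn(s2)))
            theta[a] += alpha * (target - q[a]) * phi   # semi-gradient TD update
            s = s2
    return theta
```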
Abstract:
A class of model reference adaptive control systems which makes use of an augmented error signal was introduced by Monopoli. Convergence problems in this attractive class of systems are investigated in this paper using concepts from hyperstability theory. It is shown that the condition on the linear part of the system has to be stronger than the one given earlier. A boundedness condition on the input to the linear part of the system has been taken into account in the analysis; this condition appears to have been missed in previous applications of hyperstability theory. Sufficient conditions for the convergence of the adaptive gain to the desired value are also given.
Abstract:
Long-running multi-physics coupled parallel applications have gained prominence in recent years. The high computational requirements and long durations of simulations of these applications necessitate the use of multiple systems of a Grid for execution. In this paper, we have built an adaptive middleware framework for executing long-running multi-physics coupled applications across multiple batch systems of a Grid. Our framework, apart from coordinating the executions of the component jobs of an application on different batch systems, also automatically resubmits the jobs multiple times to the batch queues to continue and sustain long-running executions. As the set of active batch systems available for execution changes, our framework performs migration and rescheduling of components using a robust rescheduling decision algorithm. We have used our framework to improve the application throughput of a prominent long-running multi-component climate modeling application, the Community Climate System Model (CCSM). Our real multi-site experiments with CCSM indicate that Grid executions can lead to improved application throughput for climate models.
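A rough sketch of the kind of resubmit-and-reschedule loop such a middleware might run is shown below. The API (submit, is_active, pick_system), the polling interval, and the idempotent-submission assumption are hypothetical; this is not the framework's code.

```python
# Minimal sketch of a resubmit-and-reschedule loop for component jobs spread
# across several batch systems (hypothetical API, not the framework's code).
import time

def sustain_components(components, batch_systems, pick_system, submit, is_active,
                       poll_interval_s=300):
    """components: dict name -> currently assigned batch system (or None).
    pick_system(name, active): rescheduling decision policy (assumed given).
    submit(name, system): assumed idempotent -- re-queues the job only if it is
    not already queued or running on that system."""
    while True:
        active = [b for b in batch_systems if is_active(b)]
        for name, system in components.items():
            if system not in active:               # system dropped out of the active set
                system = pick_system(name, active) # migrate/reschedule the component
                components[name] = system
            submit(name, system)                   # keep the long-running execution alive
        time.sleep(poll_interval_s)
```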
Abstract:
Chronic recording of neural signals is indispensable in designing efficient brain–machine interfaces and in elucidating human neurophysiology. The advent of multichannel micro-electrode arrays has driven the need for electronics to record neural signals from many neurons. The dynamic range of the system can vary over time due to changes in electrode–neuron distance and background noise. We propose a neural amplifier in UMC 130 nm, 1P8M complementary metal–oxide–semiconductor (CMOS) technology. It can be biased adaptively from 200 nA to 2 µA, modulating the input-referred noise from 9.92 µV to 3.9 µV. We also describe a low noise design technique which minimizes the noise contribution of the load circuitry. Optimum sizing of the input transistors minimizes the accentuation of the input-referred noise of the amplifier and obviates the need for a large input capacitance. The amplifier achieves a noise efficiency factor of 2.58. The amplifier passes signals from 5 Hz to 7 kHz, and its bandwidth can be tuned for rejecting local field potentials (LFP) and power line interference. The amplifier achieves a mid-band voltage gain of 37 dB. In vitro experiments are performed to validate the applicability of the neural low noise amplifier in neural recording systems.
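As a quick plausibility check of the reported figure, the standard noise efficiency factor definition, NEF = Vni,rms · sqrt(2·Itot / (π·UT·4kT·BW)), can be evaluated at the stated operating point. The effective noise bandwidth (taken here as the 7 kHz upper band edge) and the temperature are assumptions, so the result is only indicative.

```python
# Back-of-the-envelope check of the reported noise efficiency factor using the
# standard definition NEF = Vni_rms * sqrt(2*Itot / (pi * U_T * 4kT * BW)).
# The effective noise bandwidth (assumed ~7 kHz) and T = 300 K are assumptions.
import math

k = 1.380649e-23                       # Boltzmann constant, J/K
T = 300.0                              # assumed temperature, K
U_T = k * T / 1.602176634e-19          # thermal voltage, ~25.9 mV

def nef(vni_rms, i_tot, bw):
    return vni_rms * math.sqrt(2.0 * i_tot / (math.pi * U_T * 4 * k * T * bw))

# Reported operating point: 2 uA bias, 3.9 uVrms input-referred noise, ~7 kHz band.
print(round(nef(3.9e-6, 2e-6, 7e3), 2))   # ~2.5, close to the reported NEF of 2.58
```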
Abstract:
High frequency PWM inverters produce an output voltage spectrum at the fundamental reference frequency and around the switching frequency. Thus, ideally, PWM inverters do not introduce any significant lower order harmonics. However, in real systems, lower order harmonics are present due to the dead-time effect, device drops and other non-idealities. In order to attenuate these lower order harmonics and hence improve the quality of the output current, this paper presents an adaptive harmonic elimination technique. This technique uses an adaptive filter to estimate a particular harmonic that is to be attenuated and generates a voltage reference which is added to the voltage reference produced by the current control loop of the inverter. This has the effect of cancelling the voltage that was producing the particular harmonic. The effectiveness and the limitations of the technique are verified experimentally in a single phase PWM inverter in stand-alone as well as grid-interactive modes of operation.
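The abstract describes an adaptive filter that estimates one selected harmonic and synthesizes a counteracting voltage reference. A minimal LMS-style sketch of that idea follows; the gains, signs, and injection point are illustrative assumptions rather than the paper's implementation.

```python
# Hedged sketch of an LMS-style adaptive estimator for one selected harmonic,
# in the spirit of the adaptive harmonic elimination technique described above.
# Gains, signs, and the exact injection point are illustrative assumptions.
import numpy as np

def harmonic_cancellation_ref(i_meas, theta, h, mu=0.01, k_v=1.0):
    """Estimate the h-th harmonic of the measured current i_meas (sampled at the
    fundamental angle theta, radians) and return a voltage reference that, when
    added to the current controller's output, opposes that harmonic."""
    w_c, w_s = 0.0, 0.0                  # in-phase / quadrature amplitude estimates
    v_ref = np.zeros_like(np.asarray(i_meas, dtype=float))
    for n in range(len(i_meas)):
        c, s = np.cos(h * theta[n]), np.sin(h * theta[n])
        est = w_c * c + w_s * s          # current estimate of the harmonic component
        err = i_meas[n] - est
        w_c += mu * err * c              # LMS update of the two weights
        w_s += mu * err * s
        v_ref[n] = -k_v * est            # counter-voltage reference for this harmonic
    return v_ref
```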
Abstract:
We address the problem of estimating the instantaneous frequency (IF) of a real-valued constant amplitude time-varying sinusoid. Estimation of polynomial IF is formulated using the zero-crossings of the signal. We propose an algorithm to estimate nonpolynomial IF by local approximation using a low-order polynomial over a short segment of the signal. This involves the choice of window length to minimize the mean square error (MSE). The optimal window length found by directly minimizing the MSE is a function of the higher-order derivatives of the IF, which are not available a priori. However, an optimum solution is formulated using an adaptive window technique based on the concept of intersection of confidence intervals. The adaptive algorithm enables minimum MSE-IF (MMSE-IF) estimation without requiring a priori information about the IF. Simulation results show that the adaptive window zero-crossing-based IF estimation method is superior to fixed window methods and is also better than adaptive spectrogram and adaptive Wigner-Ville distribution (WVD)-based IF estimators for different signal-to-noise ratios (SNRs).
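For reference, the intersection-of-confidence-intervals (ICI) rule used for the adaptive window choice can be sketched as follows; the estimator itself, the variance model, and the threshold κ are placeholders, not the paper's zero-crossing formulation.

```python
# Illustrative sketch of the intersection-of-confidence-intervals (ICI) rule for
# adaptive window selection.  The estimates/sigmas would come from an IF
# estimator run over increasing window lengths; kappa = 2.0 is a placeholder.
def ici_select(estimates, sigmas, kappa=2.0):
    """estimates[k], sigmas[k]: IF estimate and its std. dev. for the k-th
    (increasing) window length.  Returns the index of the largest window whose
    confidence interval still intersects all the previous ones."""
    lo, hi = -float("inf"), float("inf")
    best = 0
    for k, (e, s) in enumerate(zip(estimates, sigmas)):
        lo = max(lo, e - kappa * s)
        hi = min(hi, e + kappa * s)
        if lo > hi:          # intervals no longer intersect: bias starts to dominate
            break
        best = k
    return best
```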
Abstract:
The diffusion equation-based modeling of near infrared light propagation in tissue is achieved by using a finite-element mesh for imaging real tissue types, such as breast and brain. The finite-element mesh size (number of nodes) dictates the parameter space in optical tomographic imaging. Most commonly used finite-element meshing algorithms do not provide the flexibility of distinct nodal spacing in different regions of the imaging domain to take the sensitivity of the problem into consideration. This study presents a computationally efficient mesh simplification method that can be used as a preprocessing step to iterative image reconstruction, where the finite-element mesh is simplified by using an edge collapsing algorithm to reduce the parameter space in regions where the sensitivity of the problem is relatively low. It is shown, using simulations and experimental phantom data for simple meshes/domains, that a significant reduction in parameter space can be achieved without compromising the reconstructed image quality. The maximum errors observed using the simplified meshes were less than 0.27% in the forward problem and 5% in the inverse problem.
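A conceptual sketch of sensitivity-driven edge collapsing is shown below. The data structures, the collapse criterion, and the thresholds are simplified assumptions; the paper's actual simplification algorithm and error control are not reproduced.

```python
# Conceptual sketch of sensitivity-driven edge collapsing on a triangular mesh
# (data structures and collapse criterion are simplified assumptions).
import numpy as np

def collapse_low_sensitivity_edges(nodes, elements, sensitivity, threshold, min_len):
    """Merge edge endpoints where both nodes have sensitivity below `threshold`
    and the edge is shorter than `min_len`.  nodes: (N, dim) array; elements:
    iterable of triangles (i, j, k).  Returns a node remapping dict."""
    remap = {i: i for i in range(len(nodes))}
    for tri in elements:
        for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
            a, b = remap[a], remap[b]
            if a == b:
                continue
            if (sensitivity[a] < threshold and sensitivity[b] < threshold
                    and np.linalg.norm(nodes[a] - nodes[b]) < min_len):
                remap[b] = a                      # collapse b onto a
    # resolve chains of collapses (b -> a -> ...)
    for i in remap:
        while remap[remap[i]] != remap[i]:
            remap[i] = remap[remap[i]]
    return remap
```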
Abstract:
Exascale systems of the future are predicted to have a mean time between failures (MTBF) of less than one hour. Malleable applications, where the number of processors on which the applications execute can be changed during execution, can make use of their malleability to better tolerate high failure rates. We present AdFT, an adaptive fault tolerance framework for long running malleable applications to maximize application performance in the presence of failures. The AdFT framework includes cost models for evaluating the benefits of various fault tolerance actions including checkpointing, live-migration and rescheduling, and runtime decisions for dynamically selecting the fault tolerance actions at different points of application execution to maximize performance. Simulations with real and synthetic failure traces show that our approach outperforms existing fault tolerance mechanisms for malleable applications, yielding up to 23% improvement in application performance, and is effective even for petascale systems and beyond.
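To illustrate what a cost-model-driven runtime decision of this kind can look like, here is a small sketch that selects the cheapest fault tolerance action under a crude failure/rework model. The cost expressions and the example numbers are placeholders, not AdFT's models.

```python
# Simplified sketch of a cost-model-driven fault tolerance decision: pick the
# action with the lowest expected overhead.  Model (a placeholder, not AdFT's):
# expected failures ~ remaining_work / MTBF, each costing `loss` seconds of rework.
def choose_ft_action(remaining_work_s, actions):
    """actions: dict name -> (overhead_s, mtbf_s, loss_per_failure_s)."""
    costs = {name: ov + (remaining_work_s / mtbf) * loss
             for name, (ov, mtbf, loss) in actions.items()}
    return min(costs, key=costs.get), costs

# Hypothetical numbers: checkpointing every ~30 min (a failure then loses ~15 min
# on average); migration/rescheduling move to systems with larger MTBFs but a
# failure there loses half of the remaining (uncheckpointed) work.
action, costs = choose_ft_action(
    remaining_work_s=6 * 3600,
    actions={
        "checkpointing":  (120,  4 * 3600, 15 * 60),
        "live-migration": (900, 24 * 3600, 0.5 * 6 * 3600),
        "rescheduling":   (1800, 12 * 3600, 0.5 * 6 * 3600),
    })
print(action, {k: round(v) for k, v in costs.items()})
```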
Abstract:
In this study, the authors have investigated the likely future changes in the summer monsoon over the Western Ghats (WG) orographic region of India in response to global warming, using time-slice simulations of an ultra high-resolution global climate model and climate datasets of the recent past. The model, with approximately 20-km mesh horizontal resolution, resolves orographic features on finer spatial scales, leading to a quasi-realistic simulation of the spatial distribution of the present-day summer monsoon rainfall over India and of trends in monsoon rainfall over the west coast of India. As a result, a higher degree of confidence appears to emerge in many aspects of the 20-km model simulation, and therefore we can have better confidence in the validity of the model prediction of future changes in the climate over the WG mountains. Our analysis suggests that the summer mean rainfall and the vertical velocities over the orographic regions of the Western Ghats have significantly weakened during the recent past, and the model simulates these features realistically in the present-day climate simulation. Under the future climate scenario, by the end of the twenty-first century, the model projects reduced orographic precipitation over the narrow Western Ghats south of 16°N, which is found to be associated with a drastic reduction in the southwesterly winds and moisture transport into the region, a weakening of the summer mean meridional circulation and diminished vertical velocities. We show that this is due to larger upper tropospheric warming relative to the surface and lower levels, which decreases the lapse rate, causing an increase in vertical moist static stability (which in turn inhibits vertical ascent) in response to global warming. Increased stability that weakens vertical velocities leads to a reduction in large-scale precipitation, which is found to be the major contributor to summer mean rainfall over the WG orographic region. This is further corroborated by a significant decrease in the frequency of moderate-to-heavy rainfall days over the WG, which is a typical manifestation of the decrease in large-scale precipitation over this region. Thus, the drastic reduction of vertical ascent and weakening of circulation due to the 'upper tropospheric warming effect' predominates over the 'moisture build-up effect' in reducing the rainfall over this narrow orographic region. This analysis illustrates that monsoon rainfall over mountainous regions is strongly controlled by processes and parameterized physics which need to be resolved with adequately high resolution for accurate assessment of local and regional-scale climate change.
Abstract:
A low complexity, essentially-ML decoding technique for the Golden code and the three antenna Perfect code was introduced by Sirianunpiboon, Howard and Calderbank. Though no theoretical analysis of the decoder was given, simulations showed that this decoding technique has almost maximum-likelihood (ML) performance. Inspired by this technique, in this paper we introduce two new low complexity decoders for Space-Time Block Codes (STBCs): the Adaptive Conditional Zero-Forcing (ACZF) decoder and the ACZF decoder with successive interference cancellation (ACZF-SIC), which include as a special case the decoding technique of Sirianunpiboon et al. We show that both ACZF and ACZF-SIC decoders are capable of achieving full diversity, and we give a set of sufficient conditions for an STBC to give full diversity with these decoders. We then show that the Golden code, the three and four antenna Perfect codes, the three antenna Threaded Algebraic Space-Time code and the four antenna rate 2 code of Srinath and Rajan are all full-diversity ACZF/ACZF-SIC decodable with complexity strictly less than that of their ML decoders. Simulations show that the proposed decoding method performs identically to ML decoding for all these five codes. These STBCs, together with the proposed decoding algorithm, have the least decoding complexity and best error performance among all known codes for their respective numbers of transmit antennas. We further provide a lower bound on the complexity of full-diversity ACZF/ACZF-SIC decoding. All five codes listed above achieve this lower bound and hence are optimal in terms of minimizing the ACZF/ACZF-SIC decoding complexity. Both ACZF and ACZF-SIC decoders are amenable to sphere decoding implementation.
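A rough paraphrase of conditional zero-forcing decoding (the idea underlying ACZF, without the adaptive choice of which symbol group to condition on and without SIC) is sketched below; the partitioning and constellation handling are illustrative assumptions, not the paper's decoder.

```python
# Rough sketch of conditional zero-forcing decoding: enumerate one group of
# symbols, zero-force the remaining group conditioned on each choice, and keep
# the candidate with the smallest residual.  The adaptive choice of which group
# to condition on (the "A" in ACZF) and the SIC stage are omitted here.
import itertools
import numpy as np

def conditional_zf(y, H, idx_a, idx_b, constellation):
    """y = H x + n, with x[idx_a] enumerated and x[idx_b] zero-forced.
    constellation: 1-D numpy array of constellation points."""
    Ha, Hb = H[:, idx_a], H[:, idx_b]
    Hb_pinv = np.linalg.pinv(Hb)
    best, best_metric = None, np.inf
    for xa in itertools.product(constellation, repeat=len(idx_a)):
        xa = np.array(xa)
        xb_soft = Hb_pinv @ (y - Ha @ xa)
        # hard-slice each zero-forced symbol to the nearest constellation point
        xb = np.array([constellation[np.argmin(np.abs(constellation - s))] for s in xb_soft])
        metric = np.linalg.norm(y - Ha @ xa - Hb @ xb) ** 2
        if metric < best_metric:
            best_metric, best = metric, (xa, xb)
    return best
```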