956 results for Computationally efficient
Abstract:
A computationally efficient Li-ion battery model is proposed in this paper. The battery model combines features of both analytical and electrical-circuit modeling techniques. The model is simple, as it does not rely on a look-up table, and fast, as it does not evaluate a polynomial function during computation. The internal voltage of the battery is modeled as a linear function of the battery's state of charge. The internal resistance is determined experimentally, and its optimal value is used in the model. Experimental and simulated data are compared to validate the accuracy of the model.
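As a rough illustration of the kind of model this abstract describes, the sketch below implements a simple Thevenin-style cell: an open-circuit voltage that is linear in state of charge, a fixed internal resistance, and coulomb counting for the SOC update. All parameter values and names are illustrative assumptions, not figures taken from the paper.

```python
import numpy as np

# Illustrative battery model: linear open-circuit voltage in SOC plus a
# fixed ohmic drop across an experimentally identified internal resistance.
V_EMPTY = 3.0      # hypothetical internal voltage at SOC = 0 (V)
V_FULL = 4.2       # hypothetical internal voltage at SOC = 1 (V)
R_INTERNAL = 0.05  # hypothetical internal resistance (ohm)
CAPACITY_AH = 2.5  # hypothetical cell capacity (Ah)

def simulate_discharge(current_a, dt_s, soc0=1.0):
    """Coulomb-counting SOC update plus linear OCV and ohmic drop."""
    soc = soc0
    terminal_voltages = []
    for i in current_a:
        soc -= i * dt_s / (CAPACITY_AH * 3600.0)          # coulomb counting
        soc = max(soc, 0.0)
        v_internal = V_EMPTY + (V_FULL - V_EMPTY) * soc   # linear in SOC
        terminal_voltages.append(v_internal - i * R_INTERNAL)
    return np.array(terminal_voltages)

# Example: constant 1 A discharge sampled every second for one hour.
v = simulate_discharge(current_a=np.ones(3600), dt_s=1.0)
print(v[0], v[-1])
```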
Abstract:
This paper presents a computationally efficient model for a dc-dc boost converter, which is valid for continuous and discontinuous conduction modes; the model also incorporates significant non-idealities of the converter. Simulation of the dc-dc boost converter using an average model provides practically all the details that are available from simulation using the switching (instantaneous) model, except for the magnitude of ripple in the currents and voltages. A harmonic model of the converter can be used to evaluate the ripple quantities. This paper proposes a combined (average-cum-harmonic) model of the boost converter. The accuracy of the combined model is validated through extensive simulations and experiments. A quantitative comparison of the computation times of the average, combined and switching models is presented. The combined model is shown to be more computationally efficient than the switching model for simulation of transient and steady-state responses of the converter under various conditions.
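For context, the following sketch simulates an averaged (ripple-free) model of a boost converter in continuous conduction mode, with an inductor series resistance included as one representative non-ideality. It is a generic textbook average model under assumed component values, not the combined model proposed in the paper.

```python
# Illustrative average model of a boost converter in continuous conduction
# mode, integrated with explicit Euler. Component values are assumptions.
L, C, R = 100e-6, 470e-6, 20.0   # inductance (H), capacitance (F), load (ohm)
R_L = 0.1                        # inductor series resistance (ohm), a non-ideality
VIN, DUTY = 12.0, 0.5            # input voltage (V) and duty ratio
DT, T_END = 1e-6, 20e-3          # integration step and simulated time (s)

i_l, v_c = 0.0, 0.0
for _ in range(int(T_END / DT)):
    # State equations averaged over a switching cycle (ripple-free).
    di = (VIN - R_L * i_l - (1.0 - DUTY) * v_c) / L
    dv = ((1.0 - DUTY) * i_l - v_c / R) / C
    i_l += di * DT
    v_c += dv * DT

# The series resistance pulls the output slightly below the lossless value.
print(f"steady-state output ≈ {v_c:.2f} V (lossless ideal: {VIN / (1 - DUTY):.1f} V)")
```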
Abstract:
This paper discusses dynamic modeling of non-isolated DC-DC converters (buck, boost and buck-boost) under continuous and discontinuous modes of operation. Three types of models are presented for each converter, namely the switching model, the average model and the harmonic model. These models include significant non-idealities of the converters. The switching model gives the instantaneous currents and voltages of the converter. The average model provides the ripple-free currents and voltages, averaged over a switching cycle. The harmonic model gives the peak-to-peak values of ripple in the currents and voltages. The validity of all these models is established by comparing simulation results with experimental results from laboratory prototypes, under different steady-state and transient conditions. Simulation based on a combination of the average and harmonic models is shown to provide all relevant information obtained from the switching model, while consuming less computation time than the latter.
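As a complement to the average-model sketch above, the snippet below evaluates the standard textbook peak-to-peak ripple estimates for an ideal boost converter in continuous conduction mode, which is the kind of quantity a harmonic (ripple) model supplies. The component values are assumptions, and these formulas are not the harmonic model developed in the paper.

```python
# Textbook peak-to-peak ripple estimates for an ideal boost converter in
# continuous conduction mode (illustrative values, not the paper's model).
L, C, R = 100e-6, 470e-6, 20.0        # H, F, ohm
VIN, DUTY, F_SW = 12.0, 0.5, 50e3     # input voltage, duty ratio, switching frequency

T_SW = 1.0 / F_SW
V_OUT = VIN / (1.0 - DUTY)            # ideal CCM conversion ratio
I_OUT = V_OUT / R

delta_i_l = VIN * DUTY * T_SW / L      # inductor current ripple (A, peak-to-peak)
delta_v_out = I_OUT * DUTY * T_SW / C  # output voltage ripple (V, peak-to-peak)

print(f"ΔiL ≈ {delta_i_l:.3f} A, Δvout ≈ {delta_v_out * 1e3:.1f} mV")
```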
A computationally efficient software application for calculating vibration from underground railways
Abstract:
The PiP model is a software application with a user-friendly interface for calculating vibration from underground railways. This paper reports on the software, with a focus on its latest version and on plans for future developments. The software calculates the Power Spectral Density of vibration due to a moving train on floating-slab track, with track irregularity described by typical spectra for tracks in good, average and bad condition. The latest version accounts for a tunnel embedded in a half-space by employing a toolbox developed at K.U. Leuven which calculates Green's functions for a multi-layered half-space. © 2009 IOP Publishing Ltd.
Abstract:
Computationally efficient sequential learning algorithms are developed for direct-link resource-allocating networks (DRANs). These are achieved by decomposing existing recursive training algorithms on a layer-by-layer and neuron-by-neuron basis. This allows network weights to be updated in an efficient parallel manner and facilitates the implementation of minimal-update extensions that yield a significant reduction in computational load per iteration compared to existing sequential learning methods employed in the resource-allocating network (RAN) and minimal RAN (MRAN) approaches. The new algorithms, which also incorporate a pruning strategy to control network growth, are evaluated on three different system identification benchmark problems and shown to outperform existing methods both in terms of training error convergence and computational efficiency. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
This paper describes an MPEG (Moving Picture Experts Group) audio Layer II LFE (lower frequency extension) bit-stream processor targeting DAB (digital audio broadcasting) receivers, which decodes the frames in a computationally efficient manner and supplies the reconstructed sub-band samples to a synthesis sub-band filter. The focus is on the frequency sample reconstruction part, which handles the re-quantization and re-scaling of the samples once the necessary information has been extracted from the frame. A comparison with a direct implementation of the frequency sample reconstruction block is carried out to demonstrate the increased computational efficiency.
Abstract:
We propose a novel, simple, efficient and distribution-free re-sampling technique for developing prediction intervals for returns and volatilities following ARCH/GARCH models. In particular, our key idea is to employ a Box–Jenkins linear representation of an ARCH/GARCH equation and then to adapt a sieve bootstrap procedure to the nonlinear GARCH framework. Our simulation studies indicate that the new re-sampling method provides sharp and well-calibrated prediction intervals for both returns and volatilities while reducing computational costs by up to 100 times compared to other available re-sampling techniques for ARCH/GARCH models. The proposed procedure is illustrated by an application to Yen/U.S. dollar daily exchange rate data.
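To make the general idea concrete, the sketch below fits a long autoregression to squared returns, resamples the centred residuals, and reads prediction intervals off the bootstrap paths. The order p, the horizon, the use of squared returns as a volatility proxy, and all function names are assumptions chosen for illustration; this is not the authors' exact sieve bootstrap procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar_sieve_intervals(returns, p=5, horizon=1, n_boot=500, level=0.9):
    """Bootstrap prediction intervals from an AR sieve on squared returns."""
    y = returns ** 2                           # working series: squared returns
    # Least-squares fit of an AR(p) with intercept.
    X = np.column_stack([y[p - k - 1: len(y) - k - 1] for k in range(p)])
    X = np.column_stack([np.ones(len(X)), X])
    target = y[p:]
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ coef
    resid = resid - resid.mean()               # centre residuals before resampling

    sims = np.empty((n_boot, horizon))
    for b in range(n_boot):
        hist = list(y[-p:])
        for h in range(horizon):
            eps = rng.choice(resid)
            nxt = coef[0] + np.dot(coef[1:], hist[::-1][:p]) + eps
            nxt = max(nxt, 1e-12)              # squared returns cannot be negative
            hist.append(nxt)
            sims[b, h] = nxt
    lo, hi = np.quantile(np.sqrt(sims), [(1 - level) / 2, (1 + level) / 2], axis=0)
    return lo, hi                              # interval for the volatility proxy

# Example with simulated GARCH(1,1)-like returns.
n = 2000
sigma2, r = 0.1, np.empty(n)
for t in range(n):
    z = rng.standard_normal()
    r[t] = np.sqrt(sigma2) * z
    sigma2 = 0.05 + 0.1 * r[t] ** 2 + 0.85 * sigma2
print(ar_sieve_intervals(r))
```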
Abstract:
A multiple-factor parametrization is described to permit the efficient calculation of the collision efficiency (E) between electrically charged aerosol particles and neutral cloud droplets in numerical models of cloud and climate. The four-parameter representation summarizes the results obtained from a detailed microphysical model of E, which accounts for the different forces acting on the aerosol in the path of falling cloud droplets. The parametrization's range of validity covers aerosol particle radii of 0.4 to 10 μm, aerosol particle densities of 1 to 2.0 g cm⁻³, aerosol particle charges from neutral to 100 elementary charges, and drop radii from 18.55 to 142 μm. The parametrization yields values of E well within an order of magnitude of the detailed model's values over a dataset of 3978 E values. Of these values, 95% have modelled-to-parametrized ratios between 0.5 and 1.5 for aerosol particle sizes between 0.4 and 2.0 μm, and about 96% in the second size range. This parametrization speeds up the calculation of E by a factor of ~10³ compared with the original microphysical model, permitting the inclusion of electric charge effects in numerical cloud and climate models.
Abstract:
Top Down Induction of Decision Trees (TDIDT) is the most commonly used method of constructing a model, in the form of classification rules, from a dataset in order to classify previously unseen data. Alternative algorithms have been developed, such as the Prism algorithm. Prism constructs modular rules that are qualitatively better than the rules induced by TDIDT. However, with the increasing size of databases, many existing rule learning algorithms have proved to be computationally expensive on large datasets. To tackle the problem of scalability, parallel classification rule induction algorithms have been introduced. Because TDIDT is the most popular classifier, most parallel approaches to inducing classification rules are based on it, even though strongly competitive alternative algorithms exist. In this paper we describe work on a distributed classifier that induces classification rules in a parallel manner based on Prism.
Abstract:
Induction of classification rules is one of the most important technologies in data mining. Most of the work in this field has concentrated on the Top Down Induction of Decision Trees (TDIDT) approach. However, alternative approaches have been developed such as the Prism algorithm for inducing modular rules. Prism often produces qualitatively better rules than TDIDT but suffers from higher computational requirements. We investigate approaches that have been developed to minimize the computational requirements of TDIDT, in order to find analogous approaches that could reduce the computational requirements of Prism.
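For reference, the following is a minimal serial sketch of Prism-style modular rule induction for a single target class with categorical attributes, in its textbook form. The data layout and helper names are illustrative assumptions, and none of the optimizations or parallel/distributed variants discussed in these papers are included.

```python
# Minimal sketch of Prism-style modular rule induction for one target class.
def induce_rules_for_class(instances, target_class):
    """instances: list of (attribute_dict, class_label) pairs."""
    rules = []
    remaining = list(instances)
    while any(label == target_class for _, label in remaining):
        covered = list(remaining)
        rule = {}                      # conjunction of attribute=value terms
        # Specialise the rule until it covers only the target class.
        while any(label != target_class for _, label in covered):
            best_term, best_prob = None, -1.0
            for attrs, _ in covered:
                for attr, value in attrs.items():
                    if attr in rule:
                        continue
                    subset = [lbl for a, lbl in covered if a.get(attr) == value]
                    prob = sum(lbl == target_class for lbl in subset) / len(subset)
                    if prob > best_prob:
                        best_term, best_prob = (attr, value), prob
            if best_term is None:      # no attribute left to specialise on
                break
            rule[best_term[0]] = best_term[1]
            covered = [(a, lbl) for a, lbl in covered
                       if a.get(best_term[0]) == best_term[1]]
        rules.append(dict(rule))
        # Remove the instances this rule covers and look for the next rule.
        remaining = [(a, lbl) for a, lbl in remaining
                     if not all(a.get(k) == v for k, v in rule.items())]
    return rules

# Toy example (hypothetical weather-style data).
data = [({"outlook": "sunny", "windy": "no"},  "play"),
        ({"outlook": "sunny", "windy": "yes"}, "no_play"),
        ({"outlook": "rain",  "windy": "no"},  "play"),
        ({"outlook": "rain",  "windy": "yes"}, "no_play")]
print(induce_rules_for_class(data, "play"))
```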