385 results for modeling algorithms
Abstract:
An important question that has to be answered in evaluating the suitability of a microcomputer for a control application is the time it would take to execute the specified control algorithm. In this paper, we present a method of obtaining closed-form formulas to estimate this time. These formulas are applicable to control algorithms in which arithmetic operations and matrix manipulations dominate. The method does not require writing detailed programs for implementing the control algorithm. Using this method, the execution times of a variety of control algorithms on a range of 16-bit mini- and recently announced microcomputers are calculated. The formulas have been verified independently by an analysis program, which computes the execution time bounds of control algorithms coded in Pascal when they are run on a specified micro- or minicomputer.
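The abstract does not give the closed-form timing formulas themselves, but the idea of counting dominant arithmetic operations and weighting them by per-operation execution times can be sketched as follows. All per-operation timings and the state-update example are hypothetical placeholders, not figures from the paper.

```python
# Hypothetical per-operation times (microseconds) for some 16-bit machine.
# These are illustrative values, not measurements from the paper.
T_ADD, T_MUL = 4.0, 12.0

def matmul_time(n, m, p, t_add=T_ADD, t_mul=T_MUL):
    """Estimated time for an (n x m) @ (m x p) matrix multiply:
    n*p dot products, each needing m multiplies and m-1 adds."""
    return n * p * (m * t_mul + (m - 1) * t_add)

def state_update_time(n_states, n_inputs):
    """Estimated time for a state update x' = A x + B u:
    two matrix-vector products plus one vector addition."""
    return (matmul_time(n_states, n_states, 1)
            + matmul_time(n_states, n_inputs, 1)
            + n_states * T_ADD)
```

For a 2-state, 1-input controller this yields 88 µs per update under the assumed timings; the point of such formulas is that they need only operation counts, not a full implementation.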
Abstract:
Hydrologic impacts of climate change are usually assessed by downscaling the General Circulation Model (GCM) output of large-scale climate variables to local-scale hydrologic variables. Such an assessment is characterized by uncertainty resulting from the ensembles of projections generated with multiple GCMs, which is known as intermodel or GCM uncertainty. Ensemble averaging with the assignment of weights to GCMs based on model evaluation is one of the methods to address such uncertainty and is used in the present study for regional-scale impact assessment. GCM outputs of large-scale climate variables are downscaled to subdivisional-scale monsoon rainfall. Weights are assigned to the GCMs on the basis of model performance and model convergence, which are evaluated with the Cumulative Distribution Functions (CDFs) generated from the downscaled GCM output (for both 20th Century [20C3M] and future scenarios) and observed data. The ensemble averaging approach, with the assignment of weights to GCMs, is characterized by the uncertainty caused by partial ignorance, which stems from the nonavailability of the outputs of some of the GCMs for a few scenarios (in the Intergovernmental Panel on Climate Change [IPCC] data distribution center for Assessment Report 4 [AR4]). This uncertainty is modeled with imprecise probability, i.e., the probability being represented as an interval gray number. Furthermore, the CDF generated with one GCM is entirely different from that generated with another, and therefore the use of multiple GCMs results in a band of CDFs. Representing this band of CDFs with a single-valued weighted mean CDF may be misleading. Such a band of CDFs can only be represented with an envelope that contains all the CDFs generated with a number of GCMs. The imprecise CDF represents such an envelope, which not only contains the CDFs generated with all the available GCMs but also, to an extent, accounts for the uncertainty resulting from the missing GCM output.
This concept of imprecise probability is also validated in the present study. The imprecise CDFs of monsoon rainfall are derived for three 30-year time slices (the 2020s, 2050s, and 2080s) under the A1B, A2, and B1 scenarios. The model is demonstrated with the prediction of monsoon rainfall in the Orissa meteorological subdivision, which shows a possible decreasing trend in the future.
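The two constructions the abstract contrasts, a single weighted-mean CDF versus the envelope of the individual GCM CDFs, can be sketched numerically. The rainfall samples, weights, and grid below are invented for illustration only.

```python
import numpy as np

def ecdf(sample, grid):
    """Empirical CDF of `sample` evaluated at each point of `grid`."""
    s = np.sort(sample)
    return np.searchsorted(s, grid, side="right") / len(s)

rng = np.random.default_rng(0)
# Hypothetical downscaled monsoon-rainfall samples (mm) from three GCMs.
gcm_samples = [rng.normal(mu, 50.0, 200) for mu in (900.0, 950.0, 1000.0)]
weights = np.array([0.5, 0.3, 0.2])   # assumed weights from model evaluation
grid = np.linspace(700.0, 1200.0, 101)

cdfs = np.array([ecdf(s, grid) for s in gcm_samples])
mean_cdf = weights @ cdfs                           # single weighted-mean CDF
lower, upper = cdfs.min(axis=0), cdfs.max(axis=0)   # envelope of the CDF band
```

The weighted mean always lies inside the `[lower, upper]` envelope, but the envelope additionally conveys the spread across GCMs, which is the information an imprecise CDF retains.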
Abstract:
Solidification processes are complex in nature, involving multiple phases and several length scales. The properties of solidified products are dictated by the microstructure, the macrostructure, and various defects present in the casting. These, in turn, are governed by the multiphase transport phenomena occurring at different length scales. In order to control and improve the quality of cast products, it is important to have a thorough understanding of the various physical and physicochemical phenomena occurring at various length scales, preferably through predictive models and controlled experiments. In this context, the modeling of transport phenomena during alloy solidification has evolved over the last few decades due to the complex multiscale nature of the problem. Despite this, a model accounting for all the important length scales directly is computationally prohibitive. Thus, in the past, single-phase continuum models have often been employed with respect to a single length scale to model solidification processing. However, continuous development in understanding the physics of solidification at various length scales on the one hand, and the phenomenal growth of computational power on the other, have allowed researchers to use increasingly complex multiphase/multiscale models in recent times. These models have allowed greater understanding of the coupled micro/macro nature of the process and have made it possible to predict solute segregation and microstructure evolution at different length scales. In this paper, a brief overview of the current status of modeling of convection and macrosegregation in alloy solidification processing is presented.
Abstract:
In [8], we recently presented two computationally efficient algorithms, named B-RED and P-RED, for random early detection. In this letter, we present a mathematical proof that these algorithms converge to local minima under general conditions.
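The abstract does not describe the B-RED and P-RED update rules themselves, so as background this sketches the classic random early detection (RED) computation they build on: a piecewise-linear drop probability driven by an exponentially weighted average of the queue length. Threshold and weight values are the usual illustrative defaults, not parameters from the paper.

```python
def red_drop_prob(avg_q, min_th=5.0, max_th=15.0, max_p=0.1):
    """Classic RED drop probability: zero below min_th, linear in
    [min_th, max_th), and 1 once the average queue reaches max_th."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)

def ewma(avg_q, q, w=0.002):
    """Exponentially weighted moving average of the instantaneous queue
    length q, as used by RED to smooth out bursts."""
    return (1.0 - w) * avg_q + w * q
```

The tuning of `min_th`, `max_th`, and `max_p` is precisely the kind of parameter choice that motivates proving convergence of adaptive variants.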
Abstract:
Spatial variations in the concentration of a reactive solute in solution are often encountered in a catalyst particle, and this leads to variation in the freezing point of the solution. Depending on the operating temperature, this can result in freezing of the solvent on a portion of the catalyst, rendering that part of the active area ineffective. Freezing can occur by formation of a sharp front or a mush that separates the solid and fluid phases. In this paper, we model the extent of reduction in the active area due to freezing. Assuming that the freezing point decreases linearly with solute concentration, conditions for freezing to occur have been derived. At steady state, the ineffective fraction of the catalyst pellet is found to be the same irrespective of the mode of freezing. Progress of freezing is determined by both the heat of reaction and the latent heat of fusion. Unlike in the freezing of alloys, where the latter plays a dominant role, the exothermicity of the reaction has a significant effect on freezing in the presence of chemical reactions. A dimensionless group analogous to the Stefan number can be defined to capture the combined effect of both of these.
Abstract:
The network scenario is that of an infrastructure IEEE 802.11 WLAN with a single AP with which several stations (STAs) are associated. The AP has a finite-size buffer for storing packets. In this scenario, we consider TCP-controlled upload and download file transfers between the STAs and a server on the wireline LAN (e.g., 100 Mbps Ethernet) to which the AP is connected. In such a situation, it is well known that, because of packet losses due to the finite buffers at the AP, upload file transfers obtain larger throughputs than download transfers. We provide an analytical model for estimating the upload and download throughputs as a function of the buffer size at the AP. We provide models for the undelayed and delayed ACK cases, for a TCP that performs loss recovery only by timeout, and also for TCP Reno. The models are validated by comparison with NS2 simulations.
Abstract:
Ferrous iron bio-oxidation by Acidithiobacillus ferrooxidans immobilized on polyurethane foam was investigated. Cells were immobilized on foams by placing them in a growth environment, and fully bacterially activated polyurethane foams (BAPUFs) were prepared by serial subculturing in batches with partially bacterially activated foam (pBAPUFs). The dependence of the cell immobilization process on foam density, and the effects of pH and BAPUF loading on ferrous oxidation, were studied to choose operating parameters for continuous operation. With the objective of maintaining high cell densities in both the foam and the liquid phase, pretreated foams of density 50 kg/m3 were preferred as the cell support, with ferrous oxidation at pH 1.5 to moderate ferric precipitation. A novel basket-type bioreactor for continuous ferrous iron oxidation, which combines the features of a stirred tank with recirculation, was designed and operated. The results were compared with those of free-cell and sheet-type foam-immobilized reactors. A fivefold increase in ferric iron productivity, at 33.02 g/h/L of free volume in foam, was achieved using the basket-type bioreactor compared to a free-cell continuous system. A mathematical model for ferrous iron oxidation by Acidithiobacillus ferrooxidans cells immobilized on polyurethane foam was developed, with cell growth in the foam accounted for by an effectiveness factor. The basic simulation parameters were estimated using experimental data on free-cell growth as well as on cell attachment to foam under nongrowing conditions. The model predicted the progress of ferrous oxidation in shake flasks by both pBAPUFs and fully activated BAPUFs for different cell loadings in the foam. The model for the stirred-tank basket bioreactor predicted both the transient and steady-state behavior of the experiments to within 5% at the simulated dilution rates. Bio-oxidation at high Fe2+ concentrations was also reproduced in simulations when substrate and product inhibition coefficients were factored into the cell growth kinetics.
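The substrate and product inhibition terms mentioned at the end of the abstract can take several standard forms; as one common choice, this sketch combines an Andrews-type substrate-inhibition term with a Levenspiel-type product-inhibition term. All parameter values are invented placeholders, not the paper's fitted coefficients.

```python
def mu(S, P, mu_max=0.1, Ks=0.5, Ki=20.0, P_max=60.0):
    """Specific growth rate (1/h) at ferrous concentration S and ferric
    concentration P (g/L). Andrews term captures substrate inhibition at
    high S; Levenspiel term drives growth to zero as P approaches P_max."""
    substrate = S / (Ks + S + S * S / Ki)    # Andrews substrate inhibition
    product = max(0.0, 1.0 - P / P_max)      # Levenspiel product inhibition
    return mu_max * substrate * product
```

With such a law, the growth rate peaks at a moderate Fe2+ concentration and declines at higher concentrations, which is the behavior needed to match high-substrate bio-oxidation data.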
Abstract:
We present four new reinforcement learning algorithms based on actor-critic, natural-gradient, and function-approximation ideas, and we provide their convergence proofs. Actor-critic reinforcement learning methods are online approximations to policy iteration in which the value-function parameters are estimated using temporal difference learning and the policy parameters are updated by stochastic gradient descent. Methods based on policy gradients in this way are of special interest because of their compatibility with function-approximation methods, which are needed to handle large or infinite state spaces. The use of temporal difference learning in this way is of special interest because in many applications it dramatically reduces the variance of the gradient estimates. The use of the natural gradient is of interest because it can produce better conditioned parameterizations and has been shown to further reduce variance in some cases. Our results extend prior two-timescale convergence results for actor-critic methods by Konda and Tsitsiklis by using temporal difference learning in the actor and by incorporating natural gradients. Our results extend prior empirical studies of natural actor-critic methods by Peters, Vijayakumar and Schaal by providing the first convergence proofs and the first fully incremental algorithms.
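The basic actor-critic loop the abstract describes, a TD-learning critic driving a policy-gradient actor, can be illustrated on a toy two-state chain. This is a minimal generic sketch, not any of the paper's four algorithms, and it omits the natural-gradient preconditioning; dynamics, rewards, and step sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions = 2, 2
v = np.zeros(n_states)                    # critic: state values
theta = np.zeros((n_states, n_actions))   # actor: action preferences
alpha_c, alpha_a, gamma = 0.1, 0.01, 0.9  # critic/actor step sizes, discount

def policy(s):
    """Softmax policy over action preferences for state s."""
    p = np.exp(theta[s] - theta[s].max())
    return p / p.sum()

s = 0
for _ in range(2000):
    a = rng.choice(n_actions, p=policy(s))
    s_next = a                       # toy dynamics: action chooses next state
    r = 1.0 if s_next == 1 else 0.0  # state 1 pays reward 1
    delta = r + gamma * v[s_next] - v[s]   # TD error
    v[s] += alpha_c * delta                # critic: TD(0) update
    grad = -policy(s)
    grad[a] += 1.0                         # grad of log pi(a|s) for softmax
    theta[s] += alpha_a * delta * grad     # actor: policy-gradient step
    s = s_next
```

After training, the policy strongly prefers the rewarding action, showing how the TD error serves as a low-variance learning signal for both components.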
Abstract:
A mathematical model is developed to simulate oxygen consumption, heat generation, and cell growth in solid state fermentation (SSF). Fungal growth on the solid substrate particles results in an increase in the thickness of the cell film around the particles. The model incorporates this increase in biofilm size, which leads to a decrease in the porosity of the substrate bed and in the diffusivity of oxygen in the bed. The model also takes into account the effect of steric hindrance limitations in SSF. The growth of cells around a single particle, and the resulting expansion of the biofilm around the particle, is analyzed for simplified zero- and first-order oxygen consumption kinetics. Under conditions of zero-order kinetics, the model predicts an upper limit on cell density. Model simulations for a packed bed of solid particles in a tray bioreactor show distinct limitations on growth due to the simultaneous heat and mass transport phenomena accompanying the solid state fermentation process. The extent of the limitation due to heat and/or mass transport is analyzed during different stages of fermentation. It is expected that the model will lead to a better understanding of the transport processes in SSF and will therefore assist in the optimal design of bioreactors for SSF.
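The diffusion-reaction limitation in a growing biofilm is conventionally summarized by a Thiele modulus and effectiveness factor; as context for the first-order kinetics case, this sketch uses the textbook slab-geometry result. The film thickness, rate constant, and diffusivity below are illustrative, not values from the paper.

```python
import math

def thiele_modulus(L, k, D):
    """phi = L * sqrt(k / D) for biofilm thickness L (m), first-order
    oxygen-consumption rate constant k (1/s), and diffusivity D (m^2/s)."""
    return L * math.sqrt(k / D)

def effectiveness(phi):
    """eta = tanh(phi) / phi for a slab: the fraction of the film that is
    effectively supplied with oxygen. eta -> 1 for thin films (phi -> 0)
    and falls off as the film thickens."""
    return 1.0 if phi == 0 else math.tanh(phi) / phi
```

As the film grows, `phi` increases and `eta` drops, which is how thickening of the biofilm throttles oxygen-limited growth in such models.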
Abstract:
Although various strategies have been developed for scheduling parallel applications with independent tasks, very little work exists for scheduling tightly coupled parallel applications on cluster environments. In this paper, we compare four different strategies based on performance models of tightly coupled parallel applications for scheduling the applications on clusters. In addition to algorithms based on existing popular optimization techniques, we also propose a new algorithm called Box Elimination that searches the space of performance model parameters to determine the best schedule of machines. By means of real and simulation experiments, we evaluated the algorithms on single-cluster and multi-cluster setups. We show that our Box Elimination algorithm generates schedules that are up to 80% more efficient than those of the other algorithms. We also show that the execution times of the schedules produced by our algorithm are more robust against performance modeling errors.
Abstract:
This article analyzes the effect, on the design of composite structures, of devising a new failure envelope by combining the most commonly used failure criteria for composite laminates. The failure criteria considered for the study are the maximum stress and Tsai-Wu criteria. In addition to these popular phenomenological failure criteria, a micromechanics-based criterion called the failure mechanism-based failure criterion is also considered. The failure envelopes obtained with these failure criteria are superimposed over one another, and a new failure envelope is constructed from the lowest absolute values of the strengths predicted by them. The new failure envelope so obtained is therefore termed the most conservative failure envelope. A minimum weight design of composite laminates is performed using genetic algorithms. In addition, the effect of stacking sequence on the minimum weight of the laminate is also studied. Results are compared for the different failure envelopes, and the conservative design is evaluated with respect to the designs obtained by using only one failure criterion. The design approach is recommended for structures where composites are the key load-carrying members, such as helicopter rotor blades.
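Constructing the most conservative envelope from the superimposed criteria amounts to taking the pointwise minimum of the strengths each criterion predicts. This sketch shows that step on invented strength values at a few load directions; the numbers are placeholders, not data from the article.

```python
import numpy as np

# Hypothetical predicted strengths (MPa) at 5 load directions, one row per
# criterion: maximum stress, Tsai-Wu, and the failure mechanism-based one.
max_stress = np.array([900.0, 650.0, 400.0, 650.0, 900.0])
tsai_wu    = np.array([850.0, 700.0, 380.0, 620.0, 880.0])
fmbfc      = np.array([870.0, 640.0, 420.0, 600.0, 910.0])

# Most conservative envelope: the lowest predicted strength at each direction.
conservative = np.minimum.reduce([max_stress, tsai_wu, fmbfc])
```

Because the resulting envelope is bounded above by every individual criterion, any laminate feasible under it is feasible under each criterion separately, which is what makes the design conservative.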
Abstract:
A considerable amount of work has been dedicated to the development of analytical solutions for the flow of chemical contaminants through soils. Most of the analytical solutions for complex transport problems are closed-form series solutions. The convergence of these solutions depends on the eigenvalues obtained from a corresponding transcendental equation. The difficulty in obtaining exact solutions from analytical models thus encourages the use of numerical solutions for parameter estimation, even though the latter models are computationally expensive. In this paper, a combination of two swarm intelligence based algorithms is used for accurate estimation of design transport parameters from the closed-form analytical solutions. Estimation of the eigenvalues from a transcendental equation is treated as a multimodal discontinuous function optimization problem. The eigenvalues are estimated using an algorithm based on the glowworm swarm strategy. Parameter estimation for the inverse problem is handled using a standard PSO algorithm. Integration of these two algorithms enables accurate estimation of design parameters from closed-form analytical solutions. The solver is applied to a real-world inverse problem in environmental engineering. The inverse model based on swarm intelligence techniques is validated, and its accuracy in parameter estimation is shown. The proposed solver quickly estimates the design parameters with great precision.
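The paper pairs a glowworm-swarm search with PSO; as a much simpler stand-in that shows what "finding all eigenvalues of a transcendental equation" means, this sketch scans for sign changes and bisects. The equation tan(b) = b, written as f(b) = sin(b) - b cos(b), is a typical illustrative example, not the paper's equation.

```python
import math

def f(b):
    """tan(b) = b rewritten in a form with no singularities."""
    return math.sin(b) - b * math.cos(b)

def eigenvalues(b_max, step=0.01, tol=1e-10):
    """Find the positive roots of f below b_max by scanning for sign
    changes and refining each bracket with bisection."""
    roots, prev = [], step      # start just past the trivial root at b = 0
    b = prev + step
    while b < b_max:
        if f(prev) * f(b) < 0:              # sign change: a root is bracketed
            lo, hi = prev, b
            while hi - lo > tol:             # bisection refinement
                mid = 0.5 * (lo + hi)
                if f(lo) * f(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
        prev, b = b, b + step
    return roots
```

A fixed-step scan can miss closely spaced roots, which is exactly why a multimodal swarm search such as the glowworm strategy is attractive for harder transcendental equations.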
Abstract:
A compact model for the noise margin (NM) of single-electron transistor (SET) logic is developed as a function of the device capacitances and the background charge (ζ). The noise margin is then used as a metric to evaluate the robustness of SET logic against background charge, temperature, and variation of the SET gate and tunnel junction capacitances (CG and CT). It is shown that choosing α = CT/CG = 1/3 maximizes the NM. An estimate of the maximum tolerable ζ is shown to be equal to ±0.03e. Finally, the effect of mismatch in device parameters on the NM is studied through exhaustive simulations, which indicate that α ∈ [0.3, 0.4] provides maximum robustness. It is also observed that mismatch can have a significant impact on static power dissipation.
Abstract:
A common trick for designing faster quantum adiabatic algorithms is to apply the adiabaticity condition locally at every instant. However, it is often difficult to determine the instantaneous gap between the lowest two eigenvalues, which is an essential ingredient in the adiabaticity condition. In this paper we present a simple linear algebraic technique for obtaining a lower bound on the instantaneous gap even in such a situation. As an illustration, we investigate the adiabatic unordered search of van Dam et al. [17] and Roland and Cerf [15] when the non-zero entries of the diagonal final Hamiltonian are perturbed by a polynomial amount (in log N, where N is the length of the unordered list). We use our technique to derive a bound on the running time of a local adiabatic schedule in terms of the minimum gap between the lowest two eigenvalues.
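For the unperturbed adiabatic unordered search, the instantaneous gap has the well-known closed form g(s) = sqrt(1 - 4(1 - 1/N)s(1 - s)), and the running time of a local schedule scales like the integral of 1/g(s)² over the interpolation parameter. This sketch evaluates that integral numerically to exhibit the O(√N) scaling; it illustrates the standard unperturbed case, not the paper's perturbed-Hamiltonian bound.

```python
import math

def gap(s, N):
    """Instantaneous gap of the adiabatic Grover Hamiltonian at schedule
    parameter s in [0, 1] for an unordered list of length N."""
    return math.sqrt(1.0 - 4.0 * (1.0 - 1.0 / N) * s * (1.0 - s))

def local_runtime(N, steps=200000):
    """Trapezoidal estimate of the integral of 1 / gap(s)^2 over [0, 1],
    which governs the running time of the local adiabatic schedule
    (up to constants); it grows like sqrt(N) rather than N."""
    h = 1.0 / steps
    total = 0.5 * (1.0 / gap(0.0, N) ** 2 + 1.0 / gap(1.0, N) ** 2)
    for i in range(1, steps):
        total += 1.0 / gap(i * h, N) ** 2
    return total * h
```

The integral evaluates to roughly (π/2)√N for large N, so quadrupling N about doubles the running time, the square-root speedup the local schedule recovers.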
Abstract:
Magnetorheological (MR) dampers are intrinsically nonlinear devices, which makes the modeling and design of a suitable control algorithm an interesting and challenging task. To evaluate the potential of MR dampers in control applications and to take full advantage of their unique features, a mathematical model that accurately reproduces their dynamic behavior has to be developed, and then a proper control strategy has to be adopted that is implementable and can fully utilize their capabilities as semi-active control devices. The present paper focuses on both aspects. First, the paper reports the testing of a magnetorheological damper with a universal testing machine for a set of frequencies, amplitudes, and currents. A modified Bouc-Wen model considering the amplitude and input-current dependence of the damper parameters has been proposed. It is shown that the damper response can be satisfactorily predicted with this model. Second, a backstepping-based nonlinear current-monitoring scheme for magnetorheological dampers, for semi-active control of structures under earthquakes, has been developed. It provides stable nonlinear monitoring of the MR damper current directly based on system feedback, such that the change in the damper current is gradual. Unlike other MR damper control techniques available in the literature, the main advantage of the proposed technique lies in its prediction of the current input directly from system feedback and its smooth update of the input current. Furthermore, while developing the proposed semi-active algorithm, the dynamics of the supplied and commanded current to the damper have been considered. The efficiency of the proposed technique is demonstrated on a base-isolated three-story building under a set of seismic excitations. A comparison with the widely used clipped-optimal strategy is also presented.
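The amplitude- and current-dependent parameters are the paper's modification; as background, this sketches the standard Bouc-Wen damper model it builds on, F = c0·v + k0·x + α·z with the usual evolution equation for the hysteretic variable z, integrated by forward Euler. All parameter values and the sinusoidal test input are invented for illustration.

```python
import math

def bouc_wen_force(ts, xs, c0=50.0, k0=25.0, alpha=900.0,
                   beta=3.0, gamma=3.0, A=120.0, n=2):
    """Damper force history for displacement samples xs at times ts.
    Integrates dz/dt = A*v - beta*|v|*|z|^(n-1)*z - gamma*v*|z|^n with
    forward Euler, where v is the (backward-difference) velocity."""
    z, forces = 0.0, []
    for i in range(len(ts)):
        dt = ts[i] - ts[i - 1] if i else 0.0
        v = (xs[i] - xs[i - 1]) / dt if i else 0.0
        z += dt * (A * v - beta * abs(v) * abs(z) ** (n - 1) * z
                   - gamma * v * abs(z) ** n)
        forces.append(c0 * v + k0 * xs[i] + alpha * z)
    return forces

# Sinusoidal displacement test: 1 Hz, 5 mm amplitude, 1 ms time step.
ts = [i * 0.001 for i in range(2001)]
xs = [0.005 * math.sin(2.0 * math.pi * t) for t in ts]
F = bouc_wen_force(ts, xs)
```

Plotting `F` against `xs` (or against velocity) traces the hysteresis loops characteristic of MR dampers; the modified model in the paper additionally lets `c0`, `alpha`, etc. vary with excitation amplitude and coil current.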