132 results for estimated average requirement
Abstract:
This work surveys the average-cost control problem for discrete-time Markov processes, offering a comprehensive account of the considerable research on this problem over the past three decades. The exposition ranges from finite to Borel state and action spaces and covers a variety of methodologies for finding and characterizing optimal policies. The authors include a brief historical perspective on research in this area, compile a substantial though not exhaustive bibliography, and identify several important questions that remain open to investigation.
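For finite models, one classical methodology in this area is relative value iteration. The sketch below, assuming a toy two-state, two-action chain with made-up transition probabilities and costs, iterates the average-cost Bellman operator and normalizes at a reference state so the gain and bias can be read off; it illustrates the criterion only, not any particular result of the survey.

```python
# Relative value iteration on a toy average-cost MDP (all numbers assumed).
import numpy as np

# P[a][s, s'] = transition probability under action a; c[a][s] = one-step cost
P = [np.array([[0.9, 0.1], [0.2, 0.8]]),
     np.array([[0.5, 0.5], [0.7, 0.3]])]
c = [np.array([1.0, 3.0]), np.array([2.0, 0.5])]

h = np.zeros(2)                     # relative value (bias) function
for _ in range(10_000):
    # average-cost Bellman update: min over actions of c + P h
    Th = np.min([c[a] + P[a] @ h for a in range(2)], axis=0)
    g = Th[0]                       # gain estimate, normalized at state 0
    h_new = Th - g                  # subtract gain so h stays bounded
    if np.max(np.abs(h_new - h)) < 1e-10:
        break
    h = h_new

policy = np.argmin([c[a] + P[a] @ h for a in range(2)], axis=0)
print("estimated optimal average cost:", g, "policy:", policy)
```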
Abstract:
A new feature-based technique is introduced to solve the nonlinear forward problem (FP) of electrical capacitance tomography, with the target application of monitoring the metal fill profile in the lost foam casting process. The technique combines a linear solution to the FP with a correction factor (CF). The CF is estimated using an artificial neural network (ANN) and adjusts the linear solution of the FP to account for the nonlinear effects caused by the shielding effect of the metal. The approach shows promising results and avoids the curse of dimensionality by training the ANN on features rather than on the actual metal distribution: nine features extracted from the metal distributions serve as the network input. The expected sensor readings are generated using ANSYS software. The performance of the ANN on the training and testing data was satisfactory, with an average root-mean-square error of 2.2%.
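A minimal sketch of the correction-factor idea follows, assuming a random sensitivity matrix, synthetic nine-dimensional feature vectors and a synthetic CF target; none of these stand in for the paper's actual ECT model or training data. It shows the two-stage structure only: a linear forward solution scaled by an ANN-predicted correction factor.

```python
# Linear FP solution adjusted by an ANN-predicted correction factor (sketch).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_pixels, n_meas, n_samples = 64, 28, 500

S = rng.random((n_meas, n_pixels))            # assumed sensitivity matrix
features = rng.random((n_samples, 9))         # 9 features per metal distribution
cf_true = 1.0 + 0.5 * features.sum(axis=1)    # synthetic CF training target

ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
ann.fit(features, cf_true)                    # learn the feature -> CF mapping

def forward(metal, feats):
    """Nonlinear FP estimate = linear solution scaled by the predicted CF."""
    c_linear = S @ metal                      # linear forward problem
    cf = ann.predict(feats.reshape(1, -1))[0] # ANN correction factor
    return cf * c_linear

print(forward(rng.random(n_pixels), features[0]).shape)
```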
Abstract:
In the Himalayas, a large area is covered by glaciers and seasonal snow, and changes in their extent can influence the availability of water in the Himalayan rivers. In this paper, changes in glacial extent, glacial mass balance and seasonal snow cover are discussed. Glacial retreat since 1962 was estimated for 1868 glaciers in 11 basins distributed across the Indian Himalaya. The investigation has shown an overall reduction in glacier area from 6332 to 5329 km² between 1962 and 2001/2, an overall deglaciation of 16%. The snow line at the end of the ablation season on the Chhota Shigri glacier, observed using field and satellite methods, suggests a rise in altitude from 4900 to 5200 m between the late 1970s and the present. Seasonal snow cover was monitored in 28 river sub-basins of the Central and Western Himalaya using the normalized difference snow index (NDSI) technique. The investigation has shown that substantial snow retreat occurs in the early part of winter, i.e. from October to December. For many basins located at lower altitudes and south of the Pir Panjal range, snow ablation was observed throughout the winter season. In addition, the average stream runoff of the Baspa basin for the month of December increased by 75%. This combination of glacial retreat, negative mass balance, early melting of seasonal snow cover and winter-time increases in stream runoff might suggest an influence of global warming on the Himalayan cryosphere.
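The NDSI itself is a simple band ratio, NDSI = (green − SWIR) / (green + SWIR), with snow commonly flagged above a threshold near 0.4. The sketch below applies it to synthetic reflectance arrays; the band data and the exact threshold used in the study are assumptions here.

```python
# Snow mapping with the normalized difference snow index (sketch).
import numpy as np

green = np.random.rand(256, 256)                # green-band reflectance (synthetic)
swir = np.random.rand(256, 256)                 # SWIR-band reflectance (synthetic)

ndsi = (green - swir) / (green + swir + 1e-12)  # band ratio; avoid division by zero
snow_mask = ndsi > 0.4                          # common snow threshold (assumed here)
print(f"snow-covered fraction: {snow_mask.mean():.1%}")
```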
Abstract:
We propose a novel formulation of points-to analysis as a system of linear equations. With this, the efficiency of points-to analysis can be improved significantly by leveraging advances in solution procedures for systems of linear equations. Such a formulation is non-trivial, however: it must handle multiple pointer indirections, address-of operators and multiple assignments to the same variable, and the problem is exacerbated by the need to keep the transformed equations linear. Despite these challenges, we successfully model all the pointer operations, proposing a novel inclusion-based, context-sensitive points-to analysis algorithm based on prime factorization. Experimental evaluation on the SPEC 2000 benchmarks and two large open-source programs reveals that our approach is competitive with state-of-the-art algorithms. With an average memory requirement of a mere 21 MB, our context-sensitive points-to analysis algorithm analyzes each benchmark in 55 seconds on average.
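As a toy illustration of how prime factorization can encode set information (the paper's linear-equation formulation and context sensitivity are considerably more involved, and this is not the authors' algorithm), one can assign each abstract location a distinct prime so that a points-to set becomes the product of its members' primes, union becomes an lcm, and membership becomes a divisibility test:

```python
# Toy prime-factorization encoding of points-to sets (illustrative only).
from math import gcd

primes = {"a": 2, "b": 3, "c": 5}   # assumed abstract memory locations

def singleton(loc):                  # encoding of the set {loc}
    return primes[loc]

def union(s1, s2):                   # set union = lcm of the encodings
    return s1 * s2 // gcd(s1, s2)

def contains(s, loc):                # membership test = divisibility
    return s % primes[loc] == 0

p = union(singleton("a"), singleton("b"))   # p may point to a or b
q = union(p, singleton("c"))                # q = p union {c}
print(contains(q, "a"), contains(q, "b"), contains(p, "c"))  # True True False
```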
Abstract:
A large external memory bandwidth requirement leads to increased system power dissipation and cost in video coding applications. The majority of external memory traffic in a video encoder is due to reference data accesses. We describe a lossy reference frame compression technique that can be used in video coding with minimal impact on quality while significantly reducing power and bandwidth requirements. The low-cost, transformless compression technique uses a lossy reference for motion estimation, to reduce memory traffic, and a lossless reference for motion compensation (MC), to avoid drift; it is therefore compatible with all existing video standards. We calculate the quantization error bound and show that by storing the quantization error separately, the bandwidth overhead due to MC can be reduced significantly. The technique meets key requirements specific to the video encoding application. Reductions of 24-39% in peak bandwidth and 23-31% in total average power consumption are observed for IBBP sequences.
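A minimal sketch of the lossy-reference idea, assuming an 8-bit frame and a quantization step of 8: motion estimation reads only the coarsely quantized copy, while the bounded quantization error is stored separately so motion compensation can reconstruct the exact reference and avoid drift.

```python
# Lossy reference for motion estimation, lossless reconstruction for MC (sketch).
import numpy as np

frame = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # reference frame
step = 8                                                     # assumed quantization step

lossy = (frame // step) * step       # cheap transformless compression
error = frame - lossy                # bounded residual: 0 <= error < step

# Motion estimation reads only the lossy copy (lower bandwidth);
# motion compensation adds the stored error back, so there is no drift.
reconstructed = lossy + error
assert np.array_equal(reconstructed, frame)
print("max quantization error:", error.max(), "< bound", step)
```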
Abstract:
Statistically averaged lattices provide a common basis for understanding the diffraction properties of structures that deviate from regular crystal structures. An average lattice is defined, and examples are given in one and two dimensions along with their diffraction patterns. The absence of periodicity in reciprocal space for aperiodic structures is shown to arise from projected spacings that are irrationally related when the grid points are projected along the chosen coordinate axes. The projected length scales are shown to be more important than the sequence of arrangement in determining the existence or absence of observable periodicity in the diffraction pattern.
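A minimal sketch of the one-dimensional case, assuming a Fibonacci chain whose two spacings are in the irrational golden ratio: evaluating the structure factor |Σ_j exp(−ikx_j)|² on a wavevector grid shows where strong reflections appear despite the absence of lattice periodicity. The chain length and k-grid are illustrative choices, not the paper's examples.

```python
# Structure factor of a 1-D Fibonacci chain (sketch).
import numpy as np

phi = (1 + 5 ** 0.5) / 2                      # golden ratio
seq = "A"
while len(seq) < 500:                         # substitution rule A -> AB, B -> A
    seq = "".join("AB" if s == "A" else "A" for s in seq)

spacings = np.array([phi if s == "A" else 1.0 for s in seq[:500]])
x = np.concatenate(([0.0], np.cumsum(spacings)))   # atom positions

k = np.linspace(0.1, 20, 4000)                     # wavevector grid
S = np.abs(np.exp(-1j * np.outer(k, x)).sum(axis=1)) ** 2 / len(x)

print("strongest reflections near k =", np.sort(k[np.argsort(S)[-3:]]))
```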
Abstract:
The problem of estimating the time-dependent statistical characteristics of a random dynamical system is studied under two different settings. In the first, the system dynamics is governed by a differential equation parameterized by a random parameter, while in the second it is governed by a differential equation with an underlying parameter sequence characterized by a continuous-time Markov chain. We propose, for the first time in the literature, stochastic approximation algorithms for estimating various time-dependent process characteristics of the system. In particular, we provide efficient estimators for quantities such as the mean, variance and distribution of the process at any given time, as well as the joint distribution and the autocorrelation coefficient at different times. A novel aspect of our approach is that we assume that information on the parameter model (i.e., its distribution in the first case and the transition probabilities of the Markov chain in the second) is not available in either case. This is unlike most other work in the literature, which assumes the availability of such information. Moreover, most prior work is geared towards analyzing the steady-state behavior of the random dynamical system, while our focus is on the time-dependent statistical characteristics, which are in general difficult to obtain. We prove the almost sure convergence of our stochastic approximation scheme in each case to the true value of the quantity being estimated. We provide a general class of strongly consistent estimators for the aforementioned statistical quantities, with regular sample-average estimators being a specific instance. We also present an application of the proposed scheme to a widely used model in population biology. Numerical experiments in this framework show that the time-dependent process characteristics obtained using our algorithm in each case exhibit excellent agreement with exact results.
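A minimal sketch of such an estimator for the simplest quantity, the mean at a fixed time t, assuming a scalar linear system dx/dt = −θx with θ drawn from a uniform law: the estimator only sees simulated sample paths, never the parameter distribution itself, and with step sizes a_n = 1/n it reduces to the sample-average special case mentioned above. The model and step sizes are illustrative assumptions.

```python
# Stochastic-approximation estimate of E[x(t)] for a random dynamical system.
import numpy as np

rng = np.random.default_rng(1)
t, x0 = 1.0, 1.0

mean_est = 0.0
for n in range(1, 20_001):
    theta = rng.uniform(0.5, 1.5)        # unobserved random parameter
    x_t = x0 * np.exp(-theta * t)        # sample-path value at time t
    a_n = 1.0 / n                        # decreasing step size
    mean_est += a_n * (x_t - mean_est)   # SA update toward E[x(t)]

# Closed form of E[exp(-theta*t)] for theta ~ U(0.5, 1.5), for comparison.
exact = x0 * (np.exp(-0.5 * t) - np.exp(-1.5 * t)) / ((1.5 - 0.5) * t)
print(f"SA estimate {mean_est:.4f} vs exact {exact:.4f}")
```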
Abstract:
A swarm is a temporary structure formed when several thousand honey bees leave their hive and settle on some object, such as the branch of a tree. They remain in this position until a suitable site for a new home is located by the scout bees. A continuum model based on heat conduction and heat generation is used to predict temperature profiles in swarms. Since internal convection is neglected, the model is applicable only at low values of the ambient temperature T_a. Guided by the experimental observations of Heinrich (1981a-c, J. Exp. Biol. 91, 25-55; Science 212, 565-566; Sci. Am. 244, 147-160), the analysis is carried out mainly for non-spherical swarms. The effective thermal conductivity is estimated using the data of Heinrich (1981a, J. Exp. Biol. 91, 25-55) for dead bees. For T_a = 5 and 9 °C, results based on a modified version of the heat generation function due to Southwick (1991, The Behaviour and Physiology of Bees, pp. 28-47, C.A.B. International, London) are in reasonable agreement with measurements. Results obtained with the heat generation function of Myerscough (1993, J. Theor. Biol. 162, 381-393) are qualitatively similar to those obtained with Southwick's function, but the error is larger in the former case. The results suggest that the bees near the periphery generate more heat than those near the core, in accord with the conjecture of Heinrich (1981c, Sci. Am. 244, 147-160). On the other hand, for T_a = 5 °C, the heat generation function of Omholt and Lonvik (1986, J. Theor. Biol. 120, 447-456) leads to a trivial steady state where the entire swarm is at the ambient temperature. An acceptable heat generation function must therefore result in a steady state that is both non-trivial and stable with respect to small perturbations; Omholt and Lonvik's function satisfies the first requirement, but not the second. For T_a = 15 °C, there is a considerable difference between predicted and measured values, probably due to the neglect of internal convection in the model.
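For orientation, a minimal sketch of the conduction-with-generation picture in the simplest setting, a sphere with uniform volumetric heat generation q and its surface held at the ambient temperature, where the steady profile is T(r) = T_a + q(R² − r²)/(6k). The paper's non-spherical geometry and temperature-dependent generation functions are not captured here, and all numbers below are illustrative assumptions.

```python
# Steady conduction with uniform heat generation in a spherical swarm (sketch).
import numpy as np

k = 0.1      # assumed effective thermal conductivity, W/(m*K)
q = 300.0    # assumed volumetric heat generation, W/m^3
R = 0.1      # assumed swarm radius, m
T_a = 5.0    # ambient temperature taken as the surface temperature, deg C

r = np.linspace(0.0, R, 6)
T = T_a + q * (R**2 - r**2) / (6.0 * k)   # core is warmest, surface at T_a

for ri, Ti in zip(r, T):
    print(f"r = {ri:.3f} m  ->  T = {Ti:.1f} C")
```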
Abstract:
The protective ability of cytotoxic T cells (CTL) raised in vitro against Japanese encephalitis virus (JEV) was examined by adoptive transfer experiments. Adoptive transfer of anti-JEV effectors by the intracerebral (i.c.) route, but not by the intraperitoneal (i.p.) or intravenous (i.v.) routes, protected adult BALB/c mice against lethal i.c. JEV challenge. In contrast to adult mice, adoptive transfer of anti-JEV effectors into newborn (4-day-old) and suckling (8-14-day-old) mice did not confer protection, although virus-induced death was delayed in suckling mice compared to newborn mice upon transfer. The specific reasons for the lack of protection in newborn mice are not clear, but the virus load was found to be higher in the brains of newborn mice than in those of adults, and virus clearance upon adoptive transfer was observed only in adult brains. Specific depletion of Lyt 2.2+, L3T4+ or Thy-1+ T cell populations before adoptive transfer abrogated the protective ability of the transferred effectors. However, when Lyt 2.2+ cell-depleted and L3T4+ cell-depleted effectors were mixed and transferred into adult mice, the protective activity was retained, demonstrating that both Lyt 2.2+ and L3T4+ T cells are necessary to confer protection. Although the presence of L3T4+ T cells in the adoptively transferred effector populations enhanced virus-specific serum neutralizing antibodies, neutralizing antibodies alone, without Lyt 2.2+ cells, were not sufficient to confer protection.
Abstract:
The subsurface microhardness mapping technique of Chaudhri was utilized to determine the shape, size and distribution of plastic strain underneath conical indenters of varying semi-apex angle α (55°, 65° and 75°). Results show that the elastic-plastic boundary under the indenters is elliptical, contradicting the expanding cavity model, and that the ellipticity increases with α. The maximum plastic strain immediately under the indenter was found to decrease with increasing α. Complementary finite-element analysis was conducted to examine the ability of simulations to capture the experimental observations. A comparison of computational and experimental results indicates that neither the plastic strain distributions nor the maximum strains immediately beneath the indenter match, suggesting that the simulation of sharp indentation requires further detailed study. Representative strains ε_r, evaluated as the volume-average strains within the elastic-plastic boundary, decrease with increasing α and agree with those estimated using dimensional analysis.
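A minimal sketch of the representative-strain computation, assuming a synthetic equivalent-strain field and an elliptical elastic-plastic boundary, with a 2-D area average standing in for the volume integral ε_r = (1/V)∫ε dV; the field, semi-axes and grid are illustrative, not the paper's data.

```python
# Volume(area)-average strain over an elliptical elastic-plastic zone (sketch).
import numpy as np

ny, nx = 200, 200
y, x = np.mgrid[0:ny, 0:nx] * 0.01         # depth and radial coordinates, mm
a, b = 1.2, 0.8                             # assumed semi-axes of the boundary

inside = (x / a) ** 2 + (y / b) ** 2 <= 1.0           # elastic-plastic zone
strain = 0.3 * np.exp(-((x / a) ** 2 + (y / b) ** 2)) # synthetic strain field

eps_r = strain[inside].mean()               # representative (average) strain
print(f"representative strain eps_r = {eps_r:.3f}")
```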
Abstract:
Simulation is an important means of evaluating new microarchitectures. With the advent of multi-core (CMP) platforms, simulators are becoming larger and more complex. At the same time, the availability of CMPs with larger caches and higher operating frequencies has made the wall-clock time required to simulate an application comparatively shorter. Reducing this simulation time further is a great challenge, especially for multi-threaded workloads, because of the nondeterminism introduced by simultaneously executing threads. In this paper, we propose a technique for speeding up multi-core simulation. The models of the processor core and cache are replaced with functional models to achieve speedup. A timed Petri net model is used to estimate the execution time of the processor, and the memory access latencies are estimated using hit/miss information obtained from the functional model of the cache. This model can be used to predict the performance of data-parallel applications or multiprogramming workloads on CMP platforms with various cache hierarchies and a shared bus interconnect. The error in estimating the execution time of an application is within 6%. The speedup achieved over the cycle-accurate simulator averages between 2x and 4x.
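A minimal sketch of the latency-estimation half of such a scheme, assuming a direct-mapped cache geometry, fixed hit/miss latencies and a synthetic address trace: the functional cache model only classifies accesses, and the timing estimate charges each class a fixed cost. The paper's timed Petri net core model is not reproduced here.

```python
# Functional cache model driving a simple memory-latency estimate (sketch).

LINES, LINE_SIZE = 64, 32          # assumed direct-mapped cache geometry
T_HIT, T_MISS = 1, 100             # assumed latencies in cycles

tags = [None] * LINES
hits = misses = 0
trace = [i * 8 for i in range(2000)] * 2   # synthetic address trace, replayed twice

for addr in trace:
    block = addr // LINE_SIZE
    idx, tag = block % LINES, block // LINES
    if tags[idx] == tag:
        hits += 1                           # functional model: hit
    else:
        misses += 1                         # functional model: miss
        tags[idx] = tag                     # fill the line on a miss

cycles = hits * T_HIT + misses * T_MISS     # memory portion of the time estimate
print(f"hits={hits} misses={misses} estimated memory cycles={cycles}")
```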
Abstract:
Large-grain synchronous dataflow graphs, or multi-rate graphs, have the distinct feature that the nodes of the dataflow graph fire at different rates. Such multi-rate large-grain dataflow graphs have been widely regarded as a powerful programming model for DSP applications. In this paper we propose a method to minimize the buffer storage requirement in constructing rate-optimal compile-time (MBRO) schedules for multi-rate dataflow graphs. We demonstrate that the constraints for minimizing buffer storage while executing at the optimal computation rate (i.e. the maximum possible computation rate without storage constraints) can be formulated as a unified linear programming problem in our framework. A novel feature of our method is that, in constructing the rate-optimal schedule, it directly minimizes the memory requirement by choosing the schedule times of nodes appropriately. Lastly, a new circular-arc interval graph coloring algorithm is proposed to further reduce the memory requirement by allowing buffer sharing among the arcs of the multi-rate dataflow graph. We have constructed an experimental testbed which implements our MBRO scheduling algorithm as well as (i) the widely used periodic admissible parallel schedules (also known as block schedules) proposed by Lee and Messerschmitt (IEEE Transactions on Computers, vol. 36, no. 1, 1987, pp. 24-35), (ii) the optimal scheduling buffer allocation (OSBA) algorithm of Ning and Gao (Conference Record of the Twentieth Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, Charleston, SC, Jan. 10-13, 1993, pp. 29-42), and (iii) the multi-rate software pipelining (MRSP) algorithm (Govindarajan and Gao, in Proceedings of the 1993 International Conference on Application Specific Array Processors, Venice, Italy, Oct. 25-27, 1993, pp. 77-88). Schedules generated for a number of random dataflow graphs and for a set of DSP application programs using the different scheduling methods are compared. The experimental results demonstrate a significant improvement (10-20%) in buffer requirements for the MBRO schedules compared to the schedules generated by the other three methods, without sacrificing the computation rate. The MBRO method also gives a 20% average improvement in computation rate compared to Lee's block scheduling method.
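Buffer sharing of the kind used in the last step can be illustrated with plain interval-graph coloring, assuming the buffer lifetimes below: a greedy scan over lifetimes sorted by start time reuses freed buffers. The paper's algorithm additionally handles the circular-arc (wrap-around) lifetimes that arise in periodic schedules, which this sketch does not.

```python
# Greedy interval-graph coloring for buffer sharing among arc lifetimes (sketch).
import heapq

# (start, end) schedule-time lifetime of each arc's buffer (assumed values)
lifetimes = [(0, 4), (1, 3), (2, 6), (5, 8), (4, 7)]

lifetimes.sort()                      # scan in order of start time
free, assignment = [], {}
next_buffer = 0
active = []                           # min-heap of (end time, buffer id)

for start, end in lifetimes:
    while active and active[0][0] <= start:
        _, b = heapq.heappop(active)  # lifetime ended: recycle its buffer
        heapq.heappush(free, b)
    buf = heapq.heappop(free) if free else next_buffer
    if buf == next_buffer:
        next_buffer += 1              # no free buffer: allocate a new one
    assignment[(start, end)] = buf
    heapq.heappush(active, (end, buf))

print(assignment, "buffers used:", next_buffer)
```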
Abstract:
Isothermal sections of the phase diagrams for the systems Ln-Pd-O (Ln = lanthanide element) at 1223 K indicate the presence of two inter-oxide compounds, Ln4PdO7 and Ln2Pd2O5, for Ln = La, Pr, Nd, Sm; three compounds, Ln4PdO7, Ln2PdO4 and Ln2Pd2O5, for Ln = Eu, Gd; and only one compound, Ln2Pd2O5, for Ln = Tb to Ho. The lattice parameters of the compounds Ln4PdO7, Ln2PdO4 and Ln2Pd2O5 show systematic nonlinear variation with atomic number, and the unit cell volumes decrease with increasing atomic number. The standard Gibbs energies, enthalpies and entropies of formation of the ternary oxides from their component binary oxides (Ln2O3 and PdO) have been measured recently using an advanced version of the solid-state electrochemical cell. The Gibbs energies and enthalpies of formation become less negative with increasing atomic number of Ln. For all three compounds, the variation of the Gibbs energy and enthalpy of formation with atomic number is markedly non-linear. The decrease in stability with atomic number is most pronounced for Ln2Pd2O5, followed by Ln4PdO7 and Ln2PdO4. This is probably related to the repulsion between Pd2+ ions on opposite faces of O8 cubes in Ln2Pd2O5, and to the presence of Ln-filled O8 cubes that share three faces with each other in Ln4PdO7. The entropies of formation of all the ternary oxides from their component binary oxides are relatively small. Although the entropies of formation show some scatter, the average value for Ln = La, Pr, Nd is more negative than that for the other lanthanide elements. From this difference, an average value for the C-type to A-type structure transformation entropy of Ln2O3 is estimated as 0.87 J·mol⁻¹·K⁻¹. The standard Gibbs energies of formation of these ternary oxides from the elements at 1223 K are presented as a function of lanthanide atomic number. By invoking the Neumann-Kopp rule for heat capacity, thermodynamic properties of the inter-oxide compounds at 298.15 K are estimated.
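A minimal sketch of the Neumann-Kopp estimate invoked in the last step: the heat capacity of each ternary oxide is approximated by the stoichiometric sum of its component binary oxides' heat capacities. The Cp values below are placeholders, not measured data.

```python
# Neumann-Kopp heat-capacity estimates for the ternary oxides (sketch).
cp_ln2o3 = 115.0   # assumed Cp of Ln2O3 at 298.15 K, J/(mol*K)
cp_pdo = 43.0      # assumed Cp of PdO at 298.15 K, J/(mol*K)

cp_estimates = {
    "Ln4PdO7": 2 * cp_ln2o3 + 1 * cp_pdo,   # Ln4PdO7 = 2 Ln2O3 + PdO
    "Ln2PdO4": 1 * cp_ln2o3 + 1 * cp_pdo,   # Ln2PdO4 = Ln2O3 + PdO
    "Ln2Pd2O5": 1 * cp_ln2o3 + 2 * cp_pdo,  # Ln2Pd2O5 = Ln2O3 + 2 PdO
}
for compound, cp in cp_estimates.items():
    print(f"Cp({compound}) ~ {cp:.1f} J/(mol*K)")
```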
Abstract:
A conventional thyristor-based load commutated inverter (LCI)-fed wound field synchronous machine operates only above a minimum speed, which is necessary to develop enough back emf to ensure commutation. The drive is started and brought up to a speed of around 10-15% by a complex 'dc link current pulsing' technique. During this process the drive suffers from problems such as pulsating torque, insufficient average starting torque and long starting times. In this regard, a simple starting and low-speed operation scheme, employing an auxiliary low-power voltage source inverter (VSI) between the LCI and the machine terminals, is presented in this study. The drive is started and brought up to a low speed of around 15% using the VSI alone with field-oriented control. The complete control is then smoothly and dynamically transferred to the conventional LCI control. After the control transfer, the VSI is turned off and physically disconnected from the main circuit. The advantages of this scheme are smooth starting, complete control of torque and flux at starting and low speeds, a shorter starting time and stable operation. The voltage rating of the required VSI is very low, of the order of 10-15%, whereas its current rating depends on the starting torque requirement of the load. Experimental results from a 15.8 hp LCI-fed wound field synchronous machine are given to demonstrate the scheme.
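A minimal sketch of the mode-transfer logic, assuming a per-unit transfer threshold of 15% and abstracting both converters to labels: the VSI runs the machine from standstill, and once the speed crosses the threshold, control is handed to the LCI and stays there, mirroring the one-way disconnection of the VSI. The threshold and the controller stubs are assumptions, not the paper's implementation.

```python
# One-way VSI -> LCI control-transfer supervisor (sketch).

TRANSFER_SPEED = 0.15      # assumed per-unit speed at which control is handed over

def select_mode(speed_pu, mode):
    """Return the active converter for the current per-unit speed."""
    if mode == "VSI" and speed_pu >= TRANSFER_SPEED:
        return "LCI"        # smooth, dynamic transfer to conventional LCI control
    return mode             # once on LCI, the disconnected VSI never returns

mode = "VSI"                # start: auxiliary VSI with field-oriented control
for speed in [0.0, 0.05, 0.10, 0.16, 0.5, 1.0]:
    mode = select_mode(speed, mode)
    print(f"speed = {speed:.2f} pu -> {mode}")
```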