69 results for Mathematical Computations
at Indian Institute of Science - Bangalore - India
Abstract:
In today's API-rich world, programmer productivity depends heavily on the programmer's ability to discover the required APIs. In this paper, we present a technique and tool, called MATHFINDER, to discover APIs for mathematical computations by mining unit tests of API methods. Given a math expression, MATHFINDER synthesizes pseudo-code to compute the expression by mapping its subexpressions to API method calls. For each subexpression, MATHFINDER searches for a method such that there is a mapping between method inputs and variables of the subexpression. The subexpression, when evaluated on the test inputs of the method under this mapping, should produce results that match the method output on a large number of tests. We implemented MATHFINDER as an Eclipse plugin for discovery of third-party Java APIs and performed a user study to evaluate its effectiveness. In the study, the use of MATHFINDER resulted in a 2x improvement in programmer productivity. In 96% of the subexpressions queried for in the study, MATHFINDER retrieved the desired API methods as the top-most result. The top-most pseudo-code snippet to implement the entire expression was correct in 93% of the cases. Since the number of methods and unit tests to mine could be large in practice, we also implement MATHFINDER in a MapReduce framework and evaluate its scalability and response time.
Abstract:
Today's programming languages are supported by powerful third-party APIs. For a given application domain, it is common to have many competing APIs that provide similar functionality. Programmer productivity therefore depends heavily on the programmer's ability to discover suitable APIs both during an initial coding phase, as well as during software maintenance. The aim of this work is to support the discovery and migration of math APIs. Math APIs are at the heart of many application domains ranging from machine learning to scientific computations. Our approach, called MATHFINDER, combines executable specifications of mathematical computations with unit tests (operational specifications) of API methods. Given a math expression, MATHFINDER synthesizes pseudo-code comprised of API methods to compute the expression by mining unit tests of the API methods. We present a sequential version of our unit test mining algorithm and also design a more scalable data-parallel version. We perform extensive evaluation of MATHFINDER (1) for API discovery, where math algorithms are to be implemented from scratch and (2) for API migration, where client programs utilizing a math API are to be migrated to another API. We evaluated the precision and recall of MATHFINDER on a diverse collection of math expressions, culled from algorithms used in a wide range of application areas such as control systems and structural dynamics. In a user study to evaluate the productivity gains obtained by using MATHFINDER for API discovery, the programmers who used MATHFINDER finished their programming tasks twice as fast as their counterparts who used the usual techniques like web and code search, IDE code completion, and manual inspection of library documentation. For the problem of API migration, as a case study, we used MATHFINDER to migrate Weka, a popular machine learning library. 
Overall, our evaluation shows that MATHFINDER is easy to use, provides highly precise results across several math APIs and application domains even with a small number of unit tests per method, and scales to large collections of unit tests.
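The core matching step described above, evaluating a subexpression on a method's recorded unit-test inputs under some mapping of variables to inputs and comparing against the method's recorded outputs, can be sketched in a few lines. Everything here (the `matches` helper, the test tuples, the match threshold) is an illustration of the idea, not MATHFINDER's actual implementation.

```python
# Illustrative sketch of unit-test mining: a subexpression "matches" an API
# method if, under some mapping of its variables to the method's inputs,
# evaluating it on the method's test inputs reproduces the recorded outputs
# on a large fraction of tests. All names and thresholds are hypothetical.
from itertools import permutations

def matches(subexpr, tests, threshold=0.9):
    """subexpr: callable on positional args; tests: list of (inputs, output)."""
    arity = len(tests[0][0])
    best = 0.0
    for perm in permutations(range(arity)):          # try every input mapping
        hits = sum(
            1 for inputs, out in tests
            if abs(subexpr(*(inputs[i] for i in perm)) - out) < 1e-9
        )
        best = max(best, hits / len(tests))
    return best >= threshold

# Unit tests mined for a hypothetical API method computing a dot product:
dot_tests = [(([1.0, 2.0], [3.0, 4.0]), 11.0),
             (([0.0, 1.0], [5.0, 6.0]), 6.0)]
expr = lambda x, y: sum(a * b for a, b in zip(x, y))
print(matches(expr, dot_tests))  # the dot-product expression matches all tests
```

The data-parallel (MapReduce) variant mentioned in the abstracts would distribute the inner loop over (method, test-set) pairs, since each candidate method is scored independently.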
Abstract:
The questions that one should answer in engineering computations - deterministic, probabilistic/randomized, as well as heuristic - are (i) how good the computed results/outputs are and (ii) what the cost is, in terms of the amount of computation and the amount of storage used in obtaining those outputs. Absolutely error-free quantities, and the completely errorless computations carried out in a natural process, can never be captured by any means at our disposal. While the computations in nature/natural processes, including their input real quantities, are exact, the computations that we carry out on a digital computer, or in embedded form, are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it; this error, as a matter of hypothesis and not as a matter of assumption, is not less than 0.005 per cent. Here, by error we mean relative error bounds. Since the exact error is never known under any circumstances or in any context, the term error denotes nothing but error-bounds. Further, in engineering computations it is the relative error or, equivalently, the relative error-bounds (and not the absolute error) that is supremely important in conveying the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems created from nature, is completely nonexistent, whereas in our modelling of natural problems we may introduce inconsistency or near-inconsistency due to human error, due to the inherent non-removable error associated with any measuring device, or due to assumptions introduced to make the problem solvable, or more easily solvable, in practice. Thus, if we discover any inconsistency, or possibly any near-inconsistency, in a mathematical model, it is certainly due to one or more of these three factors.
We do, however, go ahead and solve such inconsistent/near-consistent problems, and we do get results that can be useful in real-world situations. The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, in other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It highlights the quality of the results/outputs by specifying relative error-bounds along with the associated confidence level, and the cost, viz., the amount of computation and of storage, through complexity. It points out the limitations of error-free computation (wherever it is possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of the usage of interval arithmetic. Further, the interdependence among the error, the confidence, and the cost is discussed.
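The central distinction drawn above, relative versus absolute error, and the 0.005 per cent instrument floor, can be made concrete in a few lines; the numerical example is ours, not the talk's.

```python
# Illustration (not from the talk) of relative error versus absolute error,
# and of comparing a computed relative error against the 0.005 per cent
# measurement floor cited in the abstract.
def rel_error(computed, reference):
    """Relative error |computed - reference| / |reference|."""
    return abs(computed - reference) / abs(reference)

MEASUREMENT_FLOOR = 0.005 / 100          # 0.005 per cent, as a fraction

approx_pi = 355 / 113                    # classical rational approximation of pi
err = rel_error(approx_pi, 3.14159265358979)
print(err < MEASUREMENT_FLOOR)           # the approximation beats the floor
```

The point of the comparison: an output whose relative error-bound is already below the error of any conceivable input measurement cannot meaningfully be improved by more computation.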
Abstract:
A computational study for the convergence acceleration of Euler and Navier-Stokes computations with upwind schemes has been conducted in a unified framework. It involves the flux-vector splitting algorithms due to Steger-Warming and Van Leer, the flux-difference splitting algorithms due to Roe and Osher and the hybrid algorithms, AUSM (Advection Upstream Splitting Method) and HUS (Hybrid Upwind Splitting). Implicit time integration with line Gauss-Seidel relaxation and multigrid are among the procedures which have been systematically investigated on an individual as well as cumulative basis. The upwind schemes have been tested in various implicit-explicit operator combinations such that the optimal among them can be determined based on extensive computations for two-dimensional flows in subsonic, transonic, supersonic and hypersonic flow regimes. In this study, the performance of these implicit time-integration procedures has been systematically compared with those corresponding to a multigrid accelerated explicit Runge-Kutta method. It has been demonstrated that a multigrid method employed in conjunction with an implicit time-integration scheme yields distinctly superior convergence as compared to those associated with either of the acceleration procedures provided that effective smoothers, which have been identified in this investigation, are prescribed in the implicit operator.
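The "upwind" principle shared by all of these flux-splitting schemes, taking flux information from the side the flow comes from, shows up in its simplest form on the scalar advection equation. The sketch below is a generic textbook illustration of first-order upwinding, not the paper's Euler/Navier-Stokes solver.

```python
# Illustrative only: the upwind idea behind flux-splitting schemes, shown on
# the scalar advection equation u_t + a*u_x = 0 with a > 0, where the spatial
# difference is biased toward the upstream (left) neighbour.
def upwind_step(u, a, dt, dx):
    c = a * dt / dx            # Courant number; need c <= 1 for stability
    # u[i-1] at i == 0 wraps to u[-1], giving a periodic boundary for free.
    return [u[i] - c * (u[i] - u[i - 1]) for i in range(len(u))]

# Advect a square pulse one step to the right on a periodic grid.
u = [1.0 if 4 <= i < 8 else 0.0 for i in range(16)]
u_new = upwind_step(u, a=1.0, dt=0.5, dx=1.0)
print(u_new)
```

The implicit and multigrid machinery compared in the paper accelerates convergence of exactly this kind of upwind-differenced update, applied to the Euler/Navier-Stokes fluxes instead of a scalar.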
Abstract:
Discharge periods of lead-acid batteries are significantly reduced at subzero centigrade temperatures. The reduction is more than what can be expected from the decreased rates of the various processes caused by a lowering of temperature, and it occurs despite the fact that active materials are still available for discharge. It is proposed that the major cause of this is the freezing of the electrolyte. The concentration of acid decreases during battery discharge, with a consequent increase in the freezing temperature. A battery freezes when the discharge temperature falls below the freezing temperature. A mathematical model is developed for conditions where the charge-transfer reaction is the rate-limiting step and Tafel kinetics are applicable. It is argued that freezing begins from the midplanes of the electrodes and proceeds toward the reservoir in between. Ionic conduction stops when one of the electrodes freezes fully, and the time taken to reach that point, namely the discharge period, is calculated. The predictions of the model compare well with observations made at low current density (C/5) and at -20 and -40 degrees C. At higher current densities, however, diffusional resistances become important and a more complicated moving-boundary problem needs to be solved to predict the discharge periods. (C) 2009 The Electrochemical Society.
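The Tafel-kinetics assumption invoked above can be stated in its standard textbook form (this is generic electrochemistry; the symbols and form are not taken from the paper itself):

```latex
i \;=\; i_0 \exp\!\left(\frac{\alpha F \eta}{RT}\right)
```

where $i$ is the local current density, $i_0$ the exchange current density, $\alpha$ the transfer coefficient, $\eta$ the overpotential, $F$ Faraday's constant, $R$ the gas constant, and $T$ the temperature. Tafel kinetics is the large-overpotential limit of the Butler-Volmer equation, which is why it applies when charge transfer, rather than diffusion, is rate-limiting.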
Abstract:
A fuzzy logic based centralized control algorithm for irrigation canals is presented. The purpose of the algorithm is to control the downstream discharge and the water levels of pools in the canal by adjusting the discharge released from the upstream end and the gate settings. The algorithm is based on inverting the dynamic wave model (Saint-Venant equations) in space, wherein the momentum equation is replaced by a fuzzy rule based model while the continuity equation is retained in its complete form. The fuzzy rule based model is developed by fuzzifying a new mathematical model for wave velocity, the derivational details of which are given. The advantages of the fuzzy control algorithm over other conventional control algorithms are described: it is transparent and intuitive, and no linearizations of the governing equations are involved. The timing of the algorithm and the method of computation are explained. It is shown that the tuning is easy and the computations are straightforward. The algorithm provides stable, realistic and robust outputs. Its disadvantage is reduced precision in the outputs, due to the approximation inherent in fuzzy logic. Feedback control logic is adopted to eliminate error caused by system disturbances as well as error caused by the reduced precision of the outputs. The algorithm is tested by applying it to a water level control problem in a fictitious canal with a single pool and also in a real canal with a series of pools. The results obtained from the algorithm are found to be comparable to those obtained from conventional control algorithms.
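The general shape of a fuzzy rule based model, triangular membership functions combined by weighted averaging, can be sketched as follows. The rules, membership shapes, and output values below are invented for illustration; they are not the paper's wave-velocity rule base.

```python
# Generic sketch of fuzzy rule-based inference (illustrative; not the paper's
# model): triangular memberships plus weighted-average defuzzification.
def tri(x, a, b, c):
    """Triangular membership with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def infer(depth):
    # Hypothetical rules on a normalised depth in [0, 1]:
    #   IF depth is LOW  THEN gate opening 0.2
    #   IF depth is HIGH THEN gate opening 0.8
    w_low  = tri(depth, -1.0, 0.0, 1.0)   # shoulder peaking at depth = 0
    w_high = tri(depth,  0.0, 1.0, 2.0)   # shoulder peaking at depth = 1
    return (w_low * 0.2 + w_high * 0.8) / (w_low + w_high)

print(infer(0.5))   # halfway depth gives the midpoint of the two rule outputs
```

The transparency claimed in the abstract comes from exactly this structure: each rule is readable on its own, and the output is a smooth interpolation between rule consequents rather than the solution of a linearized equation.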
Abstract:
A mathematical model for pulsatile flow in a partially occluded tube is presented. The problem has applications in studying the effects of blood flow characteristics on atherosclerotic development. The model brings out the importance of the pulsatility of blood flow on separation and the stress distribution. The results obtained show fairly good agreement with the available experimental results.
Abstract:
Plywood manufacture includes two fundamental stages. The first is to peel or separate logs into veneer sheets of different thicknesses. The second is to assemble veneer sheets into finished plywood products. At the first stage a decision must be made as to the number of different veneer thicknesses to be peeled and what these thicknesses should be. At the second stage, choices must be made as to how these veneers will be assembled into final products to meet certain constraints while minimizing wood loss. These decisions present a fundamental management dilemma. Costs of peeling, drying, storage, handling, etc. can be reduced by decreasing the number of veneer thicknesses peeled. However, a reduced set of thickness options may make it infeasible to produce the variety of products demanded by the market or increase wood loss by requiring less efficient selection of thicknesses for assembly. In this paper the joint problem of veneer choice and plywood construction is formulated as a nonlinear integer programming problem. A relatively simple optimal solution procedure is developed that exploits special problem structure. This procedure is examined on data from a British Columbia plywood mill. Restricted to the existing set of veneer thicknesses and plywood designs used by that mill, the procedure generated a solution that reduced wood loss by 79 percent, thereby increasing net revenue by 6.86 percent. Additional experiments were performed that examined the consequences of changing the number of veneer thicknesses used. Extensions are discussed that permit the consideration of more than one wood species.
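A toy version of the joint veneer-choice and assembly tradeoff described above can be written as a brute-force search. The thicknesses, the ply limit, and the loss measure below are invented for illustration; this is not the paper's nonlinear integer program or its specialized solution procedure.

```python
# Toy illustration (not the paper's formulation): choose k veneer thicknesses
# so every product thickness can be assembled from at most 3 plies, minimising
# total overshoot ("wood loss").
from itertools import combinations, combinations_with_replacement

def best_assembly(target, veneers, max_plies=3):
    """Smallest overshoot assembling `target` from <= max_plies veneers, else None."""
    best = None
    for n in range(1, max_plies + 1):
        for plies in combinations_with_replacement(veneers, n):
            loss = sum(plies) - target
            if loss >= 0 and (best is None or loss < best):
                best = loss
    return best

def choose_veneers(products, candidates, k):
    """Pick the k-subset of candidate thicknesses minimising total loss."""
    best_set, best_loss = None, None
    for subset in combinations(candidates, k):
        losses = [best_assembly(p, subset) for p in products]
        if any(l is None for l in losses):
            continue                       # some product cannot be assembled
        total = sum(losses)
        if best_loss is None or total < best_loss:
            best_set, best_loss = subset, total
    return best_set, best_loss

products = [9, 12, 15]                     # required plywood thicknesses (mm)
best = choose_veneers(products, [3, 4, 5, 6], k=2)
print(best)
```

The exponential blow-up of this brute force is exactly why the paper develops a procedure exploiting special problem structure instead.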
Abstract:
In this paper the kinematics of a curved shock of arbitrary strength is discussed using the theory of generalised functions. This extends Maslov's work, in which isentropic flow is assumed even across the shock. The condition for a nontrivial jump in the flow variables gives the shock manifold equation (SME). An equation for the rate of change of shock strength along the shock rays (defined as the characteristics of the SME) is obtained. This exact result is then compared with the approximate result of shock dynamics derived by Whitham. The comparison shows that the approximate equations of shock dynamics deviate considerably from the exact equations derived here. In the last section we derive the conservation form of our shock dynamics equations. These conservation forms would be very useful in numerical computations, as they allow difference schemes to be derived for which it is not necessary to fit the shock-shock explicitly.
Abstract:
Closed-form solutions are presented for approximate equations governing the pulsatile flow of blood through models of mild axisymmetric arterial stenosis, taking into account the effect of arterial distensibility. Results indicate the existence of back-flow regions and the phenomenon of flow-reversal in the cross-sections. The effects of pulsatility of flow and elasticity of vessel wall for arterial blood flow through stenosed vessels are determined.
Abstract:
This paper presents a comparative population dynamics study of three closely related species of buttercups (Ranunculus repens, R. acris, and R. bulbosus). The study is based on an investigation of the behaviour of the seeds in soil under field conditions and on continuous monitoring of the survival and reproduction of some 9000 individual plants over a period of 2½ years in a coastal grassland in North Wales. The data were analysed with the help of an extension of Leslie's matrix method which makes possible a simultaneous treatment of vegetative and sexual reproduction. It was found that R. repens (a) depends more heavily on vegetative as compared with sexual reproduction, (b) shows indications of negatively density-dependent population regulation, and (c) exhibits little variation in population growth rates from site to site and from one year to the next. In contrast, R. bulbosus (a) depends exclusively on sexual reproduction, (b) shows indications of positively density-dependent population behaviour, and (c) exhibits great variation in population growth rates from site to site and from one year to the next. R. acris exhibits an intermediate behaviour in all these respects. It is suggested that the attributes of R. repens are those expected of a species inhabiting a stable environment, while R. bulbosus exhibits some of the characteristics of a fugitive species.
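Leslie's matrix method, which the analysis above extends, projects age-structured counts one time step ahead by a matrix-vector product: the top row holds per-capita fecundities and the sub-diagonal holds survival rates. The rates and counts below are invented for illustration; the paper's extension adds vegetative reproduction to this structure.

```python
# Standard Leslie-matrix projection (the method the study extends); the
# fecundities, survival rates, and counts are made up for illustration.
def project(leslie, n):
    """One time step: n' = L @ n, as a plain Python matrix-vector product."""
    return [sum(l_ij * n_j for l_ij, n_j in zip(row, n)) for row in leslie]

# Three age classes: top row = fecundities, sub-diagonal = survival rates.
L = [[0.0,  1.0,  2.0],
     [0.5,  0.0,  0.0],
     [0.0,  0.25, 0.0]]
n = [100.0, 40.0, 10.0]                  # initial counts per age class
for _ in range(2):
    n = project(L, n)
print(n)
```

Repeated projection converges in direction to the dominant eigenvector of L, whose eigenvalue is the asymptotic population growth rate the paper compares across sites and years.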
Abstract:
The solution of the steady laminar incompressible nonsimilar magneto-hydrodynamic boundary layer flow and heat transfer problem with viscous dissipation for electrically conducting fluids over two-dimensional and axisymmetric bodies with pressure gradient and magnetic field has been presented. The partial differential equations governing the flow have been solved numerically using an implicit finite-difference scheme. The computations have been carried out for flow over a cylinder and a sphere. The results indicate that the magnetic field tends to delay or prevent separation. The heat transfer strongly depends on the viscous dissipation parameter. When the dissipation parameter is positive (i.e. when the temperature of the wall is greater than the freestream temperature) and exceeds a certain value, the hot wall ceases to be cooled by the stream of cooler air because the ‘heat cushion’ provided by the frictional heat prevents cooling whereas the effect of the magnetic field is to remove the ‘heat cushion’ so that the wall continues to be cooled. The results are found to be in good agreement with those of the local similarity and local nonsimilarity methods except near the point of separation, but they are in excellent agreement with those of the difference-differential technique even near the point of separation.
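The implicit finite-difference machinery used above can be illustrated, under drastic simplification, on the 1D heat equation: each implicit (backward-Euler) time step reduces to a tridiagonal linear solve via the Thomas algorithm. This is a generic sketch, not the paper's boundary-layer scheme.

```python
# Generic illustration of an implicit finite-difference step (not the paper's
# scheme): backward Euler for u_t = u_xx turns each time step into a
# tridiagonal system (I + 2r on the diagonal, -r off it), solved by the
# Thomas algorithm.
def thomas(a, b, c, d):
    """Solve a tridiagonal system; a: sub-, b: main, c: super-diagonal, d: rhs."""
    n = len(d)
    c_, d_ = [0.0] * n, [0.0] * n
    c_[0] = c[0] / b[0]
    d_[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * c_[i - 1]
        c_[i] = c[i] / m if i < n - 1 else 0.0
        d_[i] = (d[i] - a[i] * d_[i - 1]) / m
    x = [0.0] * n
    x[-1] = d_[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d_[i] - c_[i] * x[i + 1]
    return x

def implicit_step(u, r):
    """One backward-Euler step of u_t = u_xx on interior points, u = 0 at ends."""
    n = len(u)
    return thomas([-r] * n, [1 + 2 * r] * n, [-r] * n, u)

u_new = implicit_step([0.0, 1.0, 0.0], r=1.0)   # heat spreading from a spike
print(u_new)
```

The appeal of implicit stepping, here as in the boundary-layer problem, is unconditional stability: r = dt/dx² can be taken large without the solution blowing up, at the cost of a linear solve per step.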
Abstract:
In cases where rotation of the secondary principal stress axes along the light path exists, it is always possible to determine two directions along which plane-polarised light entering the model emerges as plane-polarised light from the model. Further, the net retardation for any light path is different from the integrated retardation for the light path neglecting the effect of rotation.
Abstract:
In this paper, the pattern classification problem in tool wear monitoring is solved using nature-inspired techniques, namely Genetic Programming (GP) and Ant-Miner (AM). The main advantage of GP and AM is their ability to learn the underlying data relationships and express them in the form of a mathematical equation or simple rules. The knowledge extracted from the training data set using GP and AM takes the form of a Genetic Programming Classifier Expression (GPCE) and of rules, respectively. The GPCE and the AM-extracted rules are then applied to the data in the testing/validation set to obtain the classification accuracy. A major attraction of GP-evolved GPCEs and AM-based classification is the possibility of obtaining expert-system-like rules that can be directly applied subsequently by the user in his/her application. The performance of data classification using GP and AM is as good as the classification accuracy obtained in the earlier study.
Abstract:
Regular electrical activation waves in cardiac tissue lead to the rhythmic contraction and expansion of the heart that ensures blood supply to the whole body. Irregularities in the propagation of these activation waves can result in cardiac arrhythmias, like ventricular tachycardia (VT) and ventricular fibrillation (VF), which are major causes of death in the industrialised world. Indeed there is growing consensus that spiral or scroll waves of electrical activation in cardiac tissue are associated with VT, whereas, when these waves break to yield spiral- or scroll-wave turbulence, VT develops into life-threatening VF: in the absence of medical intervention, this makes the heart incapable of pumping blood and a patient dies in roughly two-and-a-half minutes after the initiation of VF. Thus studies of spiral- and scroll-wave dynamics in cardiac tissue pose important challenges for in vivo and in vitro experimental studies and for in silico numerical studies of mathematical models for cardiac tissue. A major goal here is to develop low-amplitude defibrillation schemes for the elimination of VT and VF, especially in the presence of inhomogeneities that occur commonly in cardiac tissue. We present a detailed and systematic study of spiral- and scroll-wave turbulence and spatiotemporal chaos in four mathematical models for cardiac tissue, namely, the Panfilov, Luo-Rudy phase 1 (LRI), reduced Priebe-Beuckelmann (RPB) models, and the model of ten Tusscher, Noble, Noble, and Panfilov (TNNP). In particular, we use extensive numerical simulations to elucidate the interaction of spiral and scroll waves in these models with conduction and ionic inhomogeneities; we also examine the suppression of spiral- and scroll-wave turbulence by low-amplitude control pulses. Our central qualitative result is that, in all these models, the dynamics of such spiral waves depends very sensitively on such inhomogeneities. 
We also study two types of control schemes that have been suggested for the suppression of spiral turbulence, via low-amplitude current pulses, in such mathematical models for cardiac tissue; our investigations here are designed to examine the efficacy of such control schemes in the presence of inhomogeneities. We find that a local pulsing scheme does not suppress spiral turbulence in the presence of inhomogeneities, but a scheme that uses control pulses on a spatially extended mesh is more successful in eliminating spiral turbulence. We discuss the theoretical and experimental implications of our study, which have a direct bearing on defibrillation, the control of life-threatening cardiac arrhythmias such as ventricular fibrillation.
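A minimal sketch of the kind of excitable-media simulation underlying such studies can be written with the two-variable Barkley model (a standard excitable-medium model, deliberately simpler than, and distinct from, the four models studied above), using explicit Euler time stepping and a five-point Laplacian. All parameter values are generic textbook choices.

```python
# Minimal excitable-medium sketch (Barkley model, not one of the four models
# in the thesis): du/dt = u(1-u)(u-(v+b)/a)/eps + Lap(u),  dv/dt = u - v.
def step(u, v, a=0.75, b=0.06, eps=0.02, dt=0.001, dx=0.5):
    n = len(u)
    def lap(f, i, j):   # 5-point Laplacian with no-flux (clamped) boundaries
        c = f[i][j]
        up    = f[i - 1][j] if i > 0     else c
        down  = f[i + 1][j] if i < n - 1 else c
        left  = f[i][j - 1] if j > 0     else c
        right = f[i][j + 1] if j < n - 1 else c
        return (up + down + left + right - 4 * c) / dx ** 2
    un = [[u[i][j] + dt * (u[i][j] * (1 - u[i][j])
                           * (u[i][j] - (v[i][j] + b) / a) / eps
                           + lap(u, i, j))
           for j in range(n)] for i in range(n)]
    vn = [[v[i][j] + dt * (u[i][j] - v[i][j]) for j in range(n)]
          for i in range(n)]
    return un, vn

# Excite one corner of a small resting grid and advance a short time.
N = 16
u = [[1.0 if i < 3 and j < 3 else 0.0 for j in range(N)] for i in range(N)]
v = [[0.0] * N for _ in range(N)]
for _ in range(100):
    u, v = step(u, v)
print(u[1][1], u[12][12])   # excited corner stays high, far field stays at rest
```

Spiral waves arise in such media when a propagating front is broken, for instance by an inhomogeneity in the local parameters, which is the sensitivity the study above quantifies in the cardiac models.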