61 results for Mathematical Programs

at Indian Institute of Science - Bangalore - India


Relevance:

30.00%

Publisher:

Abstract:

Motivated by certain situations in manufacturing systems and communication networks, we look into the problem of maximizing the profit in a queueing system with linear reward and cost structure and having a choice of selecting the streams of Poisson arrivals according to an independent Markov chain. We view the system as an MMPP/GI/1 queue and seek to maximize the profits by optimally choosing the stationary probabilities of the modulating Markov chain. We consider two formulations of the optimization problem. The first one (which we call the PUT problem) seeks to maximize the profit per unit time, whereas the second one considers the maximization of the profit per accepted customer (the PAC problem). In each of these formulations, we explore three separate problems. In the first one, the constraints come from bounding the utilization of an infinite capacity server; in the second one the constraints arise from bounding the mean queue length of the same queue; and in the third one the finite capacity of the buffer is reflected in a set of constraints. In the problems bounding the utilization factor of the queue, the solutions are given by essentially linear programs, while the problems with mean queue length constraints are linear programs if the service is exponentially distributed. The problems modeling the finite capacity queue are non-convex programs for which global maxima can be found. There is a rich relationship between the solutions of the PUT and PAC problems. In particular, the PUT solutions always make the server work at a utilization factor that is no less than that of the PAC solutions.
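
To make the PUT formulation with a utilization bound concrete, here is a minimal sketch of the kind of linear program the abstract refers to, using SciPy's linprog. The number of streams, the rates, rewards, costs and the utilization bound are all invented for illustration; the actual reward and cost structure is the one defined in the paper.

```python
# Hypothetical instance of the PUT problem with a utilization bound: choose
# stationary probabilities pi_i of the modulating Markov chain to maximize
# profit per unit time, subject to rho = E[S] * sum_i pi_i * lam_i <= rho_max.
import numpy as np
from scipy.optimize import linprog

lam = np.array([2.0, 5.0, 9.0])     # Poisson rates of the three arrival streams (assumed)
reward = np.array([1.0, 0.8, 0.5])  # reward per accepted customer in each stream (assumed)
cost = np.array([0.1, 0.3, 0.6])    # cost per unit time spent in each state (assumed)
mean_service = 0.1                  # E[S] of the GI service time (assumed)
rho_max = 0.8                       # utilization bound (assumed)

# Profit per unit time = sum_i pi_i * (reward_i * lam_i - cost_i); linprog minimizes.
c = -(reward * lam - cost)
A_ub = [mean_service * lam]         # utilization constraint
b_ub = [rho_max]
A_eq = [np.ones_like(lam)]          # probabilities sum to one
b_eq = [1.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * len(lam), method="highs")
print("optimal stationary probabilities:", res.x)
print("profit per unit time:", -res.fun)
```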

Relevance:

20.00%

Publisher:

Abstract:

Discharge periods of lead-acid batteries are significantly reduced at subzero centigrade temperatures. The reduction is more than what can be expected due to decreased rates of various processes caused by a lowering of temperature, and it occurs despite the fact that active materials are available for discharge. It is proposed that the major cause for this is the freezing of the electrolyte. The concentration of acid decreases during battery discharge, with a consequent increase in the freezing temperature. A battery freezes when the discharge temperature falls below the freezing temperature. A mathematical model is developed for conditions where the charge-transfer reaction is the rate-limiting step and Tafel kinetics are applicable. It is argued that freezing begins from the midplanes of electrodes and proceeds toward the reservoir in between. Ionic conduction stops when one of the electrodes freezes fully, and the time taken to reach that point, namely the discharge period, is calculated. The predictions of the model compare well with observations made at low current density (C/5) and at -20 and -40 degrees C. At higher current densities, however, diffusional resistances become important and a more complicated moving boundary problem needs to be solved to predict the discharge periods. (C) 2009 The Electrochemical Society.
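
As a rough sketch of the two ingredients the abstract names, the relations below state the Tafel approximation for the charge-transfer current and the freezing criterion; the notation is assumed here and is not taken from the paper.

```latex
% Sketch (notation assumed): Tafel approximation for the charge-transfer current
% density i, and the freezing criterion that ends the discharge once the local
% electrolyte temperature T drops below the concentration-dependent freezing
% point T_f(c), which rises as the acid concentration c falls during discharge.
\begin{align}
  i &= i_0 \exp\!\left(\frac{\alpha F \eta}{R T}\right), \\
  T &< T_f\bigl(c(x,t)\bigr) \quad \Longrightarrow \quad
      \text{local freezing of the electrolyte.}
\end{align}
```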

Relevance:

20.00%

Publisher:

Abstract:

The research in software science has so far been concentrated on three measures of program complexity: (a) software effort; (b) cyclomatic complexity; and (c) program knots. In this paper we propose a measure of the logical complexity of programs in terms of the variable dependency of the sequence of computations, the inductive effort in writing loops, and the complexity of data structures. The proposed complexity measure is described with the aid of a graph which exhibits diagrammatically the dependence of a computation at a node upon the computations at other (earlier) nodes. Complexity measures of several example programs have been computed and the related issues have been discussed. The paper also describes the role played by data structures in deciding program complexity.
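
The following toy sketch illustrates the flavour of a dependency-graph-based complexity measure; the example program, the graph and the score (sum of each node's transitive dependencies) are invented for illustration and are not the paper's actual measure, which also accounts for loops and data structures.

```python
# Illustrative sketch only: a toy "variable dependency" graph for a straight-line
# program, with a made-up complexity score.
from collections import defaultdict

# statement -> statements whose results it uses (assumed example program)
deps = {
    "s1": [],            # x = read()
    "s2": [],            # y = read()
    "s3": ["s1", "s2"],  # z = x + y
    "s4": ["s3"],        # w = z * z
    "s5": ["s1", "s4"],  # print(x, w)
}

def transitive_deps(node, deps, seen=None):
    """All earlier statements a node depends on, directly or indirectly."""
    seen = set() if seen is None else seen
    for d in deps[node]:
        if d not in seen:
            seen.add(d)
            transitive_deps(d, deps, seen)
    return seen

score = sum(len(transitive_deps(s, deps)) for s in deps)
print("toy dependency-complexity score:", score)   # 0 + 0 + 2 + 3 + 4 = 9
```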

Relevance:

20.00%

Publisher:

Abstract:

A mathematical model for pulsatile flow in a partially occluded tube is presented. The problem has applications in studying the effects of blood flow characteristics on atherosclerotic development. The model brings out the influence of the pulsatility of blood flow on separation and the stress distribution. The results obtained show fairly good agreement with the available experimental results.

Relevance:

20.00%

Publisher:

Abstract:

The StreamIt programming model has been proposed to exploit parallelism in streaming applications on general purpose multicore architectures. The StreamIt graphs describe task, data and pipeline parallelism which can be exploited on accelerators such as Graphics Processing Units (GPUs) or the CellBE which support abundant parallelism in hardware. In this paper, we describe a novel method to orchestrate the execution of a StreamIt program on a multicore platform equipped with an accelerator. The proposed approach identifies, using profiling, the relative benefits of executing a task on the superscalar CPU cores and the accelerator. We formulate the problem of partitioning the work between the CPU cores and the GPU, taking into account the latencies for data transfers and the required buffer layout transformations associated with the partitioning, as an integrated Integer Linear Program (ILP) which can then be solved by an ILP solver. We also propose an efficient heuristic algorithm for the work-partitioning between the CPU and the GPU, which provides solutions that are within 9.05% of the optimal solution on average across the benchmark suite. The partitioned tasks are then software pipelined to execute on the multiple CPU cores and the Streaming Multiprocessors (SMs) of the GPU. The software pipelining algorithm orchestrates the execution between CPU cores and the GPU by emitting the code for the CPU and the GPU, and the code for the required data transfers. Our experiments on a platform with 8 CPU cores and a GeForce 8800 GTS 512 GPU show a geometric mean speedup of 6.94X, with a maximum of 51.96X, over single threaded CPU execution across the StreamIt benchmarks. This is an 18.9% improvement over a partitioning strategy that maps only the filters that cannot be executed on the GPU - the filters with state that is persistent across firings - onto the CPU.
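
A much-simplified stand-in for the partitioning ILP described above is sketched below using the PuLP modelling package: each filter is assigned to the CPU or the GPU so as to minimize execution cost plus a transfer penalty on edges cut by the partition. The filter names, profiled times, cost model and solver choice are all assumptions made for illustration, not the paper's formulation.

```python
# Simplified CPU/GPU partitioning ILP (all numbers invented). Requires PuLP.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

filters = ["src", "fir", "fft", "sink"]
t_cpu = {"src": 4, "fir": 20, "fft": 30, "sink": 3}   # profiled CPU times (assumed)
t_gpu = {"src": 9, "fir": 2,  "fft": 3,  "sink": 8}   # profiled GPU times (assumed)
edges = [("src", "fir"), ("fir", "fft"), ("fft", "sink")]
xfer = 5                                              # per-edge transfer latency (assumed)

prob = LpProblem("filter_partitioning", LpMinimize)
x = {f: LpVariable(f"on_gpu_{f}", cat="Binary") for f in filters}
cut = {e: LpVariable(f"cut_{e[0]}_{e[1]}", cat="Binary") for e in edges}

# cut[u,v] must be 1 whenever u and v are placed on different devices
for (u, v) in edges:
    prob += cut[(u, v)] >= x[u] - x[v]
    prob += cut[(u, v)] >= x[v] - x[u]

# objective: device execution cost plus transfer penalty on cut edges
prob += lpSum(t_cpu[f] * (1 - x[f]) + t_gpu[f] * x[f] for f in filters) \
      + lpSum(xfer * cut[e] for e in edges)

prob.solve()
print({f: int(x[f].value()) for f in filters})  # 1 = GPU, 0 = CPU
```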

Relevance:

20.00%

Publisher:

Abstract:

Plywood manufacture includes two fundamental stages. The first is to peel or separate logs into veneer sheets of different thicknesses. The second is to assemble veneer sheets into finished plywood products. At the first stage a decision must be made as to the number of different veneer thicknesses to be peeled and what these thicknesses should be. At the second stage, choices must be made as to how these veneers will be assembled into final products to meet certain constraints while minimizing wood loss. These decisions present a fundamental management dilemma. Costs of peeling, drying, storage, handling, etc. can be reduced by decreasing the number of veneer thicknesses peeled. However, a reduced set of thickness options may make it infeasible to produce the variety of products demanded by the market or increase wood loss by requiring less efficient selection of thicknesses for assembly. In this paper the joint problem of veneer choice and plywood construction is formulated as a nonlinear integer programming problem. A relatively simple optimal solution procedure is developed that exploits special problem structure. This procedure is examined on data from a British Columbia plywood mill. Restricted to the existing set of veneer thicknesses and plywood designs used by that mill, the procedure generated a solution that reduced wood loss by 79 percent, thereby increasing net revenue by 6.86 percent. Additional experiments were performed that examined the consequences of changing the number of veneer thicknesses used. Extensions are discussed that permit the consideration of more than one wood species.
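
As a toy illustration of the assembly side of this problem, the sketch below enumerates ply combinations from a fixed veneer set and picks the one with the least wood loss for a target panel thickness; the thicknesses and target are invented, and the paper's procedure exploits special problem structure rather than brute-force enumeration.

```python
# Toy assembly search (numbers invented): find the ply combination that meets a
# target panel thickness with the least wood loss.
from itertools import combinations_with_replacement

veneers = [2.5, 3.2, 4.2]        # available veneer thicknesses in mm (assumed)
target = 18.0                    # required panel thickness in mm (assumed)
max_plies = 7

best = None
for n in range(1, max_plies + 1):
    for combo in combinations_with_replacement(veneers, n):
        thickness = sum(combo)
        if thickness >= target:
            loss = thickness - target
            if best is None or loss < best[0]:
                best = (loss, combo)

print("best assembly:", best[1], "wood loss (mm):", round(best[0], 2))
```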

Relevance:

20.00%

Publisher:

Abstract:

Closed-form solutions are presented for approximate equations governing the pulsatile flow of blood through models of mild axisymmetric arterial stenosis, taking into account the effect of arterial distensibility. Results indicate the existence of back-flow regions and the phenomenon of flow-reversal in the cross-sections. The effects of pulsatility of flow and elasticity of vessel wall for arterial blood flow through stenosed vessels are determined.

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a comparative population dynamics study of three closely related species of buttercups (Ranunculus repens, R. acris, and R. bulbosus). The study is based on an investigation of the behaviour of the seeds in soil under field conditions and a continuous monitoring of survival and reproduction of some 9000 individual plants over a period of 2.5 years in a coastal grassland in North Wales. The data were analysed with the help of an extension of Leslie's matrix method which makes possible a simultaneous treatment of vegetative and sexual reproduction. It was found that R. repens (a) depends more heavily on vegetative as compared with sexual reproduction, (b) shows indications of negatively density-dependent population regulation, and (c) exhibits little variation in population growth rates from site to site and from one year to the next. In contrast, R. bulbosus (a) depends exclusively on sexual reproduction, (b) shows indications of a positively density-dependent population behaviour, and (c) exhibits great variation in population growth rates from site to site and from one year to the next. R. acris exhibits an intermediate behaviour in all these respects. It is suggested that the attributes of R. repens are those expected of a species inhabiting a stable environment, while R. bulbosus exhibits some of the characteristics of a fugitive species.
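
A minimal stage-structured projection in the spirit of the extended Leslie matrix mentioned above is sketched below; the stages, rates and starting numbers are invented and serve only to show how vegetative and sexual reproduction can enter the same projection matrix.

```python
# Toy stage-structured projection (all rates invented): seeds in soil, seedlings
# and established ramets; the ramet row receives both sexual recruits and
# vegetative daughter ramets, so both reproduction modes enter the same matrix.
import numpy as np

#            seed  seedling  ramet
A = np.array([[0.2,   0.0,    30.0],   # seeds: survival in soil + seed output of ramets
              [0.05,  0.0,     0.0],   # seedlings: germination from the seed pool
              [0.0,   0.3,     0.9]])  # ramets: establishment + survival incl. vegetative daughters

n = np.array([500.0, 50.0, 100.0])     # initial numbers in each stage (assumed)
for year in range(5):
    n = A @ n
    print(f"year {year + 1}: seeds={n[0]:.0f}, seedlings={n[1]:.0f}, ramets={n[2]:.0f}")

# The dominant eigenvalue approximates the asymptotic population growth rate.
lam = max(abs(np.linalg.eigvals(A)))
print("asymptotic growth rate:", round(float(lam), 3))
```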

Relevance:

20.00%

Publisher:

Abstract:

In cases where rotation of the secondary principal stress axes along the light path exists, it is always possible to determine two directions along which plane-polarized light entering the model emerges as plane-polarized light from the model. Further, the net retardation for any light path is different from the integrated retardation for the light path neglecting the effect of rotation.

Relevance:

20.00%

Publisher:

Abstract:

In this paper, the pattern classification problem in tool wear monitoring is solved using nature inspired techniques such as Genetic Programming (GP) and Ant-Miner (AM). The main advantage of GP and AM is their ability to learn the underlying data relationships and express them in the form of a mathematical equation or simple rules. The knowledge extracted from the training data set using GP and AM takes the form of a Genetic Programming Classifier Expression (GPCE) and rules, respectively. The GPCE and AM-extracted rules are then applied to the data in the testing/validation set to obtain the classification accuracy. A major attraction of GP-evolved GPCEs and AM-based classification is the possibility of obtaining expert-system-like rules that can be directly applied subsequently by the user in his/her application. The performance of the data classification using GP and AM is as good as the classification accuracy obtained in the earlier study.
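
Purely as an illustration of how an evolved GPCE is applied at classification time, the sketch below evaluates a made-up arithmetic expression over made-up sensor features and assigns a wear class from its sign; neither the expression nor the features are taken from the paper.

```python
# Hypothetical GPCE usage: an arithmetic expression over sensed features whose
# sign assigns the tool-wear class. The expression and features are invented.
def gpce(feed_force, cutting_force, vibration_rms):
    # evolved expression (illustrative only)
    return 0.8 * cutting_force + 1.5 * vibration_rms - 0.4 * feed_force - 1.0

def classify(sample):
    return "worn" if gpce(*sample) > 0 else "fresh"

test_set = [(0.9, 1.8, 0.6), (1.2, 0.7, 0.2)]   # (feed, cutting, vibration), assumed scaled
for s in test_set:
    print(s, "->", classify(s))
```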

Relevance:

20.00%

Publisher:

Abstract:

Regular electrical activation waves in cardiac tissue lead to the rhythmic contraction and expansion of the heart that ensures blood supply to the whole body. Irregularities in the propagation of these activation waves can result in cardiac arrhythmias, like ventricular tachycardia (VT) and ventricular fibrillation (VF), which are major causes of death in the industrialised world. Indeed there is growing consensus that spiral or scroll waves of electrical activation in cardiac tissue are associated with VT, whereas, when these waves break to yield spiral- or scroll-wave turbulence, VT develops into life-threatening VF: in the absence of medical intervention, this makes the heart incapable of pumping blood and a patient dies roughly two-and-a-half minutes after the initiation of VF. Thus studies of spiral- and scroll-wave dynamics in cardiac tissue pose important challenges for in vivo and in vitro experimental studies and for in silico numerical studies of mathematical models for cardiac tissue. A major goal here is to develop low-amplitude defibrillation schemes for the elimination of VT and VF, especially in the presence of inhomogeneities that occur commonly in cardiac tissue. We present a detailed and systematic study of spiral- and scroll-wave turbulence and spatiotemporal chaos in four mathematical models for cardiac tissue, namely, the Panfilov, Luo-Rudy phase 1 (LRI), and reduced Priebe-Beuckelmann (RPB) models, and the model of ten Tusscher, Noble, Noble, and Panfilov (TNNP). In particular, we use extensive numerical simulations to elucidate the interaction of spiral and scroll waves in these models with conduction and ionic inhomogeneities; we also examine the suppression of spiral- and scroll-wave turbulence by low-amplitude control pulses. Our central qualitative result is that, in all these models, the dynamics of such spiral waves depends very sensitively on such inhomogeneities. We also study two types of control schemes that have been suggested for the control of spiral turbulence, via low-amplitude current pulses, in such mathematical models for cardiac tissue; our investigations here are designed to examine the efficacy of such control schemes in the presence of inhomogeneities. We find that a local pulsing scheme does not suppress spiral turbulence in the presence of inhomogeneities, but a scheme that uses control pulses on a spatially extended mesh is more successful in the elimination of spiral turbulence. We discuss the theoretical and experimental implications of our study that have a direct bearing on defibrillation, i.e., the control of life-threatening cardiac arrhythmias such as ventricular fibrillation.
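
To indicate the kind of in silico simulation involved, here is a generic two-variable excitable-medium sketch of FitzHugh-Nagumo type on a 2D grid; it is not any of the Panfilov, LRI, RPB or TNNP models used in the study, and the parameters are conventional textbook-style values chosen only for illustration.

```python
# Generic excitable-medium reaction-diffusion sketch (FitzHugh-Nagumo type),
# integrated with explicit Euler on a 2D grid; not one of the paper's models.
import numpy as np

N, D, dt, dx = 200, 1.0, 0.02, 0.5
a, b, eps = 0.1, 0.5, 0.02

u = np.zeros((N, N))    # excitation variable
v = np.zeros((N, N))    # recovery variable
u[:N // 2, :10] = 1.0   # a broken stimulus that can curl into a spiral

def laplacian(f):
    # five-point Laplacian with no-flux boundaries via edge padding
    fp = np.pad(f, 1, mode="edge")
    return (fp[:-2, 1:-1] + fp[2:, 1:-1] + fp[1:-1, :-2] + fp[1:-1, 2:] - 4 * f) / dx**2

for step in range(2000):
    du = D * laplacian(u) + u * (1 - u) * (u - a) - v
    dv = eps * (b * u - v)
    u += dt * du
    v += dt * dv

print("max excitation after 2000 steps:", float(u.max()))
```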

Relevance:

20.00%

Publisher:

Abstract:

An adaptive drug delivery design is presented in this paper using neural networks for effective treatment of infectious diseases. The generic mathematical model used describes the coupled evolution of the concentrations of pathogens, plasma cells and antibodies, and a numerical value that indicates the relative characteristic of a damaged organ due to the disease under the influence of external drugs. From a system theoretic point of view, the external drugs can be interpreted as control inputs, which can be designed based on control theoretic concepts. In this study, assuming a set of nominal parameters in the mathematical model, first a nonlinear controller (drug administration) is designed based on the principle of dynamic inversion. This nominal drug administration plan was found to be effective in curing "nominal model patients" (patients whose immunological dynamics conform exactly to the mathematical model used for the control design). However, it was found to be ineffective in curing "realistic model patients" (patients whose immunological dynamics may have off-nominal parameter values and possibly unwanted inputs) in general. Hence, to make the drug delivery dosage design more effective for realistic model patients, a model-following adaptive control design is carried out next with the help of neural networks that are trained online. Simulation studies indicate that the adaptive controller proposed in this paper holds promise in killing the invading pathogens and healing the damaged organ even in the presence of parameter uncertainties and continued pathogen attack. Note that the computational requirements for computing the control are minimal and all associated computations (including the training of neural networks) can be carried out online. However, it assumes that the required diagnosis process can be carried out at a sufficiently fast rate so that all the states are available for control computation.
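
A minimal sketch of the dynamic-inversion idea behind the nominal drug law is given below for a single-state pathogen model; the model, its parameters and the dosage bound are invented, whereas the paper works with a coupled model of pathogens, plasma cells, antibodies and organ damage.

```python
# Dynamic-inversion sketch for a toy one-state system dx/dt = f(x) + g(x)*u:
# choosing u = (v - f(x)) / g(x) with v = -k*(x - x_ref) imposes linear error
# dynamics. All numbers are invented for illustration.
import numpy as np

def f(x):          # assumed open-loop pathogen growth
    return 0.8 * x

def g(x):          # assumed drug effectiveness on the pathogen kill rate
    return -1.0 * x if x > 1e-6 else -1e-6

k, x_ref, dt = 2.0, 0.0, 0.01
x = 5.0            # initial pathogen concentration (assumed units)

for step in range(500):
    v = -k * (x - x_ref)               # desired error dynamics
    u = (v - f(x)) / g(x)              # inversion-based drug dosage
    u = max(u, 0.0)                    # dosages cannot be negative
    x += dt * (f(x) + g(x) * u)

print("pathogen concentration after 5 time units:", round(x, 4))
```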

Relevance:

20.00%

Publisher:

Abstract:

Combining the advanced techniques of optimal dynamic inversion and model-following neuro-adaptive control design, an innovative technique is presented to design an automatic drug administration strategy for effective treatment of chronic myelogenous leukemia (CML). A recently developed nonlinear mathematical model for cell dynamics is used to design the controller (medication dosage). First, a nominal controller is designed based on the principle of optimal dynamic inversion. This controller can treat the nominal model patients (patients who can be described by the mathematical model used here with the nominal parameter values) effectively. However, since the system parameters for a realistic model patient can be different from those of the nominal model patients, simulation studies for such patients indicate that the nominal controller is either inefficient or, worse, ineffective; i.e., the trajectory of the number of cancer cells either shows unsatisfactory transient behavior or it grows in an unstable manner. Hence, to make the drug dosage history more realistic and patient-specific, a model-following neuro-adaptive controller is augmented to the nominal controller. In this adaptive approach, a neural network trained online facilitates a new adaptive controller. The training process of the neural network is based on Lyapunov stability theory, which guarantees both stability of the cancer cell dynamics and boundedness of the network weights. From simulation studies, this adaptive control design approach is found to be very effective in treating CML for realistic model patients. Sufficient generality is retained in the mathematical developments so that the technique can be applied to other similar nonlinear control design problems as well.

Relevance:

20.00%

Publisher:

Abstract:

The StreamIt programming model has been proposed to exploit parallelism in streaming applications on general purpose multi-core architectures. This model allows programmers to specify the structure of a program as a set of filters that act upon data, and a set of communication channels between them. The StreamIt graphs describe task, data and pipeline parallelism which can be exploited on modern Graphics Processing Units (GPUs), as they support abundant parallelism in hardware. In this paper, we describe the challenges in mapping StreamIt to GPUs and propose an efficient technique to software pipeline the execution of stream programs on GPUs. We formulate this problem - both scheduling and assignment of filters to processors - as an efficient Integer Linear Program (ILP), which is then solved using ILP solvers. We also describe a novel buffer layout technique for GPUs which facilitates exploiting the high memory bandwidth available in GPUs. The proposed scheduling utilizes both the scalar units in the GPU, to exploit data parallelism, and multiprocessors, to exploit task and pipeline parallelism. Further, it takes into consideration the synchronization and bandwidth limitations of GPUs, and yields speedups between 1.87X and 36.83X over a single threaded CPU.
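
The buffer-layout point can be illustrated generically as follows (this is not the paper's specific transformation): with an array-of-structures layout, adjacent GPU threads reading the same field touch memory with a stride equal to the structure size, whereas a structure-of-arrays layout gives them unit-stride, coalesced accesses.

```python
# Generic AoS vs SoA illustration of memory coalescing (numbers and layout are
# assumed, not the paper's transformation).
import numpy as np

num_items, fields = 1024, 4
thread_ids = np.arange(8)                       # eight adjacent GPU threads

aos_offsets = thread_ids * fields + 0           # element offsets of field 0 in an AoS buffer
soa_offsets = 0 * num_items + thread_ids        # element offsets of field 0 in an SoA buffer

print("AoS offsets touched by threads 0-7:", aos_offsets)   # stride 4 -> uncoalesced
print("SoA offsets touched by threads 0-7:", soa_offsets)   # unit stride -> coalesced
```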

Relevance:

20.00%

Publisher:

Abstract:

A theory and a generalized synthesis procedure are advocated for the design of weir notches and orifice-notches having a base in any given shape, to a depth a, such that the discharge through them is proportional to any singular monotonically-increasing function of the depth of flow measured above a certain datum. The problem is reduced to finding an exact solution of a Volterra integral equation in Abel form. The maximization of the depth of the datum below the crest of the notch is investigated. Proof is given that for a weir notch made out of one continuous curve, and for a flow proportional to the mth power of the head, it is impossible to bring the datum lower than (2m − 1)a below the crest of the notch. A new concept of an orifice-notch, having a discontinuity in the curve and a division of flow into two distinct portions, is presented. The division of flow is shown to have the beneficial effect of bringing the datum more than (2m − 1)a below the crest of the weir while still maintaining the proportionality of the flow. Experiments with one such orifice-notch show a constant coefficient of discharge of 0.625. The importance of this analysis in the design of grit chambers is emphasized.
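
For reference, a sketch of the governing relation in the notation commonly used for proportional weirs is given below; the symbols are assumed here rather than copied from the paper.

```latex
% Sketch (symbols assumed): f(y) is the half-width of the notch at height y above
% its lowest point, h the head over that point, C_d the discharge coefficient,
% and lambda the depth of the datum below the crest. Requiring the discharge to
% follow the m-th power of the head measured above the datum turns this into a
% Volterra integral equation of Abel type for the unknown profile f(y).
\begin{equation}
  Q(h) \;=\; 2\,C_d\sqrt{2g}\int_{0}^{h} f(y)\,\sqrt{h-y}\,\mathrm{d}y
        \;=\; K\,(h+\lambda)^{m}.
\end{equation}
% The abstract's result: a single continuous curve cannot achieve a datum depth
% lambda greater than (2m - 1)a, while the proposed orifice-notch can.
```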