168 results for Mathematical techniques
at Indian Institute of Science - Bangalore - India
Abstract:
The free convection problem with nonuniform gravity finds applications in several fields. For example, centrifugal gravity fields arise in many rotating machinery applications. A gravity field is also created artificially in an orbital space station by rotation. The effect of nonuniform gravity due to the rotation of isothermal or nonisothermal plates has been studied by several authors [1-5] using various mathematical techniques.
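For concreteness, a one-line sketch of why rotation produces a nonuniform gravity field (standard rotating-frame kinematics, not taken from the paper):

```latex
% In a frame rotating at angular speed \omega, the centrifugal acceleration grows
% linearly with the distance r from the rotation axis:
g_{\mathrm{eff}}(r) = \omega^{2} r ,
% so the Boussinesq buoyancy force becomes position dependent,
\rho\,\beta\,(T - T_\infty)\, g_{\mathrm{eff}}(r) = \rho\,\beta\,(T - T_\infty)\,\omega^{2} r ,
% which is the nonuniform-gravity effect referred to above.
```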
Abstract:
Homogenization of partial differential equations is a relatively new area with tremendous applications in various branches of the engineering sciences: material science, porous media, the study of vibrations of thin structures, and composite materials, to name a few. Though material scientists and others had a reasonable idea of the homogenization process, it lacked a good mathematical theory until the early seventies. The first proper mathematical procedure was developed in the seventies, and over the last 30 years or so the subject has flourished in various ways, both in applications and mathematically. This is not a full survey article, but neither do we concentrate on a specialized problem. We do indicate certain specialized problems of our interest, without much detail, but that is not the main theme of the article. I plan to give an introductory presentation with the aim of catering to a wider audience. We go through a few examples to understand the homogenization procedure from a general perspective, together with applications. We also present the various mathematical techniques available and, where possible, some details of some of the techniques. A possible definition of homogenization is that it is a process of understanding a heterogeneous (inhomogeneous) medium, where the heterogeneities are at the microscopic level, as in composite materials, via a homogeneous medium. In other words, one would like to obtain a homogeneous description of a highly oscillating inhomogeneous medium. We also present generalizations to nonlinear problems, porous media and so on. Finally, we look at the closely related issue of optimal bounds, which is itself an independent area of research.
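A minimal worked example of the homogenization limit, the classical one-dimensional periodic case (standard material, included here only as illustration):

```latex
% One-dimensional model problem with a rapidly oscillating, 1-periodic
% coefficient a(y) \ge \alpha > 0:
-\frac{d}{dx}\Big( a\!\big(\tfrac{x}{\varepsilon}\big)\,\frac{du_\varepsilon}{dx} \Big) = f
\quad \text{in } (0,1), \qquad u_\varepsilon(0) = u_\varepsilon(1) = 0 .
% As \varepsilon \to 0, u_\varepsilon converges to the solution of the homogenized problem
-\frac{d}{dx}\Big( a^{*}\,\frac{du}{dx} \Big) = f , \qquad
a^{*} = \Big( \int_0^1 \frac{dy}{a(y)} \Big)^{-1},
% i.e. the effective coefficient is the harmonic mean of a, not its arithmetic average.
```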
Abstract:
The availability of a reliable bound on an integral involving the square of the modulus of a form factor on the unitarity cut allows one to constrain the form factor at points inside the analyticity domain and its shape parameters, and also to isolate domains on the real axis and in the complex energy plane where zeros are excluded. In this lecture note, we review the mathematical techniques of this formalism in its standard form, known as the method of unitarity bounds, and recent developments which allow us to include information on the phase and modulus along a part of the unitarity cut. We also provide a brief summary of some results that we have obtained in the recent past, which demonstrate the usefulness of the method for precision predictions on the form factors.
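A schematic of the standard method of unitarity bounds, in the notation commonly used for it (the weight ρ, bound I and outer function φ below are generic placeholders, not the specific quantities of the lecture note):

```latex
% An integral of the modulus squared of the form factor F along the cut t \ge t_+
% is bounded by unitarity:
\frac{1}{\pi}\int_{t_+}^{\infty} \rho(t)\,|F(t)|^{2}\,dt \;\le\; I .
% After mapping the cut t-plane onto the unit disk via a conformal variable z(t)
% and absorbing \rho into an outer function \phi, this becomes a bound on the
% Taylor coefficients g_k of \phi F:
\phi(z)\,F(z) = \sum_{k \ge 0} g_k z^{k}, \qquad \sum_{k \ge 0} |g_k|^{2} \le I ,
% which constrains F and its shape parameters at interior points, and excludes
% zeros in certain domains, by standard positivity arguments.
```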
Abstract:
In this paper, the pattern classification problem in tool wear monitoring is solved using nature-inspired techniques such as Genetic Programming (GP) and Ant-Miner (AM). The main advantage of GP and AM is their ability to learn the underlying data relationships and express them in the form of a mathematical equation or simple rules. The knowledge extracted from the training data set using GP and AM takes the form of a Genetic Programming Classifier Expression (GPCE) and rules, respectively. The GPCE and the AM-extracted rules are then applied to the testing/validation set to obtain the classification accuracy. A major attraction of GP-evolved GPCEs and AM-based classification is the possibility of obtaining expert-system-like rules that can be directly applied subsequently by the user in his/her application. The performance of data classification using GP and AM is as good as the classification accuracy obtained in the earlier study.
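As an illustration of how such an evolved expression is used, here is a minimal sketch in Python; the expression, features and threshold are hypothetical, not those evolved in the paper:

```python
# Illustrative sketch only: how an evolved Genetic Programming Classifier
# Expression (GPCE) is typically applied. The expression below is hypothetical,
# not the one evolved in the paper; the features and threshold are assumptions.

def gpce(force, vibration, power):
    """A hypothetical evolved arithmetic expression over sensor features."""
    return 0.8 * force + 0.3 * vibration * power - 1.2

def classify(sample):
    # Sign of the GPCE output assigns the class label, as in two-class GP
    # classification: positive -> worn tool, non-positive -> fresh tool.
    return "worn" if gpce(*sample) > 0.0 else "fresh"

# Validation-style accuracy on a (made-up) labelled test set.
test_set = [((1.9, 0.7, 1.1), "worn"), ((0.4, 0.2, 0.9), "fresh")]
accuracy = sum(classify(x) == y for x, y in test_set) / len(test_set)
print(f"accuracy = {accuracy:.2f}")
```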
Abstract:
When a uniform flow of any nature is interrupted, the readjustment of the flow results in concentrations and rarefactions, so that the peak value of the flow parameter will be higher than that which an elementary computation would suggest. When stress flow in a structure is interrupted, there are stress concentrations. These are generally localized and often large, in relation to the values indicated by simple equilibrium calculations. With the advent of the industrial revolution, dynamic and repeated loading of materials became commonplace in engine parts and fast-moving vehicles of locomotion. This led to serious fatigue failures arising from stress concentrations. Also, many metal forming processes, fabrication techniques and weak-link type safety systems benefit substantially from the intelligent use or avoidance, as appropriate, of stress concentrations. As a result, in the last 80 years, the study and evaluation of stress concentrations has been a primary objective in the study of solid mechanics. Exact mathematical analysis of stress concentrations in finite bodies presents considerable difficulty for all but a few problems of infinite fields, concentric annuli and the like, treated under the presumption of small-deformation, linear elasticity. A whole series of techniques has been developed to deal with different classes of shapes and domains, causes and sources of concentration, material behaviour, phenomenological formulation, etc. These include real and complex functions, conformal mapping, transform techniques, integral equations, finite differences and relaxation, and, more recently, the finite element methods. With the advent of large high-speed computers, the development of finite element concepts and a good understanding of functional analysis, it is now, in principle, possible to obtain with economy satisfactory solutions to a whole range of concentration problems by intelligently combining theory and computer application. An example is the hybridization of continuum concepts with computer-based finite element formulations. This new situation also makes possible a more direct approach to the problem of design, which is the primary purpose of most engineering analyses. The trend would appear to be clear: the computer will shape the theory, analysis and design.
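One of the few classical exact solutions alluded to above, stated for reference (the Kirsch problem; a standard result, not specific to this paper):

```latex
% Kirsch (1898): an infinite linear-elastic plate with a circular hole of radius a
% under remote uniaxial tension \sigma. On the hole boundary r = a,
\sigma_{\theta\theta}(a,\theta) = \sigma\,(1 - 2\cos 2\theta),
% which peaks at \theta = \pm\pi/2, giving a stress-concentration factor
K_t = \frac{\sigma_{\max}}{\sigma} = 3 ,
% independent of the hole size.
```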
Abstract:
The basic concepts and techniques involved in the development and analysis of mathematical models for individual neurons and networks of neurons are reviewed. Some of the interesting results obtained from recent work in this field are described. The current status of research in this field in India is discussed.
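A minimal sketch of the simplest kind of single-neuron model such reviews cover, the leaky integrate-and-fire neuron; all parameter values below are illustrative assumptions:

```python
# Leaky integrate-and-fire neuron: integrate tau*dv/dt = -(v - v_rest) + I and
# emit a spike (then reset) whenever v crosses the threshold. Parameters are
# illustrative, not taken from the article.

def lif(I=1.6, tau=10.0, v_rest=0.0, v_th=1.0, v_reset=0.0, dt=0.1, t_max=100.0):
    v, spikes = v_rest, []
    for step in range(int(t_max / dt)):
        v += dt / tau * (-(v - v_rest) + I)   # forward-Euler membrane update
        if v >= v_th:                         # threshold crossing: spike, reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

print(f"{len(lif())} spikes in 100 ms")  # constant input -> regular firing
```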
Abstract:
We carry out an extensive numerical study of the dynamics of spiral waves of electrical activation, in the presence of periodic deformation (PD) in two-dimensional simulation domains, in the biophysically realistic mathematical models of human ventricular tissue due to (a) ten-Tusscher and Panfilov (the TP06 model) and (b) ten-Tusscher, Noble, Noble, and Panfilov (the TNNP04 model). We first consider simulations in cable-type domains, in which we calculate the conduction velocity θ and the wavelength λ of a plane wave; we show that PD leads to a periodic, spatial modulation of θ and a temporally periodic modulation of λ; both these modulations depend on the amplitude and frequency of the PD. We then examine three types of initial conditions for both TP06 and TNNP04 models and show that the imposition of PD leads to a rich variety of spatiotemporal patterns in the transmembrane potential, including states with a single rotating spiral (RS) wave, a spiral-turbulence (ST) state with a single meandering spiral, an ST state with multiple broken spirals, and a state SA in which all spirals are absorbed at the boundaries of our simulation domain. We find, for both TP06 and TNNP04 models, that spiral-wave dynamics depends sensitively on the amplitude and frequency of PD and on the initial condition. We examine how these different types of spiral-wave states can be eliminated in the presence of PD by the application of low-amplitude pulses by square- and rectangular-mesh suppression techniques. We suggest specific experiments that can test the results of our simulations.
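The ionic TP06/TNNP04 models are too large to reproduce here; the sketch below instead uses the generic two-variable Barkley model to illustrate the kind of 2D excitable-medium spiral-wave simulation involved (all parameters are illustrative assumptions):

```python
# Not the TP06/TNNP04 ionic models: a generic Barkley-model sketch of spiral-wave
# dynamics in a 2D excitable medium. All parameter values are assumptions.
import numpy as np

N, dt, dx = 200, 0.02, 0.5
a, b, eps = 0.75, 0.06, 0.02
u = np.zeros((N, N)); v = np.zeros((N, N))
u[:N // 2, :] = 1.0          # broken-wavefront initial condition
v[:, :N // 2] = a / 2.0      # refractory half-plane -> the wave curls into a spiral

def laplacian(f):
    # 5-point stencil with no-flux boundaries via edge padding.
    p = np.pad(f, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * f) / dx**2

for _ in range(2000):
    uth = (v + b) / a        # excitation threshold shifted by the recovery variable
    u += dt * (u * (1.0 - u) * (u - uth) / eps + laplacian(u))
    v += dt * (u - v)

print("transmembrane-like variable u in [%.2f, %.2f]" % (u.min(), u.max()))
```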
Abstract:
The main objective of statistical analysis of experimental investigations is to make predictions on the basis of mathematical equations so as to minimize the number of experiments. Abrasive jet machining (AJM) is an unconventional and novel machining process wherein microabrasive particles are propelled at high velocities on to a workpiece. The resulting erosion can be used for cutting, etching, cleaning, deburring, drilling and polishing. In the study completed by the authors, statistical design of experiments was successfully employed to predict the rate of material removal by AJM. This paper discusses the details of such an approach and the findings.
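A small sketch of the design-of-experiments idea, fitting a first-order regression model to a two-level factorial design; the factor names and all response values below are made up, not the paper's data:

```python
# Fit a first-order model to a 2^3 factorial design and use it to predict the
# material removal rate (MRR). All data are hypothetical stand-ins.
import numpy as np

# Coded (-1/+1) levels of three factors: pressure, stand-off distance, abrasive flow.
X = np.array([[p, s, f] for p in (-1, 1) for s in (-1, 1) for f in (-1, 1)])
y = np.array([4.1, 6.0, 4.4, 6.5, 5.2, 7.3, 5.6, 7.9])  # hypothetical MRR, mg/min

A = np.column_stack([np.ones(len(X)), X])   # model: b0 + b1*p + b2*s + b3*f
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("fitted coefficients:", np.round(coef, 3))

x_new = np.array([1.0, 0.5, -0.5, 1.0])     # 1 for the intercept, then coded settings
print("predicted MRR:", round(float(x_new @ coef), 3))
```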
Abstract:
Remote sensing provides a lucid and effective means for crop coverage identification. Crop coverage identification is a very important technique, as it provides vital information on the type and extent of crop cultivated in a particular area. This information has immense potential in planning for further cultivation activities and for optimal usage of the available fertile land. As the frontiers of space technology advance, the knowledge derived from satellite data has also grown in sophistication. Further, image classification forms the core of the solution to the crop coverage identification problem. No single classifier can satisfactorily solve all the basic crop cover mapping problems of a cultivated region. We present in this paper the experimental results of multiple classification techniques for the problem of crop cover mapping of a cultivated region. A detailed comparison of algorithms inspired by the social behaviour of insects with a conventional statistical method for crop classification is presented. These include the Maximum Likelihood Classifier (MLC), Particle Swarm Optimisation (PSO) and Ant Colony Optimisation (ACO) techniques. A high-resolution satellite image has been used for the experiments.
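A minimal sketch of the conventional baseline named above, the Maximum Likelihood Classifier, with synthetic stand-ins for the band values (not the paper's data):

```python
# Gaussian maximum likelihood classification: model each crop class as a
# multivariate Gaussian fitted to training pixels, then assign a pixel to the
# class with the highest log-likelihood. Data here are synthetic.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
train = {  # class -> synthetic training pixels (rows = pixels, cols = bands)
    "wheat": rng.normal([80, 120], 5.0, size=(100, 2)),
    "rice":  rng.normal([60, 150], 5.0, size=(100, 2)),
}
models = {c: multivariate_normal(p.mean(0), np.cov(p.T)) for c, p in train.items()}

def classify(pixel):
    return max(models, key=lambda c: models[c].logpdf(pixel))

print(classify([78, 118]))  # -> wheat
```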
Abstract:
Studies of valence bands and core levels of solids by photoelectron spectroscopy are described at length. Satellite phenomena in the core level spectra have been discussed in some detail and it has been pointed out that the intensity of satellites appearing next to metal and ligand core levels critically depends on the metal-ligand overlap. Use of photoelectron spectroscopy in investigating metal-insulator transitions and spin-state transitions in solids is examined. It is shown that relative intensities of metal Auger lines in transition metal oxides and other systems provide valuable information on the valence bands. Occurrence of interatomic Auger transitions in competition with intraatomic transitions is discussed. Applications of electron energy loss spectroscopy and other techniques of electron spectroscopy in the study of gas-solid interactions are briefly presented.
Abstract:
Discharge periods of lead-acid batteries are significantly reduced at subzero centigrade temperatures. The reduction is more than what can be expected due to decreased rates of various processes caused by a lowering of temperature, and occurs despite the fact that active materials are available for discharge. It is proposed that the major cause for this is the freezing of the electrolyte. The concentration of acid decreases during battery discharge, with a consequent increase in the freezing temperature. A battery freezes when the discharge temperature falls below the freezing temperature. A mathematical model is developed for conditions where the charge-transfer reaction is the rate-limiting step and Tafel kinetics are applicable. It is argued that freezing begins from the midplanes of electrodes and proceeds toward the reservoir in between. Ionic conduction stops when one of the electrodes freezes fully, and the time taken to reach that point, namely the discharge period, is calculated. The predictions of the model compare well with observations made at low current density (C/5) and at -20 and -40 degrees C. At higher current densities, however, diffusional resistances become important and a more complicated moving-boundary problem needs to be solved to predict the discharge periods. (C) 2009 The Electrochemical Society.
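A schematic of the rate-limiting kinetics and freezing criterion the model rests on (standard forms; the symbols are generic, not the paper's notation):

```latex
% When charge transfer is rate limiting, the local current density is assumed to
% follow Tafel kinetics,
i = i_{0} \exp\!\left( \frac{\alpha F \eta}{R T} \right),
% with exchange current density i_0, transfer coefficient \alpha and overpotential \eta.
% Discharge consumes acid, so the local acid concentration c(x,t) falls and the
% electrolyte freezing temperature T_f(c) rises; local freezing sets in once
T \;\le\; T_f\big(c(x,t)\big),
% which the model argues happens first at the electrode midplanes.
```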
Abstract:
In this article, several basic swarming laws for Unmanned Aerial Vehicles (UAVs) are developed for both the two-dimensional (2D) plane and three-dimensional (3D) space. The effects of these basic laws on the group behaviour of swarms of UAVs are studied. It is shown that when the cohesion rule is applied, an equilibrium condition is reached in which all the UAVs settle at the same altitude on a circle of constant radius. It is also proved analytically that this equilibrium condition is stable for all values of velocity and acceleration. A decentralised autonomous decision-making approach that achieves collision avoidance without any central authority is also proposed in this article. Algorithms are developed with the help of these swarming laws for two types of collision avoidance, Group-wise and Individual, in the 2D plane and 3D space. The effects of various parameters on both types of collision avoidance schemes are studied through extensive simulations.
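A toy 2D integration of the cohesion idea; the gain, speed and the small radial damping term (added only so the toy settles numerically) are assumptions, not the article's exact laws:

```python
# Cohesion-rule toy: each UAV flies at constant speed and accelerates toward the
# swarm centroid; a small radial damping term is an added assumption for settling.
import numpy as np

rng = np.random.default_rng(1)
pos = rng.uniform(-40.0, 40.0, size=(10, 2))      # 10 UAVs scattered in a plane
vel = rng.normal(size=(10, 2))
speed, k_coh, damp, dt = 5.0, 0.04, 0.5, 0.02
vel *= speed / np.linalg.norm(vel, axis=1, keepdims=True)

for _ in range(20000):
    rel = pos - pos.mean(axis=0)                  # position relative to centroid
    r_hat = rel / np.linalg.norm(rel, axis=1, keepdims=True)
    v_rad = (vel * r_hat).sum(axis=1, keepdims=True) * r_hat
    vel += dt * (-k_coh * rel - damp * v_rad)     # cohesion pull + radial damping
    vel *= speed / np.linalg.norm(vel, axis=1, keepdims=True)  # constant speed
    pos += dt * vel

r = np.linalg.norm(pos - pos.mean(axis=0), axis=1)
print("orbit radii:", np.round(r, 1))  # cluster near speed / sqrt(k_coh) = 25
```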
Abstract:
Lateral or transaxial truncation of cone-beam data can occur either due to the field-of-view limitation of the scanning apparatus or in region-of-interest tomography. In this paper, we suggest two new methods to handle lateral truncation in helical scan CT. It is seen that reconstruction with laterally truncated projection data, assuming it to be complete, gives severe artifacts which even penetrate into the field of view. A row-by-row data completion approach using linear prediction is introduced for helical scan truncated data. An extension of this technique, known as the windowed linear prediction approach, is introduced. The efficacy of the two techniques is shown using simulation with standard phantoms. A quantitative image quality measure of the resulting reconstructed images is used to evaluate the performance of the proposed methods against an extension of a standard existing technique.
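A small sketch of the row-by-row completion idea: extend a truncated detector row with an autoregressive (linear-prediction) model fitted to its measured part. The model order and the toy profile are assumptions, not the paper's settings:

```python
# Linear-prediction completion of a laterally truncated projection row.
import numpy as np

def lp_extend(row, n_missing, order=8):
    """Extrapolate `row` by `n_missing` samples with an AR(order) predictor."""
    # Least-squares fit: row[t] ~ sum_k a[k] * row[t-1-k] over the measured data.
    X = np.array([row[t - order:t][::-1] for t in range(order, len(row))])
    a, *_ = np.linalg.lstsq(X, row[order:], rcond=None)
    out = list(row)
    for _ in range(n_missing):
        out.append(float(np.dot(a, out[-order:][::-1])))
    return np.asarray(out)

# Toy "projection row": a smooth profile truncated at the detector edge.
full = np.cos(np.linspace(0, np.pi, 128)) ** 2
measured = full[:100]                       # lateral truncation: last 28 samples lost
completed = lp_extend(measured, n_missing=28)
print("RMS completion error:",
      float(np.sqrt(np.mean((completed[100:] - full[100:]) ** 2))))
```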
Abstract:
In this paper we study two problems in feedback stabilization. The first is the simultaneous stabilization problem, which can be stated as follows. Given plants G_0, G_1, ..., G_l, does there exist a single compensator C that stabilizes all of them? The second is that of stabilization by a stable compensator, or more generally, a "least unstable" compensator. Given a plant G, we would like to know whether or not there exists a stable compensator C that stabilizes G; if not, what is the smallest number of right half-plane poles (counted according to their McMillan degree) that any stabilizing compensator must have? We show that the two problems are equivalent in the following sense. The problem of simultaneously stabilizing l + 1 plants can be reduced to the problem of simultaneously stabilizing l plants using a stable compensator, which in turn can be stated as the following purely algebraic problem. Given 2l matrices A_1, ..., A_l, B_1, ..., B_l, where A_i, B_i are right-coprime for all i, does there exist a matrix M such that A_i + M B_i is unimodular for all i? Conversely, the problem of simultaneously stabilizing l plants using a stable compensator can be formulated as one of simultaneously stabilizing l + 1 plants. The problem of determining whether or not there exists an M such that A + BM is unimodular, given a right-coprime pair (A, B), turns out to be a special case of a question concerning a matrix division algorithm in a proper Euclidean domain. We give an answer to this question, and we believe this result might be of some independent interest. We show that, given two n x m plants G_0 and G_1, we can generically stabilize them simultaneously provided either n or m is greater than one. In contrast, simultaneous stabilizability of two single-input-single-output plants, g_0 and g_1, is not generic.
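For orientation, the single-plant case of the algebraic condition, in the standard coprime-factorization language (a well-known reduction, sketched here with generic symbols):

```latex
% Single-plant case (l = 1): write G = N D^{-1} with N, D right coprime over the
% ring of stable proper rational matrices. A stable compensator M stabilizes G
% if and only if
D + M N \quad \text{is unimodular},
% i.e. invertible with a stable proper inverse. This is the condition
% "A_i + M B_i unimodular" of the abstract, with (A, B) = (D, N).
```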
Abstract:
In this paper we develop compilation techniques for the realization of applications described in a High Level Language (HLL) onto a Runtime Reconfigurable Architecture. The compiler determines Hyper Operations (HyperOps) that are subgraphs of a data flow graph (of an application) and comprise elementary operations that have a strong producer-consumer relationship. These HyperOps are hosted on computation structures that are provisioned on demand at runtime. We also report compiler optimizations that collectively reduce the overheads of data-driven computations in runtime reconfigurable architectures. On average, HyperOps offer a 44% reduction in total execution time and an 18% reduction in management overheads as compared to using basic blocks as coarse-grained operations. We show that HyperOps formed using our compiler are suitable to support data flow software pipelining.
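As a toy illustration of grouping operations with strong producer-consumer relationships (not the paper's HyperOp formation algorithm), a sketch that merges single-producer/single-consumer chains of a small data flow graph:

```python
# Toy sketch: merge single-producer/single-consumer chains of a data flow graph
# into coarser "HyperOp"-like groups. The graph below is hypothetical.
from collections import defaultdict

edges = [("a", "b"), ("b", "c"), ("c", "d"), ("e", "c"), ("d", "f")]
succ, pred = defaultdict(list), defaultdict(list)
for u, v in edges:
    succ[u].append(v)
    pred[v].append(u)

nodes = sorted(set(succ) | set(pred))
group, groups = {}, []
for n in nodes:
    if n in group:
        continue
    chain = [n]
    # Extend while the tail has exactly one consumer with exactly one producer.
    while len(succ[chain[-1]]) == 1 and len(pred[succ[chain[-1]][0]]) == 1:
        chain.append(succ[chain[-1]][0])
    for m in chain:
        group[m] = len(groups)
    groups.append(chain)

print(groups)  # -> [['a', 'b'], ['c', 'd', 'f'], ['e']]
```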