70 results for Industrial automation techniques

at Indian Institute of Science - Bangalore - India


Relevance:

30.00%

Abstract:

In this paper we study two problems in feedback stabilization. The first is the simultaneous stabilization problem, which can be stated as follows: given plants G_0, G_1, ..., G_l, does there exist a single compensator C that stabilizes all of them? The second is that of stabilization by a stable compensator or, more generally, a "least unstable" compensator: given a plant G, we would like to know whether or not there exists a stable compensator C that stabilizes G; if not, what is the smallest number of right half-plane poles (counted according to their McMillan degree) that any stabilizing compensator must have? We show that the two problems are equivalent in the following sense. The problem of simultaneously stabilizing l + 1 plants can be reduced to the problem of simultaneously stabilizing l plants using a stable compensator, which in turn can be stated as the following purely algebraic problem: given 2l matrices A_1, ..., A_l, B_1, ..., B_l, where A_i, B_i are right-coprime for all i, does there exist a matrix M such that A_i + M B_i is unimodular for all i? Conversely, the problem of simultaneously stabilizing l plants using a stable compensator can be formulated as one of simultaneously stabilizing l + 1 plants. The problem of determining whether or not there exists an M such that A + BM is unimodular, given a right-coprime pair (A, B), turns out to be a special case of a question concerning a matrix division algorithm in a proper Euclidean domain. We give an answer to this question, and we believe this result might be of some independent interest. We show that, given two n x m plants G_0 and G_1, we can generically stabilize them simultaneously provided either n or m is greater than one. In contrast, simultaneous stabilizability of two single-input-single-output plants, g_0 and g_1, is not generic.
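For reference, the chain of reductions asserted in the abstract can be written compactly (notation as in the abstract):

```latex
% Simultaneous stabilization of l+1 plants reduces to stable simultaneous
% stabilization of l plants, which is the algebraic unimodularity question.
\[
  \underbrace{\exists\, C:\ C \text{ stabilizes } G_0, G_1, \dots, G_l}_{\text{simultaneous stabilization of } l+1 \text{ plants}}
  \;\Longleftrightarrow\;
  \underbrace{\exists\, \text{stable } C:\ C \text{ stabilizes } G_1, \dots, G_l}_{\text{stable simultaneous stabilization of } l \text{ plants}}
  \;\Longleftrightarrow\;
  \exists\, M:\ A_i + M B_i \text{ unimodular},\ i = 1, \dots, l.
\]
```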

Relevance:

30.00%

Abstract:

The heat capacity of a substance is related to the structure and constitution of the material, and its measurement is a standard technique of physical investigation. In this review, the classical methods are first analyzed briefly and their recent extensions are summarized, and the merits and demerits of these methods are pointed out. The newer techniques, such as the a.c. method, the relaxation method, the pulse methods, laser flash calorimetry and other methods developed to extend heat capacity measurements to newer classes of materials and to extreme conditions of sample geometry, pressure and temperature, are comprehensively reviewed. Examples of recent work and details of the experimental systems are provided for each method. The introduction of automation for controlling and monitoring the experiments and for data processing is also discussed. Two hundred and eight references and 18 figures are used to illustrate the various techniques.
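As a concrete illustration of one of the newer techniques mentioned above, the relaxation method infers heat capacity from the exponential return of the sample temperature to the bath temperature, C = K * tau. The sketch below fits that decay to synthetic data; all numbers are invented for illustration and are not drawn from the review.

```python
import numpy as np
from scipy.optimize import curve_fit

# Relaxation calorimetry (illustrative): after a small heat pulse, the sample
# temperature decays as T(t) = T_bath + dT * exp(-t / tau), and C = K * tau.
K = 2.0e-7          # thermal conductance of the weak link, W/K (assumed value)
T_bath = 4.2        # bath temperature, K
t = np.linspace(0, 50, 500)                              # time, s
tau_true, dT_true = 12.0, 0.05                           # "measured" decay
T = T_bath + dT_true * np.exp(-t / tau_true)
T += np.random.default_rng(0).normal(0, 1e-4, t.size)    # measurement noise

def decay(t, dT, tau):
    return T_bath + dT * np.exp(-t / tau)

(dT_fit, tau_fit), _ = curve_fit(decay, t, T, p0=(0.1, 5.0))
print(f"tau = {tau_fit:.2f} s, heat capacity C = {K * tau_fit:.3e} J/K")
```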

Relevance:

30.00%

Abstract:

The concept of feature selection in a nonparametric unsupervised learning environment is practically undeveloped because no true measure for the effectiveness of a feature exists in such an environment. The lack of a feature selection phase preceding the clustering process seriously affects the reliability of such learning. New concepts such as significant features, level of significance of features, and immediate neighborhood are introduced, which implicitly meet the need for feature selection in the context of clustering techniques.
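Since the abstract does not spell out its definitions of significance, the following sketch uses a generic dispersion-based score merely to illustrate the idea of ranking features without labels before clustering; it is a stand-in, not the authors' measure.

```python
import numpy as np

def rank_features_by_dispersion(X):
    """Rank features of an unlabeled data matrix X (n_samples x n_features)
    by normalized dispersion -- a generic stand-in for a 'level of
    significance' score computed without class labels."""
    mean = X.mean(axis=0)
    score = X.std(axis=0) / (np.abs(mean) + 1e-12)   # coefficient of variation
    order = np.argsort(score)[::-1]                  # most dispersed first
    return order, score[order]

rng = np.random.default_rng(1)
X = np.column_stack([
    rng.normal(0, 1, 200),      # informative: spreads the data into groups
    rng.normal(5, 0.01, 200),   # nearly constant: insignificant for clustering
])
order, scores = rank_features_by_dispersion(X)
print("feature ranking:", order, "scores:", np.round(scores, 3))
```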

Relevance:

30.00%

Abstract:

When a uniform flow of any nature is interrupted, the readjustment of the flow results in concentrations and rarefactions, so that the peak value of the flow parameter will be higher than that which an elementary computation would suggest. When stress flow in a structure is interrupted, there are stress concentrations. These are generally localized and often large in relation to the values indicated by simple equilibrium calculations. With the advent of the industrial revolution, dynamic and repeated loading of materials became commonplace in engine parts and fast-moving vehicles. This led to serious fatigue failures arising from stress concentrations. Also, many metal forming processes, fabrication techniques and weak-link type safety systems benefit substantially from the intelligent use or avoidance, as appropriate, of stress concentrations. As a result, in the last 80 years, the study and evaluation of stress concentrations has been a primary objective of solid mechanics. Exact mathematical analysis of stress concentrations in finite bodies presents considerable difficulty for all but a few problems of infinite fields, concentric annuli and the like, treated under the presumption of small deformation, linear elasticity. A whole series of techniques have been developed to deal with different classes of shapes and domains, causes and sources of concentration, material behaviour, phenomenological formulation, etc. These include real and complex functions, conformal mapping, transform techniques, integral equations, finite differences and relaxation, and, more recently, the finite element methods. With the advent of large high speed computers, the development of finite element concepts and a good understanding of functional analysis, it is now, in principle, possible to obtain with economy satisfactory solutions to a whole range of concentration problems by intelligently combining theory and computer application. An example is the hybridization of continuum concepts with computer based finite element formulations. This new situation also makes possible a more direct approach to the problem of design, which is the primary purpose of most engineering analyses. The trend would appear to be clear: the computer will shape the theory, analysis and design.
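A classical closed-form instance of the "infinite field" solutions mentioned above is Kirsch's solution for a circular hole in an infinite plate under remote uniaxial tension, whose hoop stress at the hole boundary peaks at three times the applied stress. The snippet below evaluates that textbook formula; it is standard background rather than a result of this article.

```python
import numpy as np

def kirsch_hoop_stress_at_hole(sigma, theta):
    """Hoop stress on the boundary of a circular hole in an infinite plate
    under remote uniaxial tension sigma (Kirsch, 1898):
    sigma_tt(a, theta) = sigma * (1 - 2*cos(2*theta)),
    with theta measured from the loading direction."""
    return sigma * (1.0 - 2.0 * np.cos(2.0 * theta))

sigma = 100.0  # applied remote stress, MPa
theta = np.linspace(0, np.pi, 7)
for th, s in zip(theta, kirsch_hoop_stress_at_hole(sigma, theta)):
    print(f"theta = {np.degrees(th):5.1f} deg  hoop stress = {s:7.1f} MPa")
# Peak value is 3*sigma at theta = 90 deg: a stress concentration factor of 3.
```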

Relevance:

30.00%

Abstract:

This paper focuses on optimisation algorithms inspired by swarm intelligence for classification of high resolution multispectral satellite images. Amongst the multiple benefits and uses of remote sensing, one of the most important has been its use in solving the problem of land cover mapping. As the frontiers of space technology advance, the knowledge derived from satellite data has also grown in sophistication. Image classification forms the core of the solution to the land cover mapping problem. No single classifier satisfactorily classifies all the basic land cover classes of an urban region, and in both supervised and unsupervised classification methods, evolutionary algorithms are not exploited to their full potential. This work tackles land cover mapping with Ant Colony Optimisation (ACO) and Particle Swarm Optimisation (PSO), arguably the most popular algorithms in this category. We present the results of classification techniques using swarm intelligence for the problem of land cover mapping for an urban region. The high resolution QuickBird data has been used for the experiments.
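As a rough illustration of how swarm intelligence can drive unsupervised classification, the sketch below uses a minimal PSO to search for cluster centres in spectral space on synthetic "pixels"; it is not the paper's ACO/PSO formulation and does not use the QuickBird data.

```python
import numpy as np

rng = np.random.default_rng(2)
# Fake 4-band pixels drawn from three spectral classes.
pixels = np.vstack([rng.normal(m, 0.05, (100, 4)) for m in (0.2, 0.5, 0.8)])

def quantization_error(centres, X):
    # Mean distance of each pixel to its nearest cluster centre (lower is better).
    d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
    return d.min(axis=1).mean()

K, n_particles, dims = 3, 20, 4
pos = rng.uniform(0, 1, (n_particles, K, dims))      # each particle = K centres
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([quantization_error(p, pixels) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(100):
    r1, r2 = rng.uniform(size=pos.shape), rng.uniform(size=pos.shape)
    vel = 0.72 * vel + 1.49 * r1 * (pbest - pos) + 1.49 * r2 * (gbest - pos)
    pos += vel
    f = np.array([quantization_error(p, pixels) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best quantization error:", round(pbest_f.min(), 4))
```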

Relevance:

30.00%

Abstract:

The prime focus of this study is to design a 50 mm internal diameter diaphragmless shock tube that can be used in an industrial facility for repeated shock wave loading. The instantaneous rise in pressure and temperature of a medium can be used in a variety of industrial applications. We designed, fabricated and tested three different shock wave generators, one of which employs a highly elastic rubber membrane while the other two use a fast-acting pneumatic valve instead of a conventional metal diaphragm. The valve opening speed is measured with a high-speed camera: for the systems with a pneumatic cylinder it ranges from 0.325 to 1.15 m/s, while it is around 8.3 m/s for the rubber membrane. Experiments are conducted using the three diaphragmless systems and the results are analyzed carefully to obtain a relation between the opening speed of the valve and the amount of gas actually utilized in the generation of the shock wave for each system. The rubber membrane is not suitable for industrial applications because it needs to be replaced regularly and cannot withstand high driver pressures. The maximum shock Mach number obtained using the new diaphragmless system with the pneumatic valve is 2.125 +/- 0.2%. This system shows much promise for automation in an industrial environment.
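For context, the ideal shock tube relation below links the driver-to-driven pressure ratio p4/p1 to the incident shock Mach number Ms under instantaneous diaphragm opening; it is standard gas-dynamics background (an upper bound for a slowly opening valve), not the paper's calibration of the diaphragmless systems.

```python
import numpy as np

def pressure_ratio_for_Ms(Ms, g1=1.4, g4=1.4, a1_over_a4=1.0):
    """Ideal shock tube equation: driver/driven pressure ratio p4/p1 that
    produces an incident shock of Mach number Ms (perfect gases, instant
    diaphragm opening)."""
    p2_p1 = 1.0 + 2.0 * g1 / (g1 + 1.0) * (Ms**2 - 1.0)
    term = 1.0 - (g4 - 1.0) / (g1 + 1.0) * a1_over_a4 * (Ms - 1.0 / Ms)
    return p2_p1 * term ** (-2.0 * g4 / (g4 - 1.0))

for Ms in (1.5, 2.0, 2.125):
    print(f"Ms = {Ms:5.3f}  ->  p4/p1 = {pressure_ratio_for_Ms(Ms):8.1f}")
```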

Relevance:

30.00%

Abstract:

An understanding of application I/O access patterns is useful in several situations. First, gaining insight into what applications are doing with their data at a semantic level helps in designing efficient storage systems. Second, it helps create benchmarks that mimic realistic application behavior closely. Third, it enables autonomic systems, as the information obtained can be used to adapt the system in a closed loop. All these use cases require the ability to extract the application-level semantics of I/O operations. Methods such as modifying application code to associate I/O operations with semantic tags are intrusive. It is well known that network file system traces are an important source of information that can be obtained non-intrusively and analyzed either online or offline. These traces are a sequence of primitive file system operations and their parameters. Simple counting, statistical analysis or deterministic search techniques are inadequate for discovering application-level semantics in the general case, because of the inherent variation and noise in realistic traces. In this paper, we describe a trace analysis methodology based on Profile Hidden Markov Models. We show that the methodology has powerful discriminatory capabilities that enable it to recognize applications based on the patterns in the traces, and to mark out regions in a long trace that encapsulate sets of primitive operations representing higher-level application actions. It is robust enough to work around discrepancies between training and target traces, such as in length and interleaving with other operations. We demonstrate the feasibility of recognizing patterns based on a small sampling of the trace, enabling faster trace analysis. Preliminary experiments show that the method is capable of learning accurate profile models on live traces in an online setting. We present a detailed evaluation of this methodology in a UNIX environment using NFS traces of selected commonly used applications such as compilations, as well as industrial-strength benchmarks such as TPC-C and Postmark, and discuss its capabilities and limitations in the context of the use cases mentioned above.
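To make the recognition idea concrete, the sketch below scores an operation trace against per-application models; for brevity it uses first-order Markov chains as a simplified stand-in for the Profile Hidden Markov Models of the paper, and the traces are fabricated.

```python
import numpy as np

OPS = ["lookup", "read", "write", "getattr", "create", "remove"]
IDX = {op: i for i, op in enumerate(OPS)}

def train_transition_model(traces, alpha=1.0):
    """Estimate a smoothed first-order transition matrix from operation traces
    (a simplified stand-in for the paper's Profile HMMs)."""
    counts = np.full((len(OPS), len(OPS)), alpha)
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            counts[IDX[a], IDX[b]] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def log_likelihood(model, trace):
    return sum(np.log(model[IDX[a], IDX[b]]) for a, b in zip(trace, trace[1:]))

# Toy 'compilation' vs 'mail server' access patterns (fabricated for illustration).
compile_traces = [["lookup", "getattr", "read", "read", "write"] * 10]
mail_traces = [["create", "write", "getattr", "remove"] * 10]
models = {"compile": train_transition_model(compile_traces),
          "mail": train_transition_model(mail_traces)}

target = ["lookup", "getattr", "read", "read", "read", "write"] * 5
best = max(models, key=lambda name: log_likelihood(models[name], target))
print("trace most resembles:", best)
```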

Relevance:

30.00%

Abstract:

Given a parametrized n-dimensional SQL query template and a choice of query optimizer, a plan diagram is a color-coded pictorial enumeration of the execution plan choices of the optimizer over the query parameter space. These diagrams have proved to be a powerful metaphor for the analysis and redesign of modern optimizers, and are gaining currency in diverse industrial and academic institutions. However, their utility is adversely impacted by the impractically large computational overheads incurred when standard brute-force exhaustive approaches are used for producing fine-grained diagrams on high-dimensional query templates. In this paper, we investigate strategies for efficiently producing close approximations to complex plan diagrams. Our techniques are customized to the features available in the optimizer's API, ranging from the generic optimizers that provide only the optimal plan for a query, to those that also support costing of sub-optimal plans and enumerating rank-ordered lists of plans. The techniques collectively feature both random and grid sampling, as well as inference techniques based on nearest-neighbor classifiers, parametric query optimization and plan cost monotonicity. Extensive experimentation with a representative set of TPC-H and TPC-DS-based query templates on industrial-strength optimizers indicates that our techniques are capable of delivering 90% accurate diagrams while incurring less than 15% of the computational overheads of the exhaustive approach. In fact, for full-featured optimizers, we can guarantee zero error with less than 10% overheads. These approximation techniques have been implemented in the publicly available Picasso optimizer visualization tool.
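A minimal sketch of the sampling-plus-inference strategy follows: probe the optimizer at a sparse random sample of the selectivity grid and assign every other point the plan of its nearest probed neighbour. The function optimizer_choose_plan is a hypothetical stand-in for an optimizer API call; the paper's techniques additionally exploit ranked plan lists and plan cost monotonicity.

```python
import numpy as np

GRID = 64               # resolution per selectivity dimension
SAMPLE_FRACTION = 0.1
rng = np.random.default_rng(3)

def optimizer_choose_plan(x, y):
    """Hypothetical stand-in for an optimizer API call returning the optimal
    plan id at selectivities (x, y); here, synthetic plan regions."""
    return int(3 * x) + 3 * int(2 * y * y)

# Probe the optimizer on a sparse random sample of the grid ...
xs, ys = np.meshgrid(np.linspace(0, 1, GRID), np.linspace(0, 1, GRID))
points = np.column_stack([xs.ravel(), ys.ravel()])
sample_idx = rng.choice(len(points), int(SAMPLE_FRACTION * len(points)),
                        replace=False)
sample_plans = np.array([optimizer_choose_plan(x, y)
                         for x, y in points[sample_idx]])

# ... then infer every unprobed point from its nearest sampled neighbour.
d = np.linalg.norm(points[:, None, :] - points[sample_idx][None, :, :], axis=2)
approx = sample_plans[d.argmin(axis=1)]

exact = np.array([optimizer_choose_plan(x, y) for x, y in points])
print(f"diagram accuracy: {100 * (approx == exact).mean():.1f}% "
      f"with {100 * SAMPLE_FRACTION:.0f}% of the optimizer calls")
```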

Relevance:

30.00%

Abstract:

This paper presents a novel algorithm for compression of single-lead electrocardiogram (ECG) signals. The method is based on pole-zero modelling of the Discrete Cosine Transformed (DCT) signal. An extension is proposed to the well-known Steiglitz-McBride algorithm to model the higher frequency components of the input signal more accurately. This is achieved by weighting the error function minimized by the algorithm to estimate the model parameters. The data compression achieved by the parametric model is further enhanced by Differential Pulse Code Modulation (DPCM) of the model parameters. The method achieves a compression ratio in the range of 1:20 to 1:40, which far exceeds those achieved by most current methods.
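The sketch below keeps only the skeleton of such a pipeline: DCT energy compaction followed by DPCM coding of the retained parameters. The pole-zero (weighted Steiglitz-McBride) modelling stage is replaced by simple coefficient truncation for brevity, and the ECG beat is synthetic.

```python
import numpy as np
from scipy.fft import dct, idct

# Synthetic ECG-like beat (real data would come from a recording).
t = np.linspace(0, 1, 512)
ecg = np.exp(-((t - 0.3) / 0.012) ** 2) - 0.25 * np.exp(-((t - 0.35) / 0.025) ** 2)

C = dct(ecg, norm="ortho")
K = 24                  # retained low-frequency coefficients (stand-in for the
kept = C[:K]            # pole-zero model parameters of the paper)

# DPCM stage: store the first value plus quantized successive differences.
step = 0.005
diffs = np.round(np.diff(kept) / step) * step
decoded = np.concatenate([[kept[0]], kept[0] + np.cumsum(diffs)])

C_hat = np.zeros_like(C)
C_hat[:K] = decoded
ecg_hat = idct(C_hat, norm="ortho")

prd = 100 * np.linalg.norm(ecg - ecg_hat) / np.linalg.norm(ecg)
print(f"compression ratio ~ {len(ecg)}:{K}, reconstruction error (PRD) = {prd:.2f}%")
```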

Relevance:

30.00%

Abstract:

Advanced bus-clamping pulse width modulation (ABCPWM) techniques are advantageous in terms of line current distortion and inverter switching loss in voltage source inverter-fed applications. However, the PWM waveforms corresponding to these techniques are not amenable to carrier-based generation. The modulation process in ABCPWM methods is analyzed here from a "per-phase" perspective. It is shown that three sets of descendant modulating functions (or modified modulating functions) can be generated from the three-phase sinusoidal signals. Each set of modified modulating functions can be used to produce the PWM waveform of a given phase in a computationally efficient manner. Theoretical results and experimental investigations on a 5 hp motor drive are presented.
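The descendant modulating functions themselves are not reproduced here, but the per-phase mechanics they build on can be sketched: a zero-sequence signal is added to the sinusoidal references so that one phase at a time stays clamped to a DC rail, giving modified references that a triangular carrier can process. The following is a generic 60-degree bus-clamping example, not the paper's method.

```python
import numpy as np

def bus_clamped_references(m=0.9, n=2048):
    """Generic bus-clamping PWM: add the zero-sequence offset that pins the
    phase with the largest instantaneous magnitude to a rail (illustrative
    only -- not the paper's descendant modulating functions)."""
    wt = np.linspace(0, 2 * np.pi, n, endpoint=False)
    abc = m * np.cos(wt[:, None] - np.array([0, 2, 4]) * np.pi / 3)
    k = np.abs(abc).argmax(axis=1)                       # phase nearest a rail
    zs = np.sign(abc[np.arange(n), k]) - abc[np.arange(n), k]
    return abc + zs[:, None]                             # modified references

refs = bus_clamped_references()
frac = np.isclose(np.abs(refs), 1.0).mean(axis=0)
print("per-phase clamped fraction of the cycle:", np.round(frac, 2))
# Each column of `refs` would be compared with a +/-1 triangular carrier to
# produce the per-phase PWM waveform in a carrier-based implementation.
```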

Relevance:

30.00%

Abstract:

The problem addressed in this paper is sound, scalable, demand-driven null-dereference verification for Java programs. Our approach consists conceptually of a base analysis plus two major extensions for enhanced precision. The base analysis is a dataflow analysis wherein we propagate formulas in the backward direction from a given dereference, and compute a necessary condition at the entry of the program for the dereference to be potentially unsafe. The extensions are motivated by the presence of certain "difficult" constructs in real programs, e.g., virtual calls with too many candidate targets, and library method calls, which need excessive time to be analyzed fully. The base analysis is hence configured to skip such a difficult construct when it is encountered, by dropping all information tracked so far that could potentially be affected by the construct. Our extensions are essentially more precise ways to account for the effect of these constructs on the information being tracked, without requiring full analysis of these constructs. The first extension is a novel scheme to transmit formulas along certain kinds of def-use edges, while the second extension is based on using manually constructed backward-direction summary functions of library methods. We have implemented our approach and applied it to a set of real-life benchmarks. The base analysis is on average able to declare about 84% of dereferences in each benchmark as safe, while the two extensions push this number up to 91%.
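A toy rendering of the base analysis on a straight-line program may help: a formula is propagated backward from the dereference, and a "difficult" statement causes the analysis to drop the facts it could affect. This is a schematic of the idea only, not the paper's implementation.

```python
# Toy backward propagation of a necessary condition for a null dereference.
# Statements are walked in reverse; a "difficult" construct (e.g., a virtual
# call with many targets) makes the analysis drop everything it tracked,
# mirroring the skipping behaviour described in the abstract.

def backward_null_condition(stmts, var):
    cond = {f"{var} == null"}          # condition just before the dereference
    for kind, lhs, rhs in reversed(stmts):
        if kind == "assign":           # lhs = rhs: substitute rhs for lhs
            cond = {c.replace(lhs, rhs) for c in cond}
        elif kind == "new":            # lhs = new ...: lhs is non-null here
            if any(c == f"{lhs} == null" for c in cond):
                return None            # condition unsatisfiable: dereference safe
        elif kind == "difficult":      # skipped construct: drop affected facts
            cond = {c for c in cond if lhs not in c} or {"true"}
    return cond                        # necessary condition at program entry

prog = [
    ("assign", "p", "q"),              # p = q
    ("difficult", "r", None),          # virtual call that may touch r
    ("assign", "x", "p"),              # x = p
]                                      # ... followed by: x.f  (the dereference)
print(backward_null_condition(prog, "x"))   # {'q == null'}: entry condition
```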

Relevance:

30.00%

Abstract:

Computer Assisted Assessment (CAA) has existed for many years. While some forms of CAA do not require sophisticated text understanding (e.g., multiple choice questions), there are also student answers that consist of free text and require analysis of the text in the answer. Research towards the latter has to date concentrated on two main sub-tasks: (i) grading of essays, done mainly by checking the style, correctness of grammar, and coherence of the essay, and (ii) assessment of short free-text answers. In this paper, we present a structured view of relevant research in automated assessment techniques for short free-text answers. We review papers spanning the last 15 years of research, with emphasis on recent papers. Our main objectives are twofold: first, we present the survey in a structured way by segregating information on dataset, problem formulation, techniques, and evaluation measures; second, we discuss some of the potential future directions in this domain, which we hope will be helpful for researchers.

Relevance:

20.00%

Abstract:

A pulsewidth modulation (PWM) technique is proposed for minimizing the rms torque ripple in inverter-fed induction motor drives subject to a given average switching frequency of the inverter. The proposed PWM technique is a combination of optimal continuous modulation and discontinuous modulation. The proposed technique is evaluated both theoretically and experimentally, and is compared with well-known PWM techniques. It is shown that the proposed method reduces the rms torque ripple by about 30% at the rated speed of the motor drive, compared to conventional space vector PWM.
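Purely as a schematic of the hybrid idea (the paper derives real analytical ripple expressions), one can imagine a per-sector selector that compares a ripple cost for the continuous and discontinuous candidates, crediting the discontinuous one with the higher carrier frequency permitted at the same average switching frequency. Everything in ripple_cost below is a hypothetical placeholder, not the paper's model.

```python
import numpy as np

def ripple_cost(sequence, angle, m):
    """Hypothetical placeholder for an rms torque-ripple measure of one
    switching sequence over a subcycle (the paper derives real ones)."""
    base = {"continuous": 1.0,
            "discontinuous": 0.8 + 0.6 * np.cos(3 * angle) ** 2}
    return base[sequence] * m

# A clamped (discontinuous) sequence switches one phase less, so its carrier
# frequency can be raised ~1.5x at the same average switching frequency;
# subcycle ripple scales roughly with the subcycle duration squared.
m = 0.8
angles = np.linspace(0, np.pi / 3, 13)          # one 60-degree sector
choice = ["discontinuous"
          if ripple_cost("discontinuous", a, m) / 1.5**2
          < ripple_cost("continuous", a, m) else "continuous"
          for a in angles]
print(choice)
```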

Relevance:

20.00%

Abstract:

The random early detection (RED) technique has seen a lot of research over the years. However, the functional relationship between RED performance and its parameters, viz., queue weight (omega_q), marking probability (max_p), minimum threshold (min_th) and maximum threshold (max_th), is not analytically available. In this paper, we formulate a probabilistic constrained optimization problem by assuming a nonlinear relationship between the RED average queue length and its parameters. This problem involves all the RED parameters as the variables of the optimization problem. We use the barrier and the penalty function approaches for its solution. However, as above, the exact functional relationship between the barrier and penalty objective functions and the optimization variables is not known, but noisy samples of these are available for different parameter values. Thus, for obtaining the gradient and Hessian of the objective, we use certain recently developed simultaneous perturbation stochastic approximation (SPSA) based estimates of these. We propose two four-timescale stochastic approximation algorithms based on certain modified second-order SPSA updates for finding the optimum RED parameters. We present the results of detailed simulation experiments conducted over different network topologies and network/traffic conditions/settings, comparing the performance of our algorithms with variants of RED and a few other well-known active queue management (AQM) techniques discussed in the literature.
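The heart of such algorithms is the SPSA gradient estimate: two noisy evaluations at simultaneously perturbed parameter vectors estimate the gradient in any dimension. The sketch below runs a basic one-timescale, first-order SPSA on a stand-in objective; the paper itself uses four-timescale, second-order variants driven by the real RED queue dynamics, and all numbers here are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

def noisy_objective(theta):
    """Stand-in for the (unknown) RED performance measure: only noisy samples
    are observable, as with the barrier/penalty objectives in the paper."""
    target = np.array([0.002, 0.1, 5.0, 15.0])   # pretend-optimal RED params
    return np.sum((theta - target) ** 2) + rng.normal(0, 0.01)

def spsa_gradient(f, theta, c):
    delta = rng.choice([-1.0, 1.0], size=theta.size)   # Rademacher perturbation
    return (f(theta + c * delta) - f(theta - c * delta)) / (2 * c * delta)

# theta = (omega_q, max_p, min_th, max_th), as in the abstract.
theta = np.array([0.01, 0.5, 1.0, 30.0])
for k in range(1, 2001):
    a_k = 0.05 / k ** 0.602                      # standard SPSA gain decay
    c_k = 0.1 / k ** 0.101
    theta -= a_k * spsa_gradient(noisy_objective, theta, c_k)

print("estimated RED parameters:", np.round(theta, 3))
```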