29 results for Models and Methods
at Indian Institute of Science - Bangalore - India
Abstract:
Floquet analysis is widely used for small-order systems (say, order M < 100) to find trim results of control inputs and periodic responses, and stability results of damping levels and frequencies. Presently, however, it is practical neither for design applications nor for comprehensive analysis models that lead to large systems (M > 100); the run time on a sequential computer is simply prohibitive. Accordingly, a massively parallel Floquet analysis is developed with emphasis on large systems, and it is implemented on two SIMD or single-instruction, multiple-data computers with 4096 and 8192 processors. The focus of this development is a parallel shooting method with damped Newton iteration to generate trim results; the Floquet transition matrix (FTM) comes out as a byproduct. The eigenvalues and eigenvectors of the FTM are computed by a parallel QR method, and thereby stability results are generated. For illustration, flap and flap-lag stability of isolated rotors are treated by the parallel analysis and by a corresponding sequential analysis with the conventional shooting and QR methods; linear quasisteady airfoil aerodynamics and a finite-state three-dimensional wake model are used. Computational reliability is quantified by the condition numbers of the Jacobian matrices in Newton iteration, the condition numbers of the eigenvalues and the residual errors of the eigenpairs, and reliability figures are comparable in both the parallel and sequential analyses. Compared to the sequential analysis, the parallel analysis reduces the run time of large systems dramatically, and the reduction increases with increasing system order; this finding offers considerable promise for design and comprehensive-analysis applications.
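The stability step lends itself to a compact illustration. The sketch below is a minimal serial Python illustration, not the paper's parallel shooting method: it builds the FTM of a small toy periodic system by integrating the identity-initialized fundamental matrix over one period, then reads damping and frequency from the FTM eigenvalues. The system matrix A(t), period T and order M are assumptions for illustration only.

import numpy as np
from scipy.integrate import solve_ivp

T = 2.0 * np.pi   # period of the toy system
M = 2             # system order (the paper targets M > 100)

def A(t):
    # Hypothetical periodic system matrix, for illustration only.
    return np.array([[0.0, 1.0],
                     [-(1.0 + 0.2 * np.cos(t)), -0.05]])

def rhs(t, y):
    # Propagate all M columns of the fundamental matrix at once.
    Phi = y.reshape(M, M)
    return (A(t) @ Phi).ravel()

sol = solve_ivp(rhs, (0.0, T), np.eye(M).ravel(), rtol=1e-10, atol=1e-12)
FTM = sol.y[:, -1].reshape(M, M)   # Floquet transition matrix

# Floquet exponents: damping levels (real parts) and frequencies (imaginary parts).
mults = np.linalg.eigvals(FTM).astype(complex)
exponents = np.log(mults) / T
print("stable" if np.all(exponents.real < 0.0) else "unstable", exponents)

In the paper's setting the FTM instead falls out of the trim shooting iteration as a byproduct, and its eigendecomposition is computed by a parallel QR method.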
Abstract:
This paper presents a novel algorithm for compression of single-lead electrocardiogram (ECG) signals. The method is based on pole-zero modelling of the Discrete Cosine Transformed (DCT) signal. An extension is proposed to the well-known Steiglitz-McBride algorithm to model the higher frequency components of the input signal more accurately. This is achieved by weighting the error function minimized by the algorithm to estimate the model parameters. The data compression achieved by the parametric model is further enhanced by Differential Pulse Code Modulation (DPCM) of the model parameters. The method accomplishes a compression ratio in the range of 1:20 to 1:40, which far exceeds those achieved by most current methods.
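The final DPCM stage admits a short sketch. Below is a minimal Python illustration of DPCM applied to a parameter sequence, assuming the pole-zero model parameters arrive as a one-dimensional float array; the quantization step and sample values are hypothetical, not taken from the paper.

import numpy as np

def dpcm_encode(params, step=0.01):
    # Quantize first-order differences; the predictor is the previous
    # *reconstructed* value, so encoder and decoder stay in sync.
    codes, prev = [], 0.0
    for p in params:
        q = int(round((p - prev) / step))
        codes.append(q)
        prev += q * step
    return codes

def dpcm_decode(codes, step=0.01):
    out, prev = [], 0.0
    for q in codes:
        prev += q * step
        out.append(prev)
    return np.array(out)

params = np.array([0.95, 0.93, 0.90, 0.91, 0.94])  # hypothetical model parameters
codes = dpcm_encode(params)
print(codes, dpcm_decode(codes))

Because successive model parameters are strongly correlated, the small integer residuals code far more compactly than the raw parameters.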
Abstract:
We have evaluated techniques of estimating animal density through direct counts using line transects during 1988-92 in the tropical deciduous forests of Mudumalai Sanctuary in southern India for four species of large herbivorous mammals, namely, chital (Axis axis), sambar (Cervus unicolor), Asian elephant (Elephas maximus) and gaur (Bos gaurus). Density estimates derived from the Fourier Series and the Half-Normal models consistently had the lowest coefficient of variation. These two models also generated similar mean density estimates. For the Fourier Series estimator, appropriate cut-off widths for analysing line transect data for the four species are suggested. Grouping data into various distance classes did not produce any appreciable differences in estimates of mean density or their variances, although model fit is generally better when data are placed in fewer groups. The sampling effort needed to achieve a desired precision (coefficient of variation) in the density estimate is derived. A sampling effort of 800 km of transects returned a 10% coefficient of variation on the estimate for chital; for the other species a higher effort was needed to achieve this level of precision. There was no statistically significant relationship between detectability of a group and the size of the group for any species. Density estimates along roads were generally significantly different from those in the interior of the forest, indicating that road-side counts may not be appropriate for most species.
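To make the half-normal estimator concrete, the sketch below computes a density from perpendicular sighting distances using the standard half-normal detection function; the data are synthetic placeholders, not the Mudumalai measurements.

import numpy as np

def halfnormal_density(distances_km, effort_km, mean_group_size=1.0):
    # Estimate animals per square km from perpendicular sighting distances.
    x = np.asarray(distances_km, dtype=float)
    n = x.size
    sigma2 = np.sum(x**2) / n           # half-normal scale, maximum likelihood
    mu = np.sqrt(np.pi * sigma2 / 2.0)  # effective strip half-width
    groups_per_km2 = n / (2.0 * effort_km * mu)
    return groups_per_km2 * mean_group_size

# Hypothetical data: 40 group sightings on 50 km of transects.
rng = np.random.default_rng(0)
d = np.abs(rng.normal(0.0, 0.03, size=40))  # perpendicular distances in km
print(halfnormal_density(d, effort_km=50.0, mean_group_size=6.0))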
Abstract:
Predictions of two popular closed-form models for unsaturated hydraulic conductivity (K) are compared with in situ measurements made in a sandy loam field soil. Whereas the Van Genuchten model estimates were very close to field-measured values, the Brooks-Corey model predictions were higher by about one order of magnitude in the wetter range. Estimation of the parameters of the Van Genuchten soil moisture characteristic (SMC) equation, however, involves the use of non-linear regression techniques. The Brooks-Corey SMC equation has the advantage of being amenable to the application of linear regression techniques for estimation of its parameters from retention data. A conversion technique, whereby known Brooks-Corey model parameters may be converted into Van Genuchten model parameters, is formulated. The proposed conversion algorithm may be used to obtain the parameters of the preferred Van Genuchten model from in situ retention data, without the use of non-linear regression techniques.
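One widely used asymptotic matching illustrates how such a conversion can work; this is a standard textbook mapping stated here for orientation, not necessarily the conversion technique formulated in the paper. Matching the two effective-saturation curves at large suction,

$$S_e^{\mathrm{BC}}(h)=\left(\frac{h_b}{h}\right)^{\lambda},\qquad S_e^{\mathrm{VG}}(h)=\left[1+(\alpha h)^n\right]^{-m}\approx(\alpha h)^{-mn}\quad(\alpha h\gg 1),$$

and imposing the Mualem constraint $m = 1 - 1/n$ gives

$$n=\lambda+1,\qquad m=\frac{\lambda}{\lambda+1},\qquad \alpha\approx\frac{1}{h_b},$$

where $\lambda$ is the Brooks-Corey pore-size index and $h_b$ the bubbling (air-entry) pressure head.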
Abstract:
This paper presents advanced analytical methodologies, namely the double-G and double-K models, for fracture analysis of concrete specimens made of high strength concrete (HSC, HSC1) and ultra high strength concrete (UHSC). Brief details about the characterization and experimentation of HSC, HSC1 and UHSC are provided. The double-G model is based on an energy concept and couples Griffith's brittle fracture theory with the bridging softening property of concrete. The double-K fracture model is based on the stress intensity factor approach. Various fracture parameters, such as the cohesive fracture toughness (K-Ic(c)), unstable fracture toughness (K-Ic(un)) and initiation fracture toughness (K-Ic(ini)), have been evaluated based on linear elastic fracture mechanics and nonlinear fracture mechanics principles. The double-G and double-K methods use the secant compliance at the peak point of measured P-CMOD curves to determine the effective crack length. A bilinear tension softening model has been employed to account for cohesive stresses ahead of the crack tip. From the studies, it is observed that the fracture parameters obtained using the double-G and double-K models are in good agreement with each other. Crack extension resistance has been estimated using the fracture parameters obtained through the double-K model. It is observed that the values of the crack extension resistance at the critical unstable point are almost equal to the values of the unstable fracture toughness K-Ic(un) of the materials. The computed fracture parameters will be useful for crack growth study, remaining life and residual strength evaluation of concrete structural components.
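These toughness parameters are linked by a compact relation that is standard in the double-K literature and consistent with the description above: the unstable fracture toughness is the sum of the initiation toughness and the cohesive toughness,

$$K_{Ic}^{\mathrm{un}} = K_{Ic}^{\mathrm{ini}} + K_{Ic}^{\mathrm{c}},$$

so a crack begins to propagate when the stress intensity factor reaches $K_{Ic}^{\mathrm{ini}}$ and becomes unstable when it reaches $K_{Ic}^{\mathrm{un}}$; the reported near-equality of the crack extension resistance at the critical point and $K_{Ic}^{\mathrm{un}}$ is consistent with this decomposition.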
Abstract:
The SUSY Les Houches Accord (SLHA) 2 extended the first SLHA to include various generalisations of the Minimal Supersymmetric Standard Model (MSSM) as well as its simplest next-to-minimal version. Here, we propose further extensions to it, to include the most general and well-established see-saw descriptions (types I/II/III, inverse, and linear) in both an effective and a simple gauged extension of the MSSM framework. In addition, we generalise the PDG numbering scheme to reflect the properties of the particles.
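For readers unfamiliar with the accord's file layout, the sketch below parses one SLHA-style BLOCK in Python; the block names and values are generic illustrations of the existing format, not the new see-saw blocks proposed in the paper.

def parse_slha_block(lines, name):
    # Collect 'key value' entries from the named BLOCK, ignoring '#' comments.
    entries, active = {}, False
    for line in lines:
        body = line.split('#', 1)[0].strip()
        if not body:
            continue
        if body.upper().startswith('BLOCK'):
            active = body.split()[1].upper() == name.upper()
            continue
        if active:
            fields = body.split()
            entries[int(fields[0])] = float(fields[1])
    return entries

text = """BLOCK MODSEL  # model selection
    1   1      # illustrative entry
BLOCK MINPAR  # input parameters
    1   125.0  # illustrative value
"""
print(parse_slha_block(text.splitlines(), 'MINPAR'))  # {1: 125.0}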
Abstract:
Authentication protocols are essential for secure communication in mobile ad hoc networks (MANETs). A number of authentication protocols for MANETs have been proposed in the literature which provide the basic authentication service while trying to optimize their performance and resource consumption parameters. A problem with most of these protocols is that the underlying networking environments in which they are applicable have been left unspecified. As a result, the lack of specifications about the networking environments applicable to an authentication protocol for MANETs can be misleading about the performance and the applicability of the protocol. In this paper, we first characterize the networking environment of a MANET as its 'Membership Model', which is defined as a set of specifications related to the 'Membership Granting Server' (MGS) and the 'Membership Set Pattern' (MSP) of the MANET. We then identify various types of possible membership models for a MANET. In order to illustrate that, while designing an authentication protocol for a MANET, it is necessary to consider the underlying membership model of the MANET, we study a set of six representative authentication protocols and analyze their applicability for the membership models enumerated in this paper. The analysis shows that the same protocol may not perform equally well in all membership models. In addition, there may be membership models which are important from the point of view of users, but for which no authentication protocol is available.
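A hypothetical encoding makes the idea concrete: a membership model pairs an MGS specification with an MSP specification, and a protocol can declare which pairs it supports. The enumeration values below are illustrative placeholders, not the paper's actual classification.

from dataclasses import dataclass
from enum import Enum

class MGS(Enum):
    ONLINE = "membership granted by a reachable server"
    OFFLINE = "membership fixed before deployment"
    NONE = "no membership granting server"

class MSP(Enum):
    STATIC = "membership set fixed over the network's lifetime"
    DYNAMIC = "nodes may join or leave over time"

@dataclass(frozen=True)
class MembershipModel:
    mgs: MGS
    msp: MSP

# An authentication protocol declares the models it supports; checking a
# deployment against this set is then a simple membership test.
supported = {MembershipModel(MGS.OFFLINE, MSP.STATIC)}
print(MembershipModel(MGS.ONLINE, MSP.DYNAMIC) in supported)  # False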
Abstract:
Bacteria have evolved to survive the ever-changing environment using intriguing mechanisms of quorum sensing (QS). Very often, QS facilitates the formation of biofilms, which help bacteria persist longer, and the formation of such biofilms is regulated by c-di-GMP, a well-known second messenger also found in mycobacteria. Several methods have been developed to study c-di-GMP signaling pathways in a variety of bacteria. In this review, we have attempted to highlight the connection between c-di-GMP, biofilm formation and QS in mycobacteria, and several methods that have helped in better understanding c-di-GMP signaling.
Abstract:
This paper presents an overview of the issues in precisely defining, specifying and evaluating the dependability of software, particularly in the context of computer controlled process systems. Dependability is intended to be a generic term embodying various quality factors and is useful for both software and hardware. While the developments in quality assurance and reliability theories have proceeded mostly in independent directions for hardware and software systems, we present here the case for developing a unified framework of dependability, a facet of operational effectiveness of modern technological systems, and develop a hierarchical systems model helpful in clarifying this view. In the second half of the paper, we survey the models and methods available for measuring and improving software reliability. The nature of software "bugs", the failure history of the software system in the various phases of its lifecycle, the reliability growth in the development phase, estimation of the number of errors remaining in the operational phase, and the complexity of the debugging process have all been considered to varying degrees of detail. We also discuss the notion of software fault-tolerance, methods of achieving it, and the status of other measures of software dependability such as maintainability, availability and safety.
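As one concrete instance of the reliability growth and residual-error ideas surveyed, the sketch below fits the classical Goel-Okumoto model, an illustrative choice rather than one singled out by the paper, to hypothetical failure-count data: the expected cumulative failures by time t are m(t) = a(1 - e^(-bt)), so a estimates the total number of errors and a - m(t) the errors remaining.

import numpy as np
from scipy.optimize import curve_fit

def m(t, a, b):
    # Goel-Okumoto mean value function: expected cumulative failures by time t.
    return a * (1.0 - np.exp(-b * t))

# Hypothetical cumulative failure counts at weekly intervals.
t = np.arange(1, 11, dtype=float)
failures = np.array([5, 9, 13, 15, 18, 19, 21, 22, 22, 23], dtype=float)

(a_hat, b_hat), _ = curve_fit(m, t, failures, p0=(30.0, 0.1))
print(f"estimated total errors {a_hat:.1f}, remaining {a_hat - failures[-1]:.1f}")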
Abstract:
The problem of time variant reliability analysis of existing structures subjected to stationary random dynamic excitations is considered. The study assumes that samples of the dynamic response of the structure, under the action of external excitations, have been measured at a set of sparse points on the structure. The utilization of these measurements in updating reliability models, postulated prior to making any measurements, is considered. This is achieved by using dynamic state estimation methods which combine results from Markov process theory and Bayes' theorem. The uncertainties present in the measurements as well as in the postulated model for the structural behaviour are accounted for. The samples of external excitations are taken to emanate from known stochastic models, and allowance is made for the ability (or lack of it) to measure the applied excitations. The future reliability of the structure is modeled using the expected structural response conditioned on all the measurements made. This expected response is shown to have a time varying mean and a random component that can be treated as being weakly stationary. For linear systems, an approximate analytical solution for the problem of reliability model updating is obtained by combining theories of the discrete Kalman filter and level crossing statistics. For the case of nonlinear systems, the problem is tackled by combining particle filtering strategies with data based extreme value analysis. In all these studies, the governing stochastic differential equations are discretized using the strong forms of Ito-Taylor discretization schemes. The possibility of using conditional simulation strategies, when applied external actions are measured, is also considered. The proposed procedures are exemplified by considering the reliability analysis of a few low-dimensional dynamical systems based on synthetically generated measurement data. The performance of the procedures developed is also assessed based on a limited amount of pertinent Monte Carlo simulations.
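For the linear case, the discrete Kalman filter at the heart of the updating scheme can be sketched in a few lines; this is the generic textbook form with illustrative matrix names, not the paper's specific discretization of the structural model.

import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    # One predict/update cycle: state estimate x, covariance P, measurement z.
    # Predict through the (discretized) structural model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the sparse measurement, weighted by the Kalman gain.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

The updated mean and covariance then feed the level-crossing statistics from which the time-variant reliability is read off.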
Abstract:
Six models (simulators) are formulated and developed with all possible combinations of pressure and saturation of the phases as primary variables. A comparative study of the six simulators with two numerical methods, the conventional simultaneous method and the modified sequential method, is carried out. The results of the numerical models are compared with laboratory experimental results to study the accuracy of the models, especially in heterogeneous porous media. From the study it is observed that the simulator using the pressure and saturation of the wetting fluid (PW, SW formulation) is the best among the models tested. Many simulators with the nonwetting phase as one of the primary variables did not converge when used with the simultaneous method. Based on simulator 1 (PW, SW formulation), a comparison of different solution methods, namely the simultaneous method, the modified sequential method and the adaptive solution modified sequential method, is carried out on four test problems including heterogeneous and randomly heterogeneous problems. It is found that the modified sequential and adaptive solution modified sequential methods could halve the memory requirement, and the CPU time required by these methods is much less than that of the simultaneous method. It is also found that the simulator with PNW and PW as the primary variables, which had convergence problems with the simultaneous method, converged using both the modified sequential method and the adaptive solution modified sequential method. The present study indicates that the pressure and saturation formulation along with the adaptive solution modified sequential method is the best among the different simulators and methods tested.
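For orientation, the governing equations in the preferred PW, SW formulation can be written in the standard two-phase form below; this is a textbook statement consistent with, but not quoted from, the paper:

$$\frac{\partial(\phi S_w)}{\partial t}=\nabla\cdot\left[\frac{k\,k_{rw}(S_w)}{\mu_w}\left(\nabla p_w-\rho_w g\,\nabla z\right)\right]+q_w,$$

$$\frac{\partial\left(\phi(1-S_w)\right)}{\partial t}=\nabla\cdot\left[\frac{k\,k_{rn}(S_w)}{\mu_n}\left(\nabla p_n-\rho_n g\,\nabla z\right)\right]+q_n,\qquad p_n=p_w+p_c(S_w),$$

so that once the wetting-phase pressure $p_w$ and saturation $S_w$ are chosen as primary variables, the nonwetting-phase pressure and saturation follow from the capillary pressure relation and the constraint $S_n = 1 - S_w$.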
Abstract:
It is shown, in the composite fermion models studied by 't Hooft and others, that the requirements of Adler-Bell-Jackiw anomaly matching and n-independence are sufficient to fix the indices of composite representations. The third requirement, namely that of decoupling relations, follows from these two constraints in such models and hence is inessential.