18 results for 340402 Econometric and Statistical Methods

at Indian Institute of Science - Bangalore - India


Relevance: 100.00%

Abstract:

Electrical failure of insulation is known to be an extremal random process: nominally identical pro-rated specimens of equipment insulation, held at constant stress, fail at inordinately different times even under laboratory test conditions. In order to estimate the life of power equipment, it is necessary to run long-duration ageing experiments under accelerated stresses and to acquire and analyze insulation-specific failure data. In the present work, Resin Impregnated Paper (RIP), a relatively new insulation system of choice for transformer bushings, is taken as an example. The failure data have been processed using proven statistical methods, both graphical and analytical. The physical model governing insulation failure at constant accelerated stress is assumed to follow a temperature-dependent inverse power law.
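The abstract does not spell out which graphical method was used; one common choice for constant-stress failure data of this kind is Weibull probability plotting via median-rank regression. The sketch below is a minimal illustration of that technique; the failure-time data and the shape/scale values used in testing are purely hypothetical.

```python
import math

def weibull_fit(times):
    """Fit a 2-parameter Weibull distribution to failure times by
    median-rank regression (a common graphical/least-squares method).
    Returns (shape beta, scale eta)."""
    t = sorted(times)
    n = len(t)
    xs, ys = [], []
    for i, ti in enumerate(t, start=1):
        F = (i - 0.3) / (n + 0.4)              # Bernard's median-rank estimate
        xs.append(math.log(ti))                # x = ln t
        ys.append(math.log(-math.log(1 - F)))  # y = ln(-ln(1 - F))
    # Least-squares line y = beta*x - beta*ln(eta)
    mx = sum(xs) / n
    my = sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    eta = math.exp(mx - my / beta)
    return beta, eta
```

On data that actually follow a Weibull law, the fitted shape parameter indicates whether the failure rate is decreasing (beta < 1), constant, or increasing (beta > 1) with time, which is why this plot is a standard first step in insulation life estimation.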

Relevance: 100.00%

Abstract:

Conformational preferences of thiocarbonohydrazide (H2NNHCSNHNH2) in its basic and N,N′-diprotonated forms are examined by calculating the barrier to internal rotation around the C-N bonds, using theoretical LCAO-MO (ab initio and semiempirical CNDO and EHT) methods. The calculated and experimental results are compared with each other and also with values for N,N′-dimethylthiourea, which is isoelectronic with thiocarbonohydrazide. The suitability of these methods for studying rotational isomerism seems suspect when lone pair interactions are present.

Relevance: 100.00%

Abstract:

We compare two popular methods for estimating the power spectrum from short data windows, namely the adaptive multivariate autoregressive (AMVAR) method and the multitaper method. By analyzing a simulated signal (embedded in a background Ornstein-Uhlenbeck noise process) we demonstrate that the AMVAR method performs better at detecting short bursts of oscillations compared to the multitaper method. However, both methods are immune to jitter in the temporal location of the signal. We also show that coherence can still be detected in noisy bivariate time series data by the AMVAR method even if the individual power spectra fail to show any peaks. Finally, using data from two monkeys performing a visuomotor pattern discrimination task, we demonstrate that the AMVAR method is better able to determine the termination of the beta oscillations when compared to the multitaper method.
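As a much-simplified, scalar stand-in for the adaptive multivariate AR machinery (AMVAR is multivariate and adaptive; the version below is neither), the sketch fits an AR(2) model to one data window via the Yule-Walker equations and evaluates its parametric power spectrum. The signal, model order and window length are illustrative assumptions.

```python
import math

def yule_walker_ar2(x):
    """Estimate AR(2) coefficients from one data window by solving
    the Yule-Walker equations (a fixed-window, univariate analogue
    of the AMVAR approach discussed above)."""
    n = len(x)
    m = sum(x) / n
    x = [v - m for v in x]                     # remove the mean
    def acov(k):                               # biased autocovariance at lag k
        return sum(x[i] * x[i + k] for i in range(n - k)) / n
    r0, r1, r2 = acov(0), acov(1), acov(2)
    # Solve [[r0, r1], [r1, r0]] @ [a1, a2] = [r1, r2]
    det = r0 * r0 - r1 * r1
    a1 = (r0 * r1 - r1 * r2) / det
    a2 = (r0 * r2 - r1 * r1) / det
    return a1, a2

def ar2_spectrum(a1, a2, f):
    """AR(2) power spectral density (unit innovation variance) at
    normalized frequency f in [0, 0.5]."""
    w = 2 * math.pi * f
    h = (complex(1, 0)
         - a1 * complex(math.cos(w), -math.sin(w))
         - a2 * complex(math.cos(2 * w), -math.sin(2 * w)))
    return 1.0 / abs(h) ** 2
```

A parametric AR spectrum like this can resolve a sharp oscillatory peak from a short window, which is the property the abstract exploits when detecting short bursts of oscillations.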

Relevance: 100.00%

Abstract:

Lateral displacement and global stability are the two main stability criteria for soil nail walls. Conventional design methods do not adequately address the deformation behaviour of soil nail walls, owing to the complexity involved in handling a large number of influencing factors. Consequently, limited methods of deformation estimates based on empirical relationships and in situ performance monitoring are available in the literature. It is therefore desirable that numerical techniques and statistical methods are used in order to gain a better insight into the deformation behaviour of soil nail walls. In the present study numerical experiments are conducted using a 2^4 factorial design method. Based on analysis of the maximum lateral deformation and factor-of-safety observations from the numerical experiments, regression models for maximum lateral deformation and factor-of-safety prediction are developed and checked for adequacy. Selection of suitable design factors for the 2^4 factorial design of numerical experiments enabled the use of the proposed regression models over a practical range of soil nail wall heights and in situ soil variability. It is evident from the model adequacy analyses and illustrative example that the proposed regression models provided a reasonably good estimate of the lateral deformation and global factor of safety of the soil nail walls.
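In a 2^4 full factorial design the sixteen runs have orthogonal +/-1 design columns, so each main-effect coefficient of the regression model reduces to a signed average of the responses. The sketch below only illustrates that arithmetic; the response model it is tested against is hypothetical, not the soil-nail-wall factors or coefficients of the study.

```python
from itertools import product

def factorial_effects(y):
    """Estimate the intercept and the four main-effect coefficients of
    a 2^4 full factorial design, exploiting orthogonality of the +/-1
    design columns: beta_j = sum(x_ij * y_i) / 16."""
    runs = list(product([-1, 1], repeat=4))   # 16 runs in standard order
    n = len(runs)
    assert len(y) == n
    intercept = sum(y) / n
    effects = [sum(x[j] * yi for x, yi in zip(runs, y)) / n
               for j in range(4)]
    return intercept, effects
```

Because the columns are orthogonal, main effects are estimated independently of each other (and of two-factor interactions), which is what makes such small designed experiments attractive for screening the dominant influencing factors.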

Relevance: 100.00%

Abstract:

Six models (simulators) are formulated and developed with all possible combinations of pressure and saturation of the phases as primary variables. A comparative study of the six simulators is carried out with two numerical methods: the conventional simultaneous method and a modified sequential method. The results of the numerical models are compared with laboratory experimental results to study the accuracy of the models, especially in heterogeneous porous media. From the study it is observed that the simulator using pressure and saturation of the wetting fluid (PW, SW formulation) is the best among the models tested. Many simulators with the nonwetting phase as one of the primary variables did not converge when used with the simultaneous method. Based on simulator 1 (PW, SW formulation), a comparison of different solution methods, the simultaneous method, the modified sequential method and the adaptive solution modified sequential method, is carried out on four test problems, including heterogeneous and randomly heterogeneous problems. It is found that the modified sequential and adaptive solution modified sequential methods halve the memory requirement, and the CPU time they need is much lower than that of the simultaneous method. It is also found that the simulator with PNW and PW as primary variables, which had convergence problems with the simultaneous method, converged with both the modified sequential and the adaptive solution modified sequential methods. The present study indicates that the pressure-saturation formulation together with the adaptive solution modified sequential method is the best among the simulators and methods tested.
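The simultaneous-versus-sequential contrast can be sketched on a toy two-unknown coupled system (not the multiphase flow equations themselves, which is an illustrative simplification): Newton iteration on the full 2x2 Jacobian, versus solving each equation for "its" unknown with the other lagged and iterating.

```python
def solve_simultaneous(p, s, tol=1e-12):
    """Newton's method on both equations at once (the 'simultaneous'
    strategy): update (p, s) using the full 2x2 Jacobian.
    Toy system: f1 = p + s^2 - 4 = 0, f2 = p*s - 2 = 0."""
    for _ in range(50):
        f1 = p + s * s - 4.0
        f2 = p * s - 2.0
        # Jacobian [[df1/dp, df1/ds], [df2/dp, df2/ds]]
        j11, j12, j21, j22 = 1.0, 2.0 * s, s, p
        det = j11 * j22 - j12 * j21
        dp = (f1 * j22 - f2 * j12) / det
        ds = (j11 * f2 - j21 * f1) / det
        p, s = p - dp, s - ds
        if abs(f1) + abs(f2) < tol:
            break
    return p, s

def solve_sequential(p, s, tol=1e-12):
    """Solve each equation for its own unknown in turn, with the other
    lagged (the 'sequential' strategy), and iterate to convergence."""
    for _ in range(200):
        p_new = 4.0 - s * s        # eq. 1 solved for p, s lagged
        s_new = 2.0 / p_new        # eq. 2 solved for s, p just updated
        converged = abs(p_new - p) + abs(s_new - s) < tol
        p, s = p_new, s_new
        if converged:
            break
    return p, s
```

The sequential strategy never assembles the coupled Jacobian, which is the source of the memory savings the abstract reports, at the cost of trading quadratic Newton convergence for a linear fixed-point rate.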

Relevance: 100.00%

Abstract:

A study of environmental chloride and groundwater balance has been carried out to assess their relative value for measuring average groundwater recharge in a humid climatic environment with a relatively shallow water table. The hybrid water fluctuation method allowed the hydrologic year to be split into a recharge (wet) season and a no-recharge (dry) season, first to appraise the specific yield during the dry season and, second, to estimate recharge from the water table rise during the wet season. This well-established method was then used as a standard to assess the effectiveness of the chloride method in a humid forested environment. An effective specific yield of 0.08 was obtained for the study area; it reflects an effective basin-wide process and is insensitive to local heterogeneities in the aquifer system. The hybrid water fluctuation method gives an average recharge of 87.14 mm/year at the basin scale, which represents 5.7% of the annual rainfall. Recharge estimated by the chloride method varies between 16.24 and 236.95 mm/year, with an average of 108.45 mm/year, or 7% of the mean annual precipitation. The discrepancy between the recharge values estimated by the hybrid water fluctuation and chloride mass balance methods is substantial, which could imply that the chloride mass balance method is ineffective in this humid environment.
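Both estimates reduce to simple balances: R = Sy * Δh for the water-table fluctuation method, and R = P * Cl_rain / Cl_gw for chloride mass balance. In the sketch below, the specific yield (0.08) comes from the abstract, while the water-table rise, rainfall and chloride concentrations are illustrative assumptions chosen so the outputs land near the reported 87.14 and 108.45 mm/year.

```python
def wtf_recharge(specific_yield, rise_mm):
    """Water-table fluctuation estimate: recharge equals specific
    yield times the wet-season water-table rise."""
    return specific_yield * rise_mm

def cmb_recharge(rain_mm, cl_rain, cl_gw):
    """Chloride mass balance estimate: recharge equals precipitation
    scaled by the ratio of chloride in rainfall to chloride in
    groundwater (steady state, no runoff of chloride assumed)."""
    return rain_mm * cl_rain / cl_gw
```

The ratio Cl_rain / Cl_gw is the fraction of rainfall that escapes evapotranspiration, which is why groundwater enriched in chloride implies low recharge.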

Relevance: 100.00%

Abstract:

Three-component chiral derivatization protocols have been developed for H-1, C-13 and F-19 NMR spectroscopic discrimination of chiral diacids by their coordination and self-assembly with optically active (R)-alpha-methylbenzylamine and 2-formylphenylboronic acid or 3-fluoro-2-formylmethylboronic acid. These protocols yield a mixture of diastereomeric imino-boronate esters, which are identified by well-resolved diastereotopic peaks with chemical shift differences of up to 0.6 and 2.1 ppm in the corresponding H-1 and F-19 NMR spectra, without any racemization or kinetic resolution, thereby enabling the determination of enantiopurity. A protocol has also been developed for the discrimination of chiral alpha-methyl amines, using optically pure trans-1,2-cyclohexanedicarboxylic acid in combination with 2-formylphenylboronic acid or 3-fluoro-2-fluoromethylboronic acid. The proposed strategies have been demonstrated on a large number of chiral diacids and chiral alpha-methyl amines.

Relevance: 100.00%

Abstract:

Empirical research available on technology transfer initiatives is either North American or European. Literature over the last two decades shows various research objectives, such as identifying the variables to be measured and the statistical methods to be used in studying university-based technology transfer initiatives. AUTM survey data from 1996 to 2008 provide insightful patterns about North American technology transfer initiatives, and we use these data in our paper. This paper has three sections: a comparison of North American universities with (n=1129) and without medical schools (n=786), an analysis of the top 75th percentile of these samples, and a DEA analysis of these samples. We use 20 variables. Researchers have attempted to classify university-based technology transfer variables into multiple stages, namely disclosures, patents and license agreements. Using the same approach, with minor variations, three stages are defined in this paper. The first stage takes R&D expenditure as input and invention disclosures as output. In the second stage, invention disclosures are the input and patents issued are the output. In the third stage, patents issued are the input and technology transfers are the outcome.
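The paper applies DEA to these stages; as a far simpler illustration of the three-stage structure alone (not of DEA itself), the per-stage output/input ratios can be computed directly. All numbers below are hypothetical.

```python
def stage_ratios(rd_expenditure, disclosures, patents, transfers):
    """Per-stage output/input ratios for the three stages described:
    disclosures per unit of R&D spending, patents per disclosure,
    and transfers/licenses per patent. (The paper itself uses DEA,
    which compares units against an efficient frontier; these raw
    ratios only illustrate how inputs chain into outputs.)"""
    return (disclosures / rd_expenditure,
            patents / disclosures,
            transfers / patents)
```

Chaining the stages this way makes clear that a university can be efficient at converting R&D into disclosures yet inefficient at converting patents into transfers, which is the kind of distinction the multi-stage analysis is designed to expose.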

Relevance: 100.00%

Abstract:

The electronic structure of Nd1-xYxMnO3 (x = 0-0.5) is studied using x-ray absorption near-edge structure (XANES) spectroscopy at the Mn K-edge, along with DFT-based LSDA+U and real-space cluster calculations. The main edge of the spectra does not show any variation with doping. The pre-edge shows two distinct features which become well-separated with doping, and the intensity of the pre-edge decreases with doping. The theoretical XANES were calculated using real-space multiple scattering methods, which reproduce the entire experimental spectra at the main edge as well as the pre-edge. Density functional theory calculations are used to obtain the Mn 4p, Mn 3d and O 2p density of states. For x=0, the site-projected density of states at 1.7 eV above the Fermi energy shows a singular peak of unoccupied e(g) (spin-up) states hybridized with Mn 4p and O 2p states. For x=0.5, this feature develops at a higher energy, is highly delocalized, and overlaps with the 3d spin-down states, which changes the pre-edge intensity. The Mn 4p DOS for both compositions shows considerable differences between the individual p(x), p(y) and p(z) states. For x=0.5, there is a considerable change in the 4p orbital polarization, suggesting changes in the Jahn-Teller effect with doping. (C) 2013 Elsevier Ltd. All rights reserved.

Relevance: 100.00%

Abstract:

Statistical learning algorithms provide a viable framework for geotechnical engineering modeling. This paper describes two statistical learning algorithms applied to site characterization modeling based on standard penetration test (SPT) data. More than 2700 field SPT values (N) have been collected from 766 boreholes spread over an area of 220 sq. km in Bangalore. The N values have been corrected (Nc) for different parameters such as overburden stress, size of borehole, type of sampler and length of connecting rod. In the three-dimensional site characterization model, the function Nc = Nc(X, Y, Z), where X, Y and Z are the coordinates of a point corresponding to an Nc value, is approximated so that the Nc value at any half-space point in Bangalore can be determined. The first algorithm uses the least-squares support vector machine (LSSVM), which is related to a ridge-regression type of support vector machine. The second algorithm uses the relevance vector machine (RVM), which combines the strengths of kernel-based methods and Bayesian theory to establish relationships between a set of input vectors and a desired output. The paper also presents a comparative study between the developed LSSVM and RVM models for site characterization. Copyright (C) 2009 John Wiley & Sons, Ltd.
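The abstract lists the corrections without giving formulas. One widely used overburden correction, assumed here purely for illustration, is the Liao and Whitman form Cn = sqrt(Pa / sigma_v'), with the cap at 2 a common convention; the other corrections the paper applies (borehole size, sampler type, rod length) are omitted from this sketch.

```python
import math

def overburden_corrected_n(n_field, sigma_v_kpa, pa_kpa=95.76):
    """Overburden correction of a field SPT N value:
    Cn = sqrt(Pa / sigma_v'), capped at 2 (Liao & Whitman form,
    with Pa one atmosphere in kPa). Returns the corrected value.
    This is one of several corrections applied in the paper."""
    cn = min(math.sqrt(pa_kpa / sigma_v_kpa), 2.0)
    return cn * n_field
```

Normalizing N to a reference overburden stress is what makes values from boreholes of different depths comparable before they are fed to the LSSVM/RVM spatial models.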

Relevance: 100.00%

Abstract:

Graphenes with varying number of layers can be synthesized by using different strategies. Thus, single-layer graphene is prepared by micromechanical cleavage, reduction of single-layer graphene oxide, chemical vapor deposition and other methods. Few-layer graphenes are synthesized by conversion of nanodiamond, arc discharge of graphite and other methods. In this article, we briefly overview the various synthetic methods and the surface, magnetic and electrical properties of the produced graphenes. Few-layer graphenes exhibit ferromagnetic features along with antiferromagnetic properties, independent of the method of preparation. Aside from the data on electrical conductivity of graphenes and graphene-polymer composites, we also present the field-effect transistor characteristics of graphenes. Only single-layer reduced graphene oxide exhibits ambipolar properties. The interaction of electron donor and acceptor molecules with few-layer graphene samples is examined in detail.

Relevance: 100.00%

Abstract:

Floquet analysis is widely used for small-order systems (say, order M < 100) to find trim results of control inputs and periodic responses, and stability results of damping levels and frequencies. Presently, however, it is practical neither for design applications nor for comprehensive analysis models that lead to large systems (M > 100); the run time on a sequential computer is simply prohibitive. Accordingly, a massively parallel Floquet analysis is developed with emphasis on large systems, and it is implemented on two SIMD or single-instruction, multiple-data computers with 4096 and 8192 processors. The focus of this development is a parallel shooting method with damped Newton iteration to generate trim results; the Floquet transition matrix (FTM) comes out as a byproduct. The eigenvalues and eigenvectors of the FTM are computed by a parallel QR method, and thereby stability results are generated. For illustration, flap and flap-lag stability of isolated rotors are treated by the parallel analysis and by a corresponding sequential analysis with the conventional shooting and QR methods; linear quasisteady airfoil aerodynamics and a finite-state three-dimensional wake model are used. Computational reliability is quantified by the condition numbers of the Jacobian matrices in Newton iteration, the condition numbers of the eigenvalues and the residual errors of the eigenpairs, and reliability figures are comparable in both the parallel and sequential analyses. Compared to the sequential analysis, the parallel analysis reduces the run time of large systems dramatically, and the reduction increases with increasing system order; this finding offers considerable promise for design and comprehensive-analysis applications.
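At its core, the method builds the FTM by integrating the system over one period once per unit initial condition and then examining the FTM's eigenvalues (the Floquet multipliers). A tiny sequential sketch for the Mathieu equation follows; this is a stand-in problem, not the rotor model of the paper, and delta, eps and the step count are illustrative.

```python
import math

def floquet_transition_matrix(delta, eps, steps=4000):
    """Build the 2x2 Floquet transition matrix of the Mathieu
    equation u'' + (delta + eps*cos t) u = 0 over one period T = 2*pi
    by RK4-integrating the two unit-initial-condition solutions
    (a sequential miniature of the shooting/FTM machinery above)."""
    T = 2.0 * math.pi
    h = T / steps

    def deriv(t, u, v):
        return v, -(delta + eps * math.cos(t)) * u

    def integrate(u, v):                      # one period from (u, v)
        t = 0.0
        for _ in range(steps):
            k1u, k1v = deriv(t, u, v)
            k2u, k2v = deriv(t + h / 2, u + h / 2 * k1u, v + h / 2 * k1v)
            k3u, k3v = deriv(t + h / 2, u + h / 2 * k2u, v + h / 2 * k2v)
            k4u, k4v = deriv(t + h, u + h * k3u, v + h * k3v)
            u += h / 6 * (k1u + 2 * k2u + 2 * k3u + k4u)
            v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
            t += h
        return u, v

    c1 = integrate(1.0, 0.0)                  # column from IC (1, 0)
    c2 = integrate(0.0, 1.0)                  # column from IC (0, 1)
    return [[c1[0], c2[0]], [c1[1], c2[1]]]

def is_stable(ftm):
    """Stability requires every Floquet multiplier (eigenvalue of the
    FTM) to lie on or inside the unit circle. Here det(FTM) = 1 (the
    system is traceless), so |trace| <= 2 is the criterion."""
    tr = ftm[0][0] + ftm[1][1]
    return abs(tr) <= 2.0
```

The parallelism in the paper comes from farming out exactly these independent per-initial-condition integrations (and the subsequent QR eigensolve) across processors.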

Relevance: 100.00%

Abstract:

Current scientific research is characterized by increasing specialization, accumulating knowledge at a high speed due to parallel advances in a multitude of sub-disciplines. Recent estimates suggest that human knowledge doubles every two to three years – and with the advances in information and communication technologies, this wide body of scientific knowledge is available to anyone, anywhere, anytime. This may also be referred to as ambient intelligence – an environment characterized by plentiful and available knowledge. The bottleneck in utilizing this knowledge for specific applications is not accessing but assimilating the information and transforming it to suit the needs for a specific application. The increasingly specialized areas of scientific research often have the common goal of converting data into insight allowing the identification of solutions to scientific problems. Due to this common goal, there are strong parallels between different areas of applications that can be exploited and used to cross-fertilize different disciplines. For example, the same fundamental statistical methods are used extensively in speech and language processing, in materials science applications, in visual processing and in biomedicine. Each sub-discipline has found its own specialized methodologies making these statistical methods successful to the given application. The unification of specialized areas is possible because many different problems can share strong analogies, making the theories developed for one problem applicable to other areas of research. It is the goal of this paper to demonstrate the utility of merging two disparate areas of applications to advance scientific research. The merging process requires cross-disciplinary collaboration to allow maximal exploitation of advances in one sub-discipline for that of another. We will demonstrate this general concept with the specific example of merging language technologies and computational biology.

Relevance: 100.00%

Abstract:

Traditional taxonomy based on morphology has often failed in accurate species identification owing to the occurrence of cryptic species, which are reproductively isolated but morphologically identical. Molecular data have thus been used to complement morphology in species identification. The sexual advertisement calls of several groups of acoustically communicating animals are species-specific and can thus complement molecular data as non-invasive tools for identification. Several statistical tools and automated identifier algorithms have been used to investigate the efficiency of acoustic signals in species identification. Despite a plethora of such methods, there is a general lack of knowledge regarding their appropriate usage in specific taxa. In this study, we investigated the performance of two commonly used statistical methods, discriminant function analysis (DFA) and cluster analysis, in identification and classification based on acoustic signals of field cricket species belonging to the subfamily Gryllinae. Using a comparative approach, we evaluated, for both methods, the optimal number of species and calling song characteristics that lead to the most accurate classification and identification. The accuracy of classification using DFA was high and was not affected by the number of taxa used. However, a constraint in using discriminant function analysis is the need for a priori classification of songs. The accuracy of classification using cluster analysis, which does not require a priori knowledge, was maximal for 6-7 taxa and decreased significantly when more than ten taxa were analysed together. We also investigated the efficacy of two novel derived acoustic features in improving the accuracy of identification. Our results show that DFA is a reliable statistical tool for species identification using acoustic signals, and that cluster analysis of acoustic signals in crickets works effectively for species classification and identification.
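A minimal two-class, two-feature version of the DFA step might look like the following. The "songs" are hypothetical feature vectors (carrier frequency in kHz, chirp rate in Hz); a real analysis would use many more call features and species.

```python
def lda_train(class0, class1):
    """Two-class Fisher linear discriminant on 2-D feature vectors:
    direction w = S_pooled^{-1} (m1 - m0), decision threshold at the
    midpoint of the projected class means."""
    def mean(pts):
        n = len(pts)
        return [sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n]

    def scatter(pts, m):                      # within-class scatter matrix
        s = [[0.0, 0.0], [0.0, 0.0]]
        for p in pts:
            d = [p[0] - m[0], p[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
        return s

    m0, m1 = mean(class0), mean(class1)
    s0, s1 = scatter(class0, m0), scatter(class1, m1)
    S = [[s0[i][j] + s1[i][j] for j in range(2)] for i in range(2)]
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    dm = [m1[0] - m0[0], m1[1] - m0[1]]
    w = [(S[1][1] * dm[0] - S[0][1] * dm[1]) / det,
         (-S[1][0] * dm[0] + S[0][0] * dm[1]) / det]
    c = 0.5 * (w[0] * (m0[0] + m1[0]) + w[1] * (m0[1] + m1[1]))
    return w, c

def lda_classify(w, c, p):
    """Assign species 1 if the projection exceeds the threshold."""
    return 1 if w[0] * p[0] + w[1] * p[1] > c else 0
```

The need to supply `class0` and `class1` as labelled training sets is exactly the a priori classification constraint of DFA that the study contrasts with label-free cluster analysis.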