155 results for Test Set


Relevance: 20.00%

Publisher:

Abstract:

In the present paper, a constitutive model is proposed for cemented soils in which the cementation component and the frictional component are treated separately and then added together to obtain the overall response. The modified Cam clay model is used to predict the frictional resistance, and an elasto-plastic strain-softening model is proposed for the cementation component. The rectangular isotropic yield curve proposed by Vatsala (1995) for the bond component has been modified to account for the anisotropy generally observed in natural soft cemented soils. The model is used here to predict the experimental results of extension tests on soft cemented soils, whereas compression test results are presented elsewhere. The model predictions compare quite satisfactorily with the observed response. Only a few input parameters are required, all well defined and easily determined, and the model uses an associated flow rule.
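The additive decomposition described above (overall response = frictional part + cementation part) can be sketched as follows. The hardening and softening functions and all parameter values here are illustrative assumptions, not the constitutive relations of the paper:

```python
import math

def frictional_stress(strain, q_ult=100.0, e50=0.01):
    # Hyperbolic hardening stand-in for the frictional (modified Cam clay) part
    return q_ult * strain / (e50 + strain)

def cementation_stress(strain, q_bond=40.0, e_peak=0.005):
    # Strain-softening stand-in for the bond component:
    # rises to a peak at e_peak, then degrades as cementation bonds break
    return q_bond * (strain / e_peak) * math.exp(1.0 - strain / e_peak)

def total_stress(strain):
    # Overall response = frictional component + cementation component
    return frictional_stress(strain) + cementation_stress(strain)
```

At small strains the bond term dominates the response; at large strains it decays and the total converges to the purely frictional curve, mimicking the destructuration of a cemented soil.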

Relevance: 20.00%

Publisher:

Abstract:

Artificial Neural Networks (ANNs) have recently been proposed as an alternative method for solving certain traditional problems in power systems where conventional techniques have not achieved the desired speed, accuracy or efficiency. This paper presents an application of ANNs whose aim is fast voltage stability margin assessment of a power network in an energy control centre (ECC), with a reduced number of appropriate inputs. The L-index has been used for assessing the voltage stability margin. Investigations are carried out on the influence of the information encompassed in the input and target output vectors on the learning time and test performance of a multilayer perceptron (MLP) based ANN model. An LP-based algorithm for voltage stability improvement is used for generating meaningful training patterns in the normal operating range of the system. From the generated set, appropriate training patterns are selected based on a statistical correlation process, a sensitivity matrix approach, a contingency ranking approach and the concentric relaxation method. Simulation results on a 24-bus EHV system, a 30-bus modified IEEE system, and an 82-bus Indian power network are presented for illustration.
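The L-index referred to above is defined for a load bus j as L_j = |1 - Σ_i F_ji V_i / V_j| over the generator buses i, with F = -inv(Y_LL) Y_LG; it approaches 1 at the voltage stability limit. A minimal sketch on an invented two-bus system (one generator, one load, so F collapses to a scalar):

```python
import cmath, math

# Two-bus system: generator bus feeds a load bus through one line (values assumed)
y = 1.0 / complex(0.01, 0.1)   # series admittance of the line (p.u.)
Y_LL = y                        # load-bus self admittance
Y_LG = -y                       # load-to-generator mutual admittance
F = -Y_LG / Y_LL                # scalar here; -inv(Y_LL) @ Y_LG in general

V_gen = complex(1.0, 0.0)                          # generator voltage (p.u.)
V_load = 0.95 * cmath.exp(-1j * math.radians(5))   # load voltage (p.u., assumed)

# L-index of the load bus; values well below 1 indicate an adequate margin
L = abs(1.0 - F * V_gen / V_load)
```

For this operating point L works out to about 0.10, i.e. far from the stability limit; as loading depresses the load-bus voltage, L climbs toward 1.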

Relevance: 20.00%

Publisher:

Abstract:

This paper addresses the problem of curtailing the number of control actions using a fuzzy expert approach for voltage/reactive power dispatch. It presents an approach based on fuzzy set theory for reactive power control, with the purpose of improving the voltage profile of a power system. The sensitivities of the load-bus voltage deviations from desired values with respect to the reactive power control variables form the basis of the proposed Fuzzy Logic Control (FLC), which minimizes those deviations. The control variables considered are switchable VAR compensators, On Load Tap Changing (OLTC) transformers and generator excitations. Voltage deviations and control variables are translated into fuzzy set notation to formulate the relation between voltage deviations and the controlling ability of the control devices. The developed fuzzy system is tested on a few simulated practical Indian power systems and on the modified IEEE 30-bus system. Its performance is compared with a conventional optimization technique, and the results obtained are encouraging. Results for a modified IEEE 30-bus test system and for a 205-node equivalent EHV system, part of the Indian southern grid, are presented for illustration. The proposed fuzzy-expert technique is found suitable for on-line application in an energy control centre, as the solution is obtained quickly, with significant speedups and few controllers.
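The fuzzification step described above (translating voltage deviations into fuzzy sets and back into crisp control corrections) might look like the following minimal sketch; the membership sets, spans and output actions are all invented for illustration, not the paper's rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def control_correction(dv):
    """Map a bus voltage deviation (p.u.) to a VAR-source correction (p.u.)."""
    # Fuzzify the deviation into three assumed linguistic sets
    low = tri(dv, -0.10, -0.05, 0.0)     # voltage too low
    ok = tri(dv, -0.02, 0.0, 0.02)       # acceptable
    high = tri(dv, 0.0, 0.05, 0.10)      # voltage too high
    # Rule outputs: inject VARs when low, do nothing when ok, absorb when high
    actions = {0.05: low, 0.0: ok, -0.05: high}
    # Weighted-average (centroid-like) defuzzification to a crisp output
    num = sum(a * w for a, w in actions.items())
    den = sum(actions.values())
    return num / den if den else 0.0
```

A deviation of -0.05 p.u. fully activates the "too low" set and yields a positive VAR injection, while deviations inside the acceptable band produce no control action, which is how the fuzzy formulation curtails the number of actions.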

Relevance: 20.00%

Publisher:

Abstract:

In species-rich assemblages, differential utilization of vertical space can be driven by resource availability. For animals that communicate acoustically over long distances under habitat-induced constraints, access to an effective transmission channel is a valuable resource. The acoustic adaptation hypothesis suggests that habitat acoustics imposes a selective pressure that drives the evolution of both signal structure and choice of calling sites by signalers. This predicts that species-specific signals transmit best in native habitats. In this study, we have tested the hypothesis that vertical stratification of calling heights of acoustically communicating species is driven by acoustic adaptation. This was tested in an assemblage of 12 coexisting species of crickets and katydids in a tropical wet evergreen forest. We carried out transmission experiments using natural calls at different heights from the forest floor to the canopy. We measured signal degradation using 3 different measures: total attenuation, signal-to-noise ratio (SNR), and envelope distortion. Different sets of species supported the hypothesis depending on which attribute of signal degradation was examined. The hypothesis was upheld by 5 species for attenuation and by 3 species each for SNR and envelope distortion. Only 1 species of 12 provided support for the hypothesis by all 3 measures of signal degradation. The results thus provided no overall support for acoustic adaptation as a driver of vertical stratification of coexisting cricket and katydid species.

Relevance: 20.00%

Publisher:

Abstract:

A careful comparison of experimental results reported in the literature reveals different variations of melting temperature even for the same material. Although there are several theoretical models, the thermodynamic model has been used extensively to understand the different variations of size-dependent melting of nanoparticles. Different hypotheses, such as homogeneous melting (HMH), liquid nucleation and growth (LNG) and liquid skin melting (LSM), have been invoked to resolve the different variations of melting temperature reported in the literature. HMH and LNG account for a linear variation, whereas LSM is applied to understand nonlinear behaviour, in the plot of melting temperature against the reciprocal of particle size. A bird's-eye view reveals, however, that experimentalists have mostly used either HMH or LSM, and that no single hypothesis can explain size-dependent melting over the complete range. We therefore describe an approach that can predict the plausible hypothesis for a given data set of size-dependent melting temperatures. A variety of data have been analyzed to ascertain the hypothesis and to test the approach.
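The linear-in-1/d signature distinguishing HMH/LNG from LSM can be checked with a simple least-squares fit of T_m against 1/d. The data points below are synthetic, generated from the assumed form T_m(d) = T_b(1 - C/d) with invented T_b and C, so the fit recovers the inputs exactly:

```python
T_b, C = 1337.0, 1.5                      # assumed bulk melting point (K) and slope constant (nm)
d = [5.0, 10.0, 20.0, 40.0]               # particle diameters (nm), synthetic
x = [1.0 / di for di in d]                # reciprocal particle size
T = [T_b * (1.0 - C / di) for di in d]    # synthetic melting temperatures

# Ordinary least squares for T = intercept + slope * (1/d)
n = len(x)
xm, Tm = sum(x) / n, sum(T) / n
slope = (sum((xi - xm) * (Ti - Tm) for xi, Ti in zip(x, T))
         / sum((xi - xm) ** 2 for xi in x))
intercept = Tm - slope * xm
# For HMH/LNG-like (linear) data the intercept estimates T_b and the
# slope estimates -T_b * C; large residuals would instead point to LSM.
```

On real data, the size of the residuals from this straight line is one simple way to decide which hypothesis a given data set supports.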

Relevance: 20.00%

Publisher:

Abstract:

The solution of a bivariate population balance equation (PBE) for aggregation of particles necessitates covering a large 2-d domain: a correspondingly large number of discretized equations for particle populations on pivots (representative sizes for bins) are solved, although in the end only a relatively small number of pivots participate in the evolution process. In the present work, we initiate the solution of the governing PBE on a small set of pivots that can represent the initial size distribution. New pivots are added to expand the computational domain in the directions in which the evolving size distribution advances. A self-sufficient set of rules is developed to automate the addition of pivots, taken from an underlying X-grid formed by the intersection of lines of constant composition and constant particle mass. To test the robustness of the rule set, simulations carried out with pivotwise expansion of the X-grid are compared with those obtained using sufficiently large fixed X-grids, for a number of composition-independent and composition-dependent aggregation kernels and initial conditions. The two techniques lead to identical predictions, with the former requiring only a fraction of the computational effort. The rule set automatically reduces aggregation of particles of the same composition to a 1-d problem. A midway change in the direction of expansion of the domain, effected by the addition of particles of different mean composition, is captured correctly by the rule set. The evolving shape of the computational domain carries with it the signature of the aggregation process, which can be insightful under complex and time-dependent aggregation conditions.
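The idea of adding pivots only where the distribution advances can be illustrated in a 1-d, constant-kernel simplification: populations live in a dict keyed by pivot mass, and a pivot enters the computation only when aggregation first sends mass to it. This is only a sketch of lazy pivot addition; the paper's 2-d X-grid rule set is not reproduced here:

```python
def euler_step(n, dt, beta=1.0):
    """One explicit Euler step of discrete Smoluchowski aggregation.
    n maps pivot mass k -> number density n_k; constant kernel beta."""
    dn = {}
    pivots = list(n)
    for i in pivots:
        for j in pivots:
            # birth of (i+j)-particles; the 0.5 avoids double-counting pairs
            dn[i + j] = dn.get(i + j, 0.0) + 0.5 * beta * n[i] * n[j]
            # death of i-particles consumed by aggregation with j-particles
            dn[i] = dn.get(i, 0.0) - beta * n[i] * n[j]
    for k, r in dn.items():
        n[k] = n.get(k, 0.0) + dt * r   # pivot k is created here if absent
    return n

n = {1: 1.0}              # start from monomers only: a single active pivot
for _ in range(10):
    n = euler_step(n, dt=0.05)
mass = sum(k * v for k, v in n.items())   # aggregation conserves total mass
```

The active pivot set grows outward from the initial distribution exactly as the simulation needs it, which is the computational saving the paper exploits in two dimensions.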

Relevance: 20.00%

Publisher:

Abstract:

A three-dimensional digital model of a representative human kidney is needed for a surgical simulator capable of simulating laparoscopic surgery involving the kidney. Buying a three-dimensional computer model of a representative human kidney, or reconstructing one from an image sequence using commercial software, both cost money (sometimes a significant amount). In this paper, the author shows that one can obtain a three-dimensional surface model of a human kidney using images from the Visible Human Data Set and a few free software packages (ImageJ, ITK-SNAP, and MeshLab in particular). Neither the images from the Visible Human Data Set nor the software packages used here cost anything. Hence, the practice of extracting the geometry of a representative human kidney at no cost, as illustrated in the present work, is an alternative to expensive commercial software or to the purchase of a digital model.

Relevance: 20.00%

Publisher:

Abstract:

This work aims at the dimensional reduction of non-linear isotropic hyperelastic plates in an asymptotically accurate manner. The problem is both geometrically and materially non-linear: the geometric non-linearity is handled by allowing for finite deformations and generalized warping, while the material non-linearity is incorporated through a hyperelastic material model. The development, based on the Variational Asymptotic Method (VAM) with moderate strains and a very small ratio of thickness to shortest wavelength of the deformation along the plate reference surface as small parameters, begins with three-dimensional (3-D) non-linear elasticity and mathematically splits the analysis into a one-dimensional (1-D) through-the-thickness analysis and a two-dimensional (2-D) plate analysis. The major contributions of this paper are the derivation of closed-form analytical expressions for the warping functions and stiffness coefficients, and a set of recovery relations to express approximately the 3-D displacement, strain and stress fields. Consistent with the 2-D non-linear constitutive laws, a 2-D plate theory and the corresponding finite element program have been developed. The present theory is validated against a standard test case, with which the results match well, and distributions of 3-D results are provided for another test case.

Relevance: 20.00%

Publisher:

Abstract:

Ensuring reliable operation over an extended period of time is one of the biggest challenges facing present-day electronic systems. The increased vulnerability of components to atmospheric particle strikes poses a major threat to attaining the reliability required for various mission-critical applications. Various soft error mitigation methodologies exist to address this reliability challenge; the general goal is a methodology with an acceptable implementation overhead for a given error tolerance level. This overhead can be reduced by taking advantage of derating effects such as logical derating, electrical derating and timing window derating, and/or by exploiting application redundancy, e.g., redundancy in the firmware/software executing on the hardened hardware. In this paper, we analyze the impact of the various derating factors and show how they can be profitably employed to reduce the hardware overhead needed to implement a given level of soft error robustness. The analysis is performed on a set of benchmark circuits using the delayed capture methodology. Experimental results show up to a 23% reduction in hardware overhead when individual and combined derating factors are considered.
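The way the derating effects compound can be illustrated with simple arithmetic: each factor is the fraction of raw faults that survives one masking mechanism, and the effective error rate is the raw rate times their product. The numbers below are invented for illustration:

```python
raw_ser = 1000.0      # raw soft error rate (FIT), assumed
logical = 0.4         # fraction propagating past logical masking (assumed)
electrical = 0.6      # fraction surviving electrical attenuation (assumed)
timing_window = 0.5   # fraction latched within the vulnerable timing window (assumed)

# Effective rate after all three derating mechanisms act in sequence
effective_ser = raw_ser * logical * electrical * timing_window   # about 120 FIT

# Hardening only needs to bring this residual rate, not the raw rate,
# below the target, which is how derating awareness cuts protection overhead.
```

Here the combined derating leaves only 12% of raw faults observable, so protection sized against the raw rate would be heavily over-designed.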

Relevance: 20.00%

Publisher:

Abstract:

In this article, we investigate the performance of a volume integral equation code on the BlueGene/L system. The volume integral equation (VIE) is solved for homogeneous and inhomogeneous dielectric objects for radar cross section (RCS) calculation in a highly parallel environment. Pulse basis functions and the point matching technique are used to convert the volume integral equation into a set of simultaneous linear equations, which is solved using the parallel numerical library ScaLAPACK on IBM's distributed-memory supercomputer BlueGene/L with different numbers of processors, to compare the speed-up and test the scalability of the code.
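The discretization step above (pulse basis plus point matching turning an integral equation into a dense linear system Z·f = g) can be sketched on a toy 1-d integral equation; the kernel and excitation are invented, and a plain serial Gaussian elimination stands in for the parallel ScaLAPACK solver:

```python
def solve(Z, g):
    """Gaussian elimination with partial pivoting (toy stand-in for ScaLAPACK)."""
    n = len(g)
    A = [row[:] + [gi] for row, gi in zip(Z, g)]   # augmented matrix
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))   # partial pivot
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            m = A[r][c] / A[c][c]
            A[r] = [a - m * b for a, b in zip(A[r], A[c])]
    f = [0.0] * n
    for r in range(n - 1, -1, -1):                 # back substitution
        f[r] = (A[r][n] - sum(A[r][c] * f[c] for c in range(r + 1, n))) / A[r][r]
    return f

# Pulse cells on [0, 1] with match points at cell centres:
# integral of K(x_i, x') f(x') dx' = g(x_i) becomes sum_j Z[i][j] f[j] = g[i]
N, h = 4, 0.25
xs = [(i + 0.5) * h for i in range(N)]
Z = [[h / (1.0 + abs(xi - xj)) for xj in xs] for xi in xs]   # assumed smooth kernel
g = [1.0] * N                                                # assumed excitation
f = solve(Z, g)
```

In the paper's setting the same structure holds in 3-d with a dyadic Green's function kernel, producing the large dense system whose parallel solution is being benchmarked.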

Relevance: 20.00%

Publisher:

Abstract:

This paper considers the problem of weak signal detection in the presence of navigation data bits for Global Navigation Satellite System (GNSS) receivers. Typically, a set of partial coherent integration outputs are non-coherently accumulated to combat the effects of model uncertainties such as the presence of navigation data-bits and/or frequency uncertainty, resulting in a sub-optimal test statistic. In this work, the test-statistic for weak signal detection is derived in the presence of navigation data-bits from the likelihood ratio. It is highlighted that averaging the likelihood ratio based test-statistic over the prior distributions of the unknown data bits and the carrier phase uncertainty leads to the conventional Post Detection Integration (PDI) technique for detection. To improve the performance in the presence of model uncertainties, a novel cyclostationarity based sub-optimal PDI technique is proposed. The test statistic is analytically characterized, and shown to be robust to the presence of navigation data-bits, frequency, phase and noise uncertainties. Monte Carlo simulation results illustrate the validity of the theoretical results and the superior performance offered by the proposed detector in the presence of model uncertainties.
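The conventional PDI statistic mentioned above (non-coherent accumulation of partial coherent integration outputs) can be sketched as follows; the bit pattern, amplitude and block length are invented, the samples are noise-free to keep the illustration deterministic, and the proposed cyclostationarity-based detector is not reproduced:

```python
def pdi_statistic(samples, block):
    """Non-coherent accumulation: sum of |coherent block sums|^2."""
    return sum(abs(sum(samples[s:s + block])) ** 2
               for s in range(0, len(samples), block))

block = 20                                    # partial coherent integration length,
                                              # kept within one data-bit interval
bits = [1, -1, 1, 1, -1, 1, -1, -1, 1, -1]    # unknown navigation data bits (example)
samples = [0.5 * b for b in bits for _ in range(block)]   # assumed signal amplitude

blockwise = pdi_statistic(samples, block)     # robust: squaring removes bit signs
full = pdi_statistic(samples, len(samples))   # one long coherent sum, bits cancel it
```

Because the bit signs here sum to zero, the single long coherent integration collapses to nothing while the blockwise PDI statistic retains the full signal energy, which is exactly why partial integration is used when data bits are unknown.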

Relevance: 20.00%

Publisher:

Abstract:

Analysis of high-resolution satellite images has been an important research topic for urban analysis, and automatic road network extraction is one of its important tasks. Two approaches for road extraction, based on the Level Set and Mean Shift methods, are proposed. Extracting roads directly from the original image is difficult and computationally expensive owing to the presence of other road-like features with straight edges. The image is therefore preprocessed to improve tolerance by reducing noise (buildings, parking lots, vegetation regions and other open spaces): roads are first extracted as elongated regions, and nonlinear noise segments are removed using a median filter (based on the fact that road networks constitute a large number of small linear structures). Road extraction is then performed using the Level Set and Mean Shift methods. Finally, the accuracy of the extracted road images is evaluated using quality measures. The 1-m resolution IKONOS data have been used for the experiment.
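The noise-removal step above relies on a median filter suppressing isolated pixels while preserving large connected structures. A minimal 3x3 version on a toy binary image (the image content is invented; borders are simply left unchanged):

```python
def median3x3(img):
    """3x3 median filter on a 2-d list of values; borders left unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]          # median of the 9 neighbourhood values
    return out

img = [[0] * 9 for _ in range(9)]
for y in range(3, 6):
    for x in range(1, 8):
        img[y][x] = 1        # a wide, elongated road-like region
img[1][4] = 1                # an isolated noise pixel

clean = median3x3(img)       # noise pixel removed, road interior preserved
```

Note that the structure must be wider than the filter window to survive: a one-pixel-wide line would also be erased, which is why roads are first extracted as elongated regions before this step.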

Relevance: 20.00%

Publisher:

Abstract:

Subsurface lithology and seismic site classification of the Lucknow urban centre, located in the central part of the Indo-Gangetic Basin (IGB), are presented based on detailed shallow subsurface investigations and borehole analysis. These were carried out through 47 seismic surface wave tests using multichannel analysis of surface waves (MASW) and 23 boreholes drilled up to 30 m with standard penetration test (SPT) N values. Subsurface lithology profiles drawn from the drilled boreholes show low- to medium-compressibility clay and silty to poorly graded sand down to a depth of 30 m. In addition, deeper borehole records (depth >150 m) were collected from the Lucknow Jal Nigam (Water Corporation), Government of Uttar Pradesh, to understand the deeper subsoil stratification. These records show clay mixed with sand and Kankar at some locations down to a depth of 150 m, followed by layers of sand, clay and Kankar up to 400 m. Based on the available details, shallow and deep cross-sections through Lucknow are presented. Shear wave velocity (SWV) and N-SPT values were measured for the study area using MASW and SPT testing, and values measured at the same locations were found to be comparable. These values were used to estimate 30 m average values of N-SPT (N-30) and SWV (V-s(30)) for seismic site classification of the study area as per the National Earthquake Hazards Reduction Program (NEHRP) soil classification system. Based on this classification, the study area falls into site classes C and D according to V-s(30), and site classes D and E according to N-30. The possibility of larger amplification during future seismic events is highlighted for the major part of the study area that comes under site classes D and E. The mismatch between the site classes based on N-30 and V-s(30) also raises the question of the suitability of the NEHRP classification system for the study region.
Further, 17 sets of SPT and SWV data are used to develop a correlation between N-SPT and SWV. This represents a first attempt at seismic site classification, and at correlating N-SPT and SWV, in the Indo-Gangetic Basin.
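The V-s(30) value behind the classification above is the travel-time-averaged shear wave velocity over the top 30 m, V_s(30) = 30 / Σ(d_i / v_i). A minimal computation with an invented layer profile, using the standard NEHRP class boundaries (180 m/s for D/E, 360 m/s for C/D, 760 m/s for B/C; class A above 1500 m/s is omitted for brevity):

```python
def vs30(layers):
    """layers: list of (thickness_m, shear_wave_velocity_m_per_s) summing to 30 m."""
    assert sum(t for t, _ in layers) == 30.0
    return 30.0 / sum(t / v for t, v in layers)   # travel-time average

def nehrp_class(v):
    """Simplified NEHRP site class from Vs30 (class A not distinguished)."""
    if v > 760.0:
        return "B"
    if v > 360.0:
        return "C"
    if v > 180.0:
        return "D"
    return "E"

profile = [(5.0, 150.0), (10.0, 220.0), (15.0, 400.0)]   # assumed soil column
v = vs30(profile)          # about 258 m/s for this profile
site = nehrp_class(v)      # hence site class D
```

The harmonic (travel-time) averaging is what makes a soft near-surface layer pull Vs30 down sharply, which is why shallow soft clays push parts of a study area into classes D and E.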

Relevance: 20.00%

Publisher:

Abstract:

Burn-in testing of power converters consumes a large amount of energy, which increases the cost of testing and certification in medium- and high-power applications. A simple test configuration is presented for burn-in testing of a PWM rectifier induction motor drive, using a Doubly Fed Induction Machine (DFIM) to circulate power back to the grid. The configuration uses only one power electronic converter, namely the converter under test, and a simple volt-per-hertz control of the drive is sufficient for conducting the test. The method ensures soft synchronization of the DFIM and the Squirrel Cage Induction Machine (SCIM): the rotor terminal voltage of the DFIM is measured and used as an indication of the speed mismatch between the DFIM and the SCIM, and synchronization is performed when the DFIM rotor voltage is at its minimum. Analysis of the DFIM characteristics confirms that such a test can be performed effectively, with smooth start-up and loading of the test setup. After synchronization, the speed command to the SCIM is changed to load the setup in the motoring or regenerative mode of operation. Experimental results are presented that validate the proposed test method.
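The volt-per-hertz control mentioned above keeps the stator flux roughly constant by scaling the voltage command with the frequency command. A minimal sketch with assumed machine ratings and an assumed low-speed boost (the paper's actual drive parameters are not given):

```python
V_RATED, F_RATED = 415.0, 50.0   # assumed ratings: line voltage (V), frequency (Hz)
V_BOOST = 10.0                   # assumed low-speed boost to cover the stator IR drop

def vf_command(f_cmd):
    """Voltage command proportional to frequency, clamped at rated voltage."""
    v = V_BOOST + (V_RATED - V_BOOST) * abs(f_cmd) / F_RATED
    return min(v, V_RATED)       # above base speed, voltage stays at rated
```

Holding V/f (and hence flux) approximately constant is what allows the simple open-loop speed command changes used here to move the setup smoothly between motoring and regenerative loading.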