58 results for Statistical modeling technique
at Indian Institute of Science - Bangalore - India
Abstract:
We propose a scheme for the compression of tree structured intermediate code consisting of a sequence of trees specified by a regular tree grammar. The scheme is based on arithmetic coding, and the model that works in conjunction with the coder is automatically generated from the syntactical specification of the tree language. Experiments on data sets consisting of intermediate code trees yield compression ratios ranging from 2.5 to 8, for file sizes ranging from 167 bytes to 1 megabyte.
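A minimal sketch of the grammar-driven modeling idea, not the authors' implementation: a hypothetical regular tree grammar restricts which operators may appear under each nonterminal, so an adaptive model only has to distribute probability over the legal alternatives at each node. The sketch reports the ideal arithmetic-code length (the sum of -log2 of the model probabilities) rather than implementing the coder itself; the grammar and node names are invented for illustration.

```python
import math
from collections import defaultdict

# Hypothetical grammar: nonterminal -> list of (operator, child nonterminals)
GRAMMAR = {
    "Expr": [("ADD", ["Expr", "Expr"]), ("MUL", ["Expr", "Expr"]),
             ("LOAD", ["Addr"]), ("CONST", [])],
    "Addr": [("LOCAL", []), ("GLOBAL", []), ("INDEX", ["Addr", "Expr"])],
}

def ideal_code_length(tree, nonterminal, counts):
    """tree = (operator, [subtrees]); returns bits under an adaptive model."""
    op, children = tree
    alternatives = [a for a, _ in GRAMMAR[nonterminal]]
    # Laplace-smoothed adaptive probability of this operator in this context.
    total = sum(counts[(nonterminal, a)] + 1 for a in alternatives)
    p = (counts[(nonterminal, op)] + 1) / total
    counts[(nonterminal, op)] += 1
    bits = -math.log2(p)
    child_nonterminals = dict(GRAMMAR[nonterminal])[op]
    for child, nt in zip(children, child_nonterminals):
        bits += ideal_code_length(child, nt, counts)
    return bits

tree = ("ADD", [("LOAD", [("LOCAL", [])]), ("CONST", [])])
print(round(ideal_code_length(tree, "Expr", defaultdict(int)), 2), "bits")
```

The grammar plays the role of the syntactical specification in the abstract: it fixes the alphabet of choices at each node, and an entropy coder only needs to encode those choices.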
Abstract:
With the rapid scaling down of semiconductor process technology, process-variation-aware circuit design has become essential. Several statistical models have been proposed to deal with process variation. We propose an accurate BSIM model for handling variability in 45 nm CMOS technology. The MOSFET is designed to meet the low-standby-power technology specification of the International Technology Roadmap for Semiconductors (ITRS). Variations of the process parameters annealing temperature, oxide thickness, halo dose, and tilt angle of the halo implant are considered for the model development. One parameter variation at a time is considered for developing the model. The model is validated by matching its performance against device simulation results, with a reported error of less than 10%.
Abstract:
Quantitative use of satellite-derived rainfall products for various scientific applications often requires them to be accompanied by an error estimate. Rainfall estimates inferred from low-earth-orbiting satellites like the Tropical Rainfall Measuring Mission (TRMM) are subject to sampling errors of nonnegligible proportions owing to the narrow swath of satellite sensors coupled with a lack of continuous coverage due to infrequent satellite visits. The authors investigate sampling uncertainty of seasonal rainfall estimates from the active sensor of TRMM, namely, Precipitation Radar (PR), based on 11 years of the PR 2A25 data product over the Indian subcontinent. In this paper, a statistical bootstrap technique is investigated to estimate the relative sampling errors using the PR data themselves. Results verify power-law scaling characteristics of relative sampling errors with respect to the space-time scale of measurement. Sampling uncertainty estimates for mean seasonal rainfall were found to exhibit seasonal variations. To give a practical example of the implications of the bootstrap technique, PR relative sampling errors over the subtropical Mahanadi river basin, India, are examined. Results reveal that the bootstrap technique yields relative sampling errors of less than 33% (for the 2 degrees grid), 36% (for the 1 degrees grid), 45% (for the 0.5 degrees grid), and 57% (for the 0.25 degrees grid). With respect to rainfall type, overall sampling uncertainty was found to be dominated by sampling uncertainty due to stratiform rainfall over the basin. The study compares the resulting error estimates to those obtained from Latin hypercube sampling. Based on this study, the authors conclude that the bootstrap approach can be successfully used for ascertaining the relative sampling errors of TRMM-like satellites over gauged or ungauged basins lacking in situ validation data. This technique has wider implications for decision making before incorporating microwave orbital data products in basin-scale hydrologic modeling.
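A minimal sketch of the bootstrap idea on synthetic overpass rainfall samples (the PR 2A25 processing chain and space-time gridding are not reproduced): resample the available satellite visits with replacement and take the spread of the resampled seasonal means, relative to the mean, as the relative sampling error.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for rain-rate samples from individual satellite overpasses (mm/h).
overpass_rain = rng.gamma(shape=0.5, scale=4.0, size=120)

def relative_sampling_error(samples, n_boot=2000, rng=rng):
    """Bootstrap the seasonal-mean estimate and return std/mean as a percentage."""
    n = len(samples)
    boot_means = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.choice(samples, size=n, replace=True)
        boot_means[b] = resample.mean()
    return 100.0 * boot_means.std(ddof=1) / samples.mean()

print(f"relative sampling error ~ {relative_sampling_error(overpass_rain):.1f}%")
```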
Abstract:
A technique is developed to study random vibration of nonlinear systems. The method is based on the assumption that the joint probability density function of the response variables and input variables is Gaussian. It is shown that this method is more general than the statistical linearization technique in that it can handle non-Gaussian excitations and amplitude-limited responses. As an example a bilinear hysteretic system under white noise excitation is analyzed. The prediction of various response statistics by this technique is in good agreement with other available results.
Abstract:
We address the problem of robust formant tracking in continuous speech in the presence of additive noise. We propose a new approach based on mixture modeling of the formant contours. Our approach consists of two main steps: (i) Computation of a pyknogram based on multiband amplitude-modulation/frequency-modulation (AM/FM) decomposition of the input speech; and (ii) Statistical modeling of the pyknogram using mixture models. We experiment with both Gaussian mixture model (GMM) and Student's-t mixture model (tMM) and show that the latter is robust with respect to handling outliers in the pyknogram data, parameter selection, accuracy, and smoothness of the estimated formant contours. Experimental results on simulated data as well as noisy speech data show that the proposed tMM-based approach is also robust to additive noise. We present performance comparisons with a recently developed adaptive filterbank technique proposed in the literature and the classical Burg's spectral estimator technique, which show that the proposed technique is more robust to noise.
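An illustrative sketch of step (ii) using a Gaussian mixture (the paper's tMM variant would swap in a Student's-t mixture): fit a mixture model to per-frame pyknogram frequency samples and read formant estimates off the sorted component means. The frame data and formant values below are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
true_formants = [500.0, 1500.0, 2500.0]                 # Hz, assumed for this demo
# Synthetic pyknogram frequency samples for one frame, clustered around the formants.
frame = np.concatenate([rng.normal(f, 60.0, 200) for f in true_formants])

gmm = GaussianMixture(n_components=3, random_state=0).fit(frame.reshape(-1, 1))
estimates = np.sort(gmm.means_.ravel())                  # component means = formant estimates
print("estimated formants (Hz):", np.round(estimates, 1))
```

Repeating the fit frame by frame and linking the sorted means over time yields the kind of formant contours the abstract describes; the t-mixture is preferred because its heavier tails downweight pyknogram outliers.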
Abstract:
Carbon nanotubes (CNTs) grown on substrates are potential electron sources in field emission applications. Several studies have reported the use of CNTs in field emission devices, including field emission displays, X-ray tubes, electron microscopes, and cathode-ray lamps. Also, in recent years, conventional cold field emission cathodes have been realized in micro-fabricated arrays for medical X-ray imaging. CNT-based field emission cathode devices have potential uses in a variety of industrial and medical applications, including cancer treatment. The field emission performance of a single isolated CNT is found to be remarkable, but the situation becomes complex when an array of CNTs is used. At the same time, the use of arrays of CNTs is practical and economical. Indeed, such arrays can be grown easily on cathode substrates, and their collective dynamics can be utilized in a statistical sense such that the average emission intensity is high enough and the collective dynamics lead to longer emission life. The authors, in their previous publications, proposed a novel approach to obtain stabilized field emission current from a stacked CNT array with a pointed height distribution. A mesoscopic modeling technique was employed, which took into account electromechanical forces in the CNTs, as well as transport of conduction electrons coupled with electron-phonon-induced heat generation at the CNT tips. The reported analysis of pointed arrangements of the array showed that the current density distribution was greatly localized in the middle of the array, the scatter due to the electrodynamic force field was minimized, and the temperature transients were much smaller compared to those in an array with a random height distribution. In the present paper we develop a method to compute the emission efficiency of the CNT array in terms of the number of electrons hitting the anode surface, using trajectory calculations. The effects of secondary electron emission and parasitic capacitive nonlinearity on the current-voltage signals are accounted for. The field emission efficiency of a stacked CNT array with various pointed height distributions is compared to that of arrays with random and uniform height distributions. The effect of this parasitic nonlinearity on the emission switch-on voltage is estimated by model-based simulation and the Monte Carlo method.
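A crude, heavily simplified illustration, not the authors' mesoscopic electro-thermo-mechanical model: a Monte Carlo comparison of the total Fowler-Nordheim emission current of a small CNT array for a pointed height distribution versus random height distributions. The geometric field-enhancement and screening factors, tip dimensions, and applied field are assumed values for the sketch only.

```python
import numpy as np

rng = np.random.default_rng(5)
N, radius, E_applied = 25, 5e-9, 2e7          # number of tips, tip radius (m), applied field (V/m)
a_fn, b_fn, phi = 1.54e-6, 6.83e9, 4.8        # Fowler-Nordheim constants, assumed work function (eV)
tip_area = np.pi * radius ** 2

def array_current(heights):
    """Crude per-tip Fowler-Nordheim current, summed over the array."""
    beta = heights / radius                    # geometric field enhancement
    beta *= heights / heights.max()            # crude screening by taller neighbours
    F = beta * E_applied                       # local field at each tip (V/m)
    J = (a_fn / phi) * F ** 2 * np.exp(-b_fn * phi ** 1.5 / F)
    return (J * tip_area).sum()

x = np.linspace(-1, 1, N)
pointed = 1e-6 * (1.0 + 0.5 * (1.0 - x ** 2))          # taller toward the array centre
random_trials = [array_current(rng.uniform(1e-6, 1.5e-6, N)) for _ in range(200)]
print(f"pointed array current : {array_current(pointed):.3e} A")
print(f"random arrays (mean)  : {np.mean(random_trials):.3e} A")
```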
Abstract:
Significant changes in extreme rainfall characteristics over India are reported in recent studies, though there are disagreements on the spatial uniformity and causes of the trends. Based on recent theoretical advancements in Extreme Value Theory (EVT), we analyze changes in extreme rainfall characteristics over India using a high-resolution daily gridded (1 degree latitude x 1 degree longitude) dataset. Intensity, duration and frequency of excess rain over a high threshold in the summer monsoon season are modeled by non-stationary distributions whose parameters vary with physical covariates such as the El Niño-Southern Oscillation index (ENSO index), an indicator of large-scale natural variability; global average temperature, an indicator of human-induced global warming; and local mean temperatures, which possibly indicate more localized changes. Each non-stationary model considers one physical covariate, and the best statistical model chosen at each rainfall grid identifies the most significant physical driver for each extreme rainfall characteristic at that grid. Intensity, duration and frequency of extreme rainfall exhibit non-stationarity due to different drivers, and no spatially uniform pattern is observed in their changes across the country. At most locations, the duration of extreme rainfall spells is found to be stationary, while non-stationary associations between intensity and frequency and local changes in temperature are detected at a large number of locations. This study presents the first application of non-stationary statistical modeling of the intensity, duration and frequency of extreme rainfall over India. The developed models are further used for rainfall frequency analysis to show changes in the 100-year extreme rainfall event. Our findings indicate the varying nature of each extreme rainfall characteristic and its drivers and emphasize the necessity of a comprehensive framework to assess the resulting risks of precipitation-induced flooding.
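A hedged sketch of one non-stationary peaks-over-threshold model of the kind described: excesses over a high threshold follow a Generalized Pareto distribution whose scale parameter depends on a covariate (an ENSO-like index here), fitted by maximum likelihood. The data are synthetic, and the paper's covariate comparison and model selection are not reproduced.

```python
import numpy as np
from scipy.stats import genpareto
from scipy.optimize import minimize

rng = np.random.default_rng(2)
covariate = rng.normal(size=300)                         # standardized ENSO-like index
true_scale = np.exp(2.0 + 0.3 * covariate)               # scale varies with the covariate
excess = genpareto.rvs(c=0.1, scale=true_scale, size=covariate.size, random_state=rng)

def nll(params):
    """Negative log-likelihood of the covariate-dependent GPD."""
    a, b, shape = params
    scale = np.exp(a + b * covariate)
    return -genpareto.logpdf(excess, c=shape, scale=scale).sum()

fit = minimize(nll, x0=[1.0, 0.0, 0.05], method="Nelder-Mead")
a_hat, b_hat, shape_hat = fit.x
print(f"scale ~ exp({a_hat:.2f} + {b_hat:.2f} * covariate), shape = {shape_hat:.2f}")
```

A significantly nonzero covariate coefficient (b) is what marks the grid cell as non-stationary with respect to that physical driver.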
Abstract:
We present an improved language modeling technique for a Lempel-Ziv-Welch (LZW) based language identification (LID) scheme. The previous approach to LID using the LZW algorithm prepares the language pattern table with the LZW algorithm itself. Because of the sequential nature of the LZW algorithm, several language-specific patterns were missing from the pattern table. To overcome this, we build a universal pattern table containing all patterns of different lengths. For each language, its corresponding language-specific pattern table is constructed by retaining the patterns of the universal table whose frequency of appearance in the training data is above a threshold. This approach reduces the classification score (compression ratio [LZW-CR] or the weighted discriminant score [LZW-WDS]) for non-native languages and increases the LID performance considerably.
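A simplified sketch of the pattern-table idea, not the exact LZW-CR/LZW-WDS scoring: count all substrings up to a fixed length as a stand-in for the universal table, prune patterns below a frequency threshold per language, and classify test text by how well it parses greedily against each language's table (fewer emitted tokens means a higher compression-ratio-like score). Training and test strings are toy examples.

```python
from collections import Counter

def pattern_counts(text, max_len=4):
    """Count all substrings up to max_len (the 'universal table' idea)."""
    counts = Counter()
    for i in range(len(text)):
        for j in range(i + 1, min(i + 1 + max_len, len(text) + 1)):
            counts[text[i:j]] += 1
    return counts

def language_table(train_text, threshold=2, max_len=4):
    return {p for p, c in pattern_counts(train_text, max_len).items() if c >= threshold}

def compression_score(test_text, table):
    """Greedy longest-match parse; higher characters-per-token means a better fit."""
    i, tokens = 0, 0
    while i < len(test_text):
        match = 1
        for l in range(min(4, len(test_text) - i), 0, -1):
            if test_text[i:i + l] in table:
                match = l
                break
        tokens += 1
        i += match
    return len(test_text) / tokens

tables = {"en": language_table("the quick brown fox jumps over the lazy dog " * 20),
          "fr": language_table("le renard brun rapide saute par dessus le chien " * 20)}
test = "the dog jumps over the fox"
print(max(tables, key=lambda lang: compression_score(test, tables[lang])))
```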
Abstract:
In this paper we report a modeling technique and an analysis of wave dispersion in a cellular composite laminate with spatially modulated microstructure, which can be modeled by parameterization and homogenization at an appropriate length scale. Higher-order beam theory is applied and the system of wave equations is derived. Homogenization of these equations is carried out at the scale of the wavelength and frequency of the individual wave modes. Smaller-scale scattering below the order of the cell size is filtered out in the present approach. The longitudinal dispersion relations for different values of a modulation parameter are analyzed, which indicates the existence of stop- and pass-band patterns. Dispersion relations for the flexural-shear case are also analyzed, which indicates a tendency toward forming stop and pass bands for increasing values of a shear stiffness modulation parameter. The effect of the phase angle (θ) of the incident wave indicates the existence of a larger number of alternating stop bands and pass bands for θ = 45°.
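Not the paper's higher-order homogenized beam model, but a standard transfer-matrix calculation for longitudinal waves in a periodically modulated rod, illustrating how stiffness modulation opens stop bands: when |cos(kL)| for the unit cell exceeds 1, the Bloch wavenumber is complex and the frequency lies in a stop band. All material and cell values are assumed.

```python
import numpy as np

E1, E2, rho, L1, L2 = 70e9, 10e9, 2700.0, 0.01, 0.01   # assumed two-layer unit cell

def bloch_parameter(omega):
    """Return 0.5*trace of the unit-cell transfer matrix = cos(k_Bloch * cell length)."""
    total = np.eye(2)
    for E, L in ((E1, L1), (E2, L2)):
        c = np.sqrt(E / rho)                  # layer longitudinal wave speed
        k = omega / c
        T = np.array([[np.cos(k * L),           np.sin(k * L) / (E * k)],
                      [-E * k * np.sin(k * L),  np.cos(k * L)]])
        total = T @ total
    return 0.5 * np.trace(total)

for f in (5e3, 2e4, 5e4, 1e5):
    mu = bloch_parameter(2 * np.pi * f)
    band = "stop" if abs(mu) > 1 else "pass"
    print(f"{f/1e3:6.1f} kHz: cos(kL) = {mu:+.2f} -> {band} band")
```

The alternation of pass and stop bands with frequency in this toy rod is the same qualitative pattern the abstract reports for the modulated laminate.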
Abstract:
Detecting and quantifying the presence of human-induced climate change in regional hydrology is important for studying the impacts of such changes on water resources systems as well as for reliable future projections and policy making for adaptation. In this article a formal fingerprint-based detection and attribution analysis has been attempted to study the changes in observed monsoon precipitation and streamflow in the rain-fed Mahanadi River Basin in India, considering the variability across different climate models. This is achieved through the use of observations, several climate model runs, a principal component analysis and regression based statistical downscaling technique, and a Genetic Programming based rainfall-runoff model. It is found that the decreases in observed hydrological variables across the second half of the 20th century lie outside the range expected from natural internal variability of climate alone at the 95% statistical confidence level, for most of the climate models considered. For several climate models, such changes are consistent with those expected from anthropogenic emissions of greenhouse gases. However, unequivocal attribution to human-induced climate change cannot be claimed across all the climate models, and uncertainties in our detection procedure, arising from various sources including the use of models, cannot be ruled out. Changes in solar irradiance and volcanic activity are considered as other plausible natural external causes of climate change. The time evolution of the anthropogenic climate change "signal" in the hydrological observations, above the natural internal climate variability "noise", shows that detection is achieved earlier in streamflow than in precipitation for most of the climate models, suggesting larger impacts of human-induced climate change on streamflow than on precipitation at the river basin scale.
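A highly simplified sketch of the detection step only, not the full fingerprint/attribution analysis or the downscaling and rainfall-runoff chain: test whether an observed trend falls outside the 95% range of trends produced by natural internal variability, the latter represented here by synthetic control-run segments.

```python
import numpy as np

rng = np.random.default_rng(6)
years = np.arange(1950, 2001)

def trend(series):
    """Least-squares linear trend (units per year)."""
    return np.polyfit(years, series, 1)[0]

# Synthetic stand-ins: an observed streamflow record with a decline,
# and control segments representing internal variability with no forced change.
observed = 100 - 0.25 * (years - years[0]) + rng.normal(0, 4, len(years))
control_trends = np.array([trend(100 + rng.normal(0, 4, len(years))) for _ in range(1000)])

lo, hi = np.percentile(control_trends, [2.5, 97.5])
obs_trend = trend(observed)
detected = not (lo <= obs_trend <= hi)
print(f"observed trend {obs_trend:.3f}/yr, natural range [{lo:.3f}, {hi:.3f}] -> detected: {detected}")
```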
Abstract:
A generalized technique is proposed for modeling the effects of process variations on dynamic power by directly relating variations in process parameters to variations in the dynamic power of a digital circuit. The dynamic power of a 2-input NAND gate is characterized by mixed-mode simulations, to be used as a library element for 65 nm gate length technology. The proposed methodology is demonstrated with a multiplier circuit built using the NAND gate library, by characterizing its dynamic power through Monte Carlo analysis. The statistical technique of Response Surface Methodology (RSM), using Design of Experiments (DOE) and the Least Squares Method (LSM), is employed to generate a "hybrid model" for gate power that accounts for simultaneous variations in multiple process parameters. We demonstrate that our hybrid-model-based statistical design approach results in considerable savings in the power budget of low-power CMOS designs with an error of less than 1%, and with significant reductions in uncertainty, by at least 6X on a normalized basis, compared against worst-case design.
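A hedged sketch of the response-surface step: fit a quadratic model of gate dynamic power in a few normalized process parameters by least squares over sampled design points. The simulated_power function below is a stand-in for the mixed-mode NAND-gate simulation, not the authors' data, and the parameter names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
# Normalized process parameters, e.g. gate length, oxide thickness, Vth shift.
X = rng.uniform(-1, 1, size=(200, 3))

def simulated_power(x):
    """Placeholder for a mixed-mode simulation returning dynamic power (arbitrary units)."""
    return (1.0 + 0.2 * x[0] - 0.1 * x[1] + 0.05 * x[2]
            + 0.08 * x[0] * x[1] + 0.03 * x[2] ** 2)

y = np.array([simulated_power(x) for x in X]) + rng.normal(0, 0.005, 200)

def quadratic_features(X):
    """Design matrix for a full second-order response surface."""
    n = X.shape[1]
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(n)]                                   # linear terms
    cols += [X[:, i] * X[:, j] for i in range(n) for j in range(i, n)]    # second-order terms
    return np.column_stack(cols)

coeffs, *_ = np.linalg.lstsq(quadratic_features(X), y, rcond=None)
print("fitted response-surface coefficients:", np.round(coeffs, 3))
```

Once fitted, the surface replaces repeated circuit simulation when propagating simultaneous parameter variations through Monte Carlo analysis.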
Abstract:
Magnetorheological dampers are intrinsically nonlinear devices, which makes the modeling and design of a suitable control algorithm an interesting and challenging task. To evaluate the potential of magnetorheological (MR) dampers in control applications and to take full advantage of their unique features, a mathematical model that accurately reproduces their dynamic behavior has to be developed, and then a proper control strategy has to be adopted that is implementable and can fully utilize their capabilities as a semi-active control device. The present paper focuses on both aspects. First, the paper reports the testing of a magnetorheological damper with a universal testing machine, for a set of frequencies, amplitudes, and currents. A modified Bouc-Wen model considering the amplitude and input-current dependence of the damper parameters has been proposed. It has been shown that the damper response can be satisfactorily predicted with this model. Second, a backstepping-based nonlinear current monitoring of magnetorheological dampers for semi-active control of structures under earthquakes has been developed. It provides stable nonlinear magnetorheological damper current monitoring directly based on system feedback, such that the current change in the magnetorheological damper is gradual. Unlike other MR damper control techniques available in the literature, the main advantage of the proposed technique lies in its current input prediction directly based on system feedback and smooth update of the input current. Furthermore, while developing the proposed semi-active algorithm, the dynamics of the supplied and commanded current to the damper have been considered. The efficiency of the proposed technique has been shown using a base-isolated three-story building under a set of seismic excitations. A comparison with the widely used clipped-optimal strategy has also been shown.
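A minimal sketch of a plain Bouc-Wen hysteresis model under imposed sinusoidal displacement (the paper's modified model additionally makes the parameters amplitude- and input-current-dependent); all parameter values below are illustrative, not identified from test data.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, c0, A, beta, gamma, n = 8.0e3, 1.0e3, 50.0, 3.0e6, 3.0e6, 2.0   # assumed parameters
amp, freq = 0.01, 1.0                                                   # displacement amplitude (m), Hz

def xd(t):
    """Imposed damper velocity for a sinusoidal displacement test."""
    return amp * 2 * np.pi * freq * np.cos(2 * np.pi * freq * t)

def bouc_wen(t, state):
    """Evolution of the hysteretic variable z."""
    z = state[0]
    v = xd(t)
    dz = -gamma * abs(v) * z * abs(z) ** (n - 1) - beta * v * abs(z) ** n + A * v
    return [dz]

sol = solve_ivp(bouc_wen, (0.0, 3.0), [0.0], max_step=1e-3, dense_output=True)
t = np.linspace(0.0, 3.0, 1000)
z = sol.sol(t)[0]
force = c0 * xd(t) + alpha * z              # damper force time history
print(f"peak damper force ~ {abs(force).max():.0f} N")
```

In the modified model of the abstract, quantities like alpha and c0 would be functions of excitation amplitude and coil current rather than constants.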
Abstract:
Two algorithms are outlined, each of which has interesting features for modeling the spatial variability of rock depth. In this paper, the reduced level of rock at Bangalore, India, is obtained from data of 652 boreholes in an area covering 220 sq. km. Support vector machine (SVM) and relevance vector machine (RVM) models have been utilized to predict the reduced level of rock in the subsurface of Bangalore and to study the spatial variability of the rock depth. The support vector machine (SVM), which is firmly grounded in statistical learning theory, uses a regression technique based on an epsilon-insensitive loss function. The RVM is a probabilistic model similar to the widespread SVM, but the training takes place in a Bayesian framework. Prediction results show the ability of these learning machines to build accurate models for the spatial variability of rock depth with strong predictive capabilities. The paper also highlights the capability of RVM over the SVM model.
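A sketch of the SVM part only, using scikit-learn's epsilon-insensitive support vector regression to map borehole coordinates to reduced rock level (the RVM is not sketched); the coordinates and levels below are synthetic, not the Bangalore borehole data.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
# Synthetic borehole locations (easting, northing in metres) and reduced rock levels (m).
coords = rng.uniform(0, 15_000, size=(652, 2))
rock_level = (890 - 0.002 * coords[:, 0] + 0.001 * coords[:, 1]
              + 5 * np.sin(coords[:, 0] / 2000) + rng.normal(0, 1.5, 652))

# RBF-kernel SVR with the epsilon-insensitive loss, after feature standardization.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=0.5))
model.fit(coords, rock_level)

query = np.array([[7500.0, 5000.0]])
print(f"predicted reduced rock level: {model.predict(query)[0]:.1f} m")
```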
Abstract:
A hybrid technique to model two-dimensional fracture problems, which makes use of the displacement discontinuity and direct boundary element methods, is presented. The direct boundary element method is used to model the finite domain of the body, while displacement discontinuity elements are utilized to represent the cracks. Thus the advantages of the component methods are effectively combined. This method has been implemented in a computer program, and numerical results which show the accuracy of the present method are presented. The cases of bodies containing edge cracks as well as multiple cracks are considered. A direct method and an iterative technique are described. The present hybrid method is most suitable for modeling problems involving crack propagation.