127 results for physically based modeling
Abstract:
In this paper, we report an analysis of the protein sequence length distribution for 13 bacteria, four archaea and one eukaryote whose genomes have been completely sequenced. The frequency distributions of protein sequence length for all 18 organisms are remarkably similar, independent of genome size, and can be described in terms of a lognormal probability distribution function. A simple stochastic model based on multiplicative processes has been proposed to explain the sequence length distribution. The stochastic model supports the random-origin hypothesis of protein sequences in genomes. Distributions of large proteins deviate from the overall lognormal behavior. Their cumulative distribution follows a power law analogous to Pareto's law used to describe the income distribution of the wealthy. The protein sequence length distribution in genomes of organisms has important implications for microbial evolution and applications. (C) 1999 Elsevier Science B.V. All rights reserved.
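As a rough illustration of the multiplicative stochastic model invoked above, the following toy simulation (all parameters chosen arbitrarily, not taken from the paper) shows how repeated multiplication by random factors makes the log-length a sum of independent terms, and hence the length approximately lognormal:

```python
import math
import random
import statistics

def simulate_lengths(n_proteins=1000, l0=100.0, steps=100, sigma=0.1, seed=0):
    """Toy multiplicative process: each step multiplies the length by a
    random factor exp(N(0, sigma)), so log-length is a sum of independent
    Gaussian increments and the length is approximately lognormal."""
    rng = random.Random(seed)
    lengths = []
    for _ in range(n_proteins):
        l = l0
        for _ in range(steps):
            l *= math.exp(rng.gauss(0.0, sigma))
        lengths.append(l)
    return lengths

lengths = simulate_lengths()
logs = [math.log(l) for l in lengths]
# log-lengths cluster around log(l0); their spread is about sigma*sqrt(steps)
print(statistics.mean(logs), statistics.stdev(logs))
```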
Abstract:
This paper presents a new approach to modeling concrete cracking: a hybrid method combining the displacement discontinuity element method and the direct boundary element method, incorporating the fictitious crack model. A fracture mechanics approach is followed using Hillerborg's fictitious crack model. A boundary element based substructure method and the hybrid displacement discontinuity/direct boundary element technique are compared in this paper. In order to represent the process zone ahead of the crack, closing forces are assumed to act in such a way that they obey a linear normal stress-crack opening displacement law. Plain concrete beams with and without an initial crack under three-point loading were analyzed by both methods. The numerical results obtained were shown to agree well with results from the existing finite element method. The model is capable of reproducing the whole range of load-deflection response, including strain-softening and snap-back behavior, as illustrated in the numerical examples. (C) 2011 Elsevier Ltd. All rights reserved.
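The linear normal stress-crack opening displacement law assumed for the closing forces can be sketched as follows; the tensile strength and critical opening used here are illustrative placeholders, not values from the paper:

```python
def cohesive_stress(w, f_t=3.0, w_c=0.1):
    """Linear softening law of the fictitious crack model: the closing
    stress decreases linearly from the tensile strength f_t (MPa) at zero
    opening to zero at the critical opening w_c (mm)."""
    if w < 0:
        raise ValueError("crack opening must be non-negative")
    if w >= w_c:
        return 0.0          # fully open crack: no stress transfer
    return f_t * (1.0 - w / w_c)

print(cohesive_stress(0.0))   # full tensile strength at zero opening
print(cohesive_stress(0.05))  # half-way through softening
print(cohesive_stress(0.2))   # beyond critical opening: zero
```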
Abstract:
Approximate deconvolution modeling is a very recent approach to large eddy simulation of turbulent flows. It has been applied to compressible flows with success. Here, a premixed flame which forms in the wake of a flameholder has been selected to examine the subgrid-scale modeling of reaction rate by this new method because a previous plane two-dimensional simulation of this wake flame, using a wrinkling function and artificial flame thickening, had revealed discrepancies when compared with experiment. The present simulation is of the temporal evolution of a round wakelike flow at two Reynolds numbers, Re = 2000 and 10,000, based on wake defect velocity and wake diameter. A Fourier-spectral code has been used. The reaction is single-step and irreversible, and the rate follows an Arrhenius law. The reference simulation at the lower Reynolds number is fully resolved. At Re = 10,000, subgrid-scale contributions are significant. It was found that subgrid-scale modeling in the present simulation agrees more closely with unresolved subgrid-scale effects observed in experiment. Specifically, the highest contributions appeared in thin folded regions created by vortex convection. The wrinkling function approach had not selected subgrid-scale effects in these regions.
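The single-step irreversible Arrhenius rate law mentioned above has the generic form below; the pre-exponential factor and activation temperature are placeholders, not the values used in the simulation:

```python
import math

def arrhenius_rate(Y_fuel, T, A=1.0e6, T_a=15000.0):
    """Generic single-step irreversible Arrhenius reaction rate:
    rate = A * Y_fuel * exp(-T_a / T), where Y_fuel is the fuel mass
    fraction and T_a the activation temperature.  All constants here
    are illustrative placeholders."""
    return A * Y_fuel * math.exp(-T_a / T)

# the rate grows very steeply with temperature
print(arrhenius_rate(0.05, 1200.0))
print(arrhenius_rate(0.05, 2000.0))
```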
Abstract:
A transient macroscopic model is developed for studying heat and mass transfer in a single-pass laser surface alloying process, with particular emphasis on non-equilibrium solidification considerations. The solution for the species concentration distribution requires suitable treatment of non-equilibrium mass transfer conditions. In this context, microscopic features pertaining to non-equilibrium effects on account of solutal undercooling are incorporated through the formulation of a modified partition coefficient. The effective partition coefficient is numerically modeled by means of a number of macroscopically observable parameters related to the solidifying domain. The numerical model is developed so that the modifications on account of non-equilibrium solidification considerations can be conveniently implemented in existing numerical codes based on equilibrium solidification considerations.
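One common way to express such a velocity-dependent effective partition coefficient is an Aziz-type solute-trapping function; whether the paper uses exactly this form is an assumption here, and all constants below are illustrative:

```python
def effective_partition_coefficient(v, k_e=0.14, a0=1.0e-9, D=2.5e-9):
    """Aziz-type velocity-dependent (non-equilibrium) partition coefficient:
    k_v = (k_e + a0*v/D) / (1 + a0*v/D), with interface velocity v (m/s),
    trapping length a0 (m) and solute diffusivity D (m^2/s).
    Tends to the equilibrium value k_e as v -> 0 and to 1 (complete
    solute trapping) as v -> infinity.  Constants are illustrative."""
    beta = a0 * v / D
    return (k_e + beta) / (1.0 + beta)

print(effective_partition_coefficient(0.0))    # equilibrium limit k_e
print(effective_partition_coefficient(100.0))  # near-complete trapping
```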
Abstract:
We propose a scheme for the compression of tree structured intermediate code consisting of a sequence of trees specified by a regular tree grammar. The scheme is based on arithmetic coding, and the model that works in conjunction with the coder is automatically generated from the syntactical specification of the tree language. Experiments on data sets consisting of intermediate code trees yield compression ratios ranging from 2.5 to 8, for file sizes ranging from 167 bytes to 1 megabyte.
Abstract:
Numerical modeling of several turbulent nonreacting and reacting spray jets is carried out using a fully stochastic separated flow (FSSF) approach. As is widely used, the carrier phase is considered in an Eulerian framework, while the dispersed phase is tracked in a Lagrangian framework following the stochastic separated flow (SSF) model. Various interactions between the two phases are taken into account by means of two-way coupling. Spray evaporation is described using a thermal model with infinite conductivity in the liquid phase. The gas-phase turbulence terms are closed using the k-epsilon model. A novel mixture fraction based approach is used to stochastically model the fluctuating temperature and composition in the gas phase, and these are then used to refine the estimates of the heat and mass transfer rates between the droplets and the surrounding gas phase. In classical SSF (CSSF) methods, stochastic fluctuations of only the gas-phase velocity are modeled. Successful implementation of the FSSF approach to turbulent nonreacting and reacting spray jets is demonstrated. Results are compared against experimental measurements as well as with predictions using the CSSF approach for both nonreacting and reacting spray jets. The FSSF approach shows little difference from the CSSF predictions for nonreacting spray jets, but the differences are significant for reacting spray jets. In general, the FSSF approach gives good predictions of the flame length and structure, but further improvements in modeling may be needed to improve the accuracy of some details of the predictions. (C) 2011 The Combustion Institute. Published by Elsevier Inc. All rights reserved.
Abstract:
Traditional subspace based speech enhancement (SSE) methods use linear minimum mean square error (LMMSE) estimation, which is optimal if the Karhunen-Loève transform (KLT) coefficients of speech and noise are Gaussian distributed. In this paper, we investigate the use of a Gaussian mixture (GM) density for modeling the non-Gaussian statistics of the clean speech KLT coefficients. Using a Gaussian mixture model (GMM), the optimum minimum mean square error (MMSE) estimator is found to be nonlinear, and the traditional LMMSE estimator is shown to be a special case. Experimental results show that the proposed method provides better enhancement performance than the traditional subspace based methods. Index Terms: subspace based speech enhancement, Gaussian mixture density, MMSE estimation.
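Under a Gaussian-mixture prior with additive Gaussian noise, the MMSE estimator becomes a posterior-weighted combination of per-component linear (Wiener-type) estimates. A scalar sketch with illustrative parameters (the paper's vector KLT-domain estimator is analogous):

```python
import math

def gmm_mmse(weights, means, variances, noise_var, y):
    """MMSE estimate of x from y = x + n, where x has a Gaussian-mixture
    prior and n is zero-mean Gaussian with variance noise_var.
    The estimate is a posterior-weighted sum of per-component LMMSE
    (Wiener-type) estimates; with one component it reduces to the
    classical linear MMSE estimator."""
    post = []
    for w, mu, s2 in zip(weights, means, variances):
        v = s2 + noise_var                  # marginal variance of y under component k
        lik = w * math.exp(-0.5 * (y - mu) ** 2 / v) / math.sqrt(2 * math.pi * v)
        post.append(lik)
    total = sum(post)
    est = 0.0
    for p, mu, s2 in zip(post, means, variances):
        gain = s2 / (s2 + noise_var)        # per-component Wiener gain
        est += (p / total) * (mu + gain * (y - mu))
    return est

# single component: identical to the linear MMSE (Wiener) estimate
print(gmm_mmse([1.0], [0.0], [2.0], 1.0, 3.0))
```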
Abstract:
Effective feature extraction for robust speech recognition is a widely addressed topic, and currently there is much effort to invoke non-stationary signal models instead of the quasi-stationary signal models leading to standard features such as LPC or MFCC. Joint amplitude modulation and frequency modulation (AM-FM) is a classical non-parametric approach to non-stationary signal modeling, and recently new feature sets for automatic speech recognition (ASR) have been derived based on a multi-band AM-FM representation of the signal. We consider several of these representations and compare their performances for robust speech recognition in noise, using the AURORA-2 database. We show that the proposed FEPSTRUM representation is more effective than the others. We also propose an improvement to FEPSTRUM based on the Teager energy operator (TEO) and show that it can selectively outperform even FEPSTRUM.
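The discrete Teager energy operator underlying the proposed TEO improvement can be sketched as follows (the signal here is a synthetic sinusoid, purely for illustration):

```python
import math

def teager_energy(x):
    """Discrete Teager energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1].
    For a pure sinusoid A*cos(w*n) it yields A^2 * sin^2(w), so it
    jointly tracks amplitude and frequency of the signal."""
    return [x[n] ** 2 - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]

A, w = 2.0, 0.3
x = [A * math.cos(w * n) for n in range(50)]
psi = teager_energy(x)
print(psi[0])  # constant for a pure sinusoid: A**2 * sin(w)**2
```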
Abstract:
With the extensive use of dynamic voltage scaling (DVS) there is an increasing need for voltage scalable models. Similarly, leakage being very sensitive to temperature motivates the need for a temperature scalable model as well. We characterize standard cell libraries for statistical leakage analysis based on models for transistor stacks. Modeling stacks has the advantage of using a single model across many gates, thereby reducing the number of models that need to be characterized. Our experiments on 15 different gates show that we needed only 23 models to predict the leakage across 126 input vector combinations. We investigate the use of neural networks for the combined PVT model for the stacks, which can capture the effect of inter-die and intra-gate variations, supply voltage (0.6-1.2 V) and temperature (0-100 °C) on leakage. Results show that neural network based stack models can predict the PDF of leakage current across supply voltage and temperature accurately, with the average error in the mean being less than 2% and that in the standard deviation being less than 5% across a range of voltages and temperatures.
Abstract:
We present an improved language modeling technique for a Lempel-Ziv-Welch (LZW) based language identification (LID) scheme. The previous approach to LID prepares the language pattern table using the LZW algorithm. Because of the sequential nature of the LZW algorithm, several language-specific patterns were missing from the pattern table. To overcome this, we build a universal pattern table, which contains all patterns of different lengths. For each language, its corresponding language-specific pattern table is constructed by retaining the patterns of the universal table whose frequency of appearance in the training data is above a threshold. This approach reduces the classification score (compression ratio [LZW-CR] or weighted discriminant score [LZW-WDS]) for non-native languages and increases the LID performance considerably.
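The universal pattern table idea can be sketched as exhaustively collecting all substrings up to a maximum length and then thresholding on training-data frequency; the maximum length and threshold below are illustrative, not values from the paper:

```python
from collections import Counter

def universal_table(text, max_len=4):
    """Count every pattern (substring) of length 1..max_len, unlike
    sequential LZW parsing, which only records the patterns it happens
    to encounter along one pass through the text."""
    table = Counter()
    for i in range(len(text)):
        for j in range(i + 1, min(i + max_len, len(text)) + 1):
            table[text[i:j]] += 1
    return table

def language_table(train_text, max_len=4, threshold=2):
    """Language-specific table: keep only the patterns whose frequency
    in the training data is above the threshold."""
    counts = universal_table(train_text, max_len)
    return {p for p, c in counts.items() if c > threshold}

tbl = language_table("abracadabra", max_len=3, threshold=1)
print(sorted(tbl))
```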
Abstract:
Land cover (LC) refers to what is actually present on the ground and provides insights into the underlying solution for improving the conditions of many issues, from water pollution to sustainable economic development. One of the greatest challenges of modeling LC changes using remotely sensed (RS) data is the scale-resolution mismatch: the spatial resolution of detail is less than what is required, and this sub-pixel level heterogeneity is important but not readily knowable. In practice, many pixels consist of a mixture of multiple classes. The solution to the mixed pixel problem typically centers on soft classification techniques, which estimate the proportion of each class within a pixel. However, the spatial distribution of these class components within the pixel remains unknown. This study investigates Orthogonal Subspace Projection, an unmixing technique, and uses a pixel-swapping algorithm for predicting the spatial distribution of LC at sub-pixel resolution. Both algorithms are applied to many simulated and actual satellite images for validation. The accuracy on the simulated images is ~100%, while IRS LISS-III and MODIS data show accuracies of 76.6% and 73.02%, respectively. This demonstrates the relevance of these techniques for applications such as urban-nonurban and forest-nonforest classification studies.
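A minimal sketch of the pixel-swapping idea, simplified to a binary class map with an unweighted 8-neighbour attractiveness (published pixel-swapping formulations typically use a distance-weighted neighbourhood); the grid and block layout below are illustrative:

```python
def attractiveness(grid, r, c):
    """Attractiveness of a sub-pixel = number of class-1 sub-pixels among
    its 8 neighbours (a simple, unweighted neighbourhood kernel)."""
    n_rows, n_cols = len(grid), len(grid[0])
    s = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            rr, cc = r + dr, c + dc
            if 0 <= rr < n_rows and 0 <= cc < n_cols:
                s += grid[rr][cc]
    return s

def pixel_swap(grid, blocks, n_iter=20):
    """Simplified pixel swapping: within each coarse-pixel block (a list
    of sub-pixel coordinates), move class-1 labels from the least
    attractive location to the most attractive empty one, so the class
    proportion of every block is preserved while clustering improves."""
    for _ in range(n_iter):
        swapped = False
        for block in blocks:
            ones = [(attractiveness(grid, r, c), r, c) for r, c in block if grid[r][c] == 1]
            zeros = [(attractiveness(grid, r, c), r, c) for r, c in block if grid[r][c] == 0]
            if not ones or not zeros:
                continue
            a1, r1, c1 = min(ones)
            a0, r0, c0 = max(zeros)
            if a0 > a1:                      # swap only if it improves clustering
                grid[r1][c1], grid[r0][c0] = 0, 1
                swapped = True
        if not swapped:
            break
    return grid

# 4x4 fine grid = four 2x2 coarse pixels, each with a fraction of 0.5
grid = [[1, 0, 0, 1],
        [0, 1, 1, 0],
        [1, 0, 0, 1],
        [0, 1, 1, 0]]
blocks = [[(br + dr, bc + dc) for dr in (0, 1) for dc in (0, 1)]
          for br in (0, 2) for bc in (0, 2)]
before = [sum(grid[r][c] for r, c in b) for b in blocks]
result = pixel_swap([row[:] for row in grid], blocks)
for row in result:
    print(row)
```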
Abstract:
Rapid urbanisation in India has posed serious challenges to decision makers in regional planning involving a plethora of issues, including the provision of basic amenities (electricity, water, sanitation, transport, etc.). Urban planning entails an understanding of landscape and urban dynamics with causal factors. Identifying, delineating and mapping landscapes on a temporal scale provide an opportunity to monitor the changes, which is important for natural resource management and sustainable planning activities. Multi-source, multi-sensor, multi-temporal, multi-frequency or multi-polarization remote sensing data with efficient classification algorithms and pattern recognition techniques aid in capturing these dynamics. This paper analyses the landscape dynamics of Greater Bangalore by: (i) characterisation of direct impervious surface, (ii) computation of forest fragmentation indices and (iii) modeling to quantify and categorise urban changes. Linear unmixing is used for solving the mixed pixel problem of coarse resolution super spectral MODIS data for impervious surface characterisation. Fragmentation indices were used to classify forests as interior, perforated, edge, transitional, patch and undetermined. Based on this, an urban growth model was developed to determine the type of urban growth: infill, expansion and outlying growth. This helped in visualising urban growth poles and the consequences of earlier policy decisions, which can help in evolving strategies for effective land use policies.
Abstract:
We extend the recently proposed spectral integration based psychoacoustic model for sinusoidal distortions to the MDCT domain. The estimated masking threshold additionally depends on the sub-band spectral flatness measure of the signal, which accounts for the non-sinusoidal distortion introduced by masking. The expressions for the masking threshold are derived, and the validity of the proposed model is established through perceptual transparency tests of audio clips. Test results indicate that we do achieve transparent quality reconstruction with the new model. The performance of the model is compared with the MPEG psychoacoustic models with respect to the estimated perceptual entropy (PE). The results show that the proposed model predicts a lower PE than the other models.
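The sub-band spectral flatness measure referred to above is conventionally the ratio of the geometric mean to the arithmetic mean of the power spectrum in the band; a minimal sketch:

```python
import math

def spectral_flatness(power):
    """Spectral flatness measure of a power spectrum (or sub-band):
    geometric mean divided by arithmetic mean.  Equals 1.0 for a flat
    (noise-like) band and approaches 0 for a tonal (sinusoid-dominated)
    band.  Assumes strictly positive power values."""
    n = len(power)
    gm = math.exp(sum(math.log(p) for p in power) / n)
    am = sum(power) / n
    return gm / am

print(spectral_flatness([1.0, 1.0, 1.0, 1.0]))       # 1.0 (flat band)
print(spectral_flatness([100.0, 0.01, 0.01, 0.01]))  # tonal band: near 0
```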
Abstract:
The main idea proposed in this paper is that in a vertically aligned array of short carbon nanotubes (CNTs) grown on a metal substrate, we consider a frequency-dependent electric field, so that mode-specific propagation of phonons, in correspondence with the strained band structure and the dispersion curves, takes place. We perform theoretical calculations to validate this idea with a view to optimizing the field emission behavior of the CNT array. This is the first approach of its kind, and is in contrast to the conventional approach where a DC bias voltage is applied in order to observe field emission. A first set of experimental results presented in this paper gives a clear indication that phonon-assisted control of the field emission current in a CNT based thin film diode is possible.