959 results for Estimation process


Relevance: 30.00%

Abstract:

The radius of the elastic-plastic boundary was measured by the strain gage method around the cold-worked region in L72 aluminum alloy. The relative radial expansion was varied from 2.5 to 6.5 percent during the cold-working process using a mandrel and split sleeve. The existing theoretical studies in this area are reviewed. The experimental results are compared with the experimental data of other investigators and with various theoretical formulations. A model is developed to predict the radius of the elastic-plastic boundary, and the model is assessed by comparison with the present experiments.

Relevance: 30.00%

Abstract:

This thesis studies the effect of income inequality on economic growth by analyzing panel data from several countries with both short and long time dimensions. Two of the chapters study the direct effect of inequality on growth, and one chapter also looks at a possible indirect effect by assessing the impact of inequality on savings. In Chapter Two, the effect of inequality on growth is studied using a panel of 70 countries and the new EHII2008 inequality measure. The chapter addresses two problems that panel econometric studies on the economic effects of inequality have recently encountered: the comparability problem associated with the commonly used Deininger and Squire's Gini index, and the problem of estimating group-related elasticities in panel data. A simple way to 'bypass' the vagueness of parametric methods for estimating group-related parameters is presented: the group-related elasticities are estimated implicitly using a set of group-related instrumental variables. The estimation results with the new data and method indicate that the relationship between income inequality and growth is likely to be non-linear. Chapter Three uses the EHII2.1 inequality measure and a panel of annual time series observations from 38 countries to test for long-run equilibrium relations between inequality and the level of GDP. Panel unit root tests indicate that both the logarithmic EHII2.1 inequality measure and the logarithmic GDP per capita series are I(1) nonstationary processes. They are also found to be cointegrated, which implies a long-run equilibrium relation between them. The long-run growth elasticity of inequality is found to be negative in middle-income and rich economies, but the results for poor economies are inconclusive. In Chapter Four, macroeconomic data on nine developed economies spanning four decades from 1960 are used to study the effect of changes in the top income share on national and private savings, with the income share of the top 1% of the population used as a proxy for the distribution of income. The effect of inequality on private savings is found to be positive in the Nordic and Central European countries, but for the Anglo-Saxon countries the direction of the effect remains ambiguous. Inequality is found to affect national savings only in the Nordic countries, where the effect is positive.
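
To make the Chapter Three workflow concrete, here is a minimal sketch of the two-step test sequence (unit roots, then cointegration) using statsmodels; the CSV file and column names are hypothetical placeholders, not the thesis data.

```python
import pandas as pd
from statsmodels.tsa.stattools import adfuller, coint

# hypothetical single-country slice of the panel; column names are placeholders
df = pd.read_csv("country_panel.csv").dropna(subset=["log_ehii", "log_gdp_pc"])
log_ineq, log_gdp = df["log_ehii"], df["log_gdp_pc"]

# Step 1: ADF unit-root tests on levels and first differences (I(1) check)
for name, s in [("log inequality", log_ineq), ("log GDP per capita", log_gdp)]:
    print(name,
          "level p =", round(adfuller(s)[1], 3),
          "diff p =", round(adfuller(s.diff().dropna())[1], 3))

# Step 2: Engle-Granger test for cointegration between the two I(1) series
t_stat, p_value, _ = coint(log_ineq, log_gdp)
print(f"cointegration: t = {t_stat:.2f}, p = {p_value:.3f}")
```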

Relevance: 30.00%

Abstract:

The problem of estimating the time-dependent statistical characteristics of a random dynamical system is studied under two different settings. In the first, the system dynamics is governed by a differential equation parameterized by a random parameter, while in the second it is governed by a differential equation whose underlying parameter sequence is characterized by a continuous-time Markov chain. We propose, for the first time in the literature, stochastic approximation algorithms for estimating various time-dependent process characteristics of the system. In particular, we provide efficient estimators for quantities such as the mean, variance and distribution of the process at any given time, as well as the joint distribution and the autocorrelation coefficient at different times. A novel aspect of our approach is that we assume that information on the parameter model (i.e., its distribution in the first case and the transition probabilities of the Markov chain in the second) is not available in either case. This is unlike most other work in the literature, which assumes availability of such information. Also, most of the prior work in the literature is geared towards analyzing the steady-state behavior of the random dynamical system, while our focus is on the time-dependent statistical characteristics, which are in general difficult to obtain. We prove the almost sure convergence of our stochastic approximation scheme in each case to the true value of the quantity being estimated. We provide a general class of strongly consistent estimators for the aforementioned statistical quantities, with regular sample average estimators being a specific instance. We also present an application of the proposed scheme to a widely used model in population biology. Numerical experiments in this framework show that the time-dependent process characteristics obtained using our algorithm in each case exhibit excellent agreement with exact results. (C) 2010 Elsevier Inc. All rights reserved.
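
To convey the flavor of such estimators, here is a minimal sketch (not the paper's algorithm) of a Robbins-Monro-style stochastic approximation estimate of the time-t mean and variance for a toy ODE with a hidden random parameter; with step size a_n = 1/n the mean recursion reduces to the ordinary sample average, the "specific instance" mentioned above. The ODE and the parameter distribution are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_x_at_t(t, x0=1.0):
    """One realization of dx/dt = -theta * x with hidden theta ~ U(0.5, 1.5)."""
    theta = rng.uniform(0.5, 1.5)    # the estimator never sees this distribution
    return x0 * np.exp(-theta * t)

t, mean_est, var_est = 1.0, 0.0, 0.0
for n in range(1, 10_001):
    x = sample_x_at_t(t)
    a_n = 1.0 / n                                      # SA step size
    mean_est += a_n * (x - mean_est)                   # estimate of E[X(t)]
    var_est += a_n * ((x - mean_est) ** 2 - var_est)   # estimate of Var[X(t)]

print(f"mean(t={t}): {mean_est:.4f}, var(t={t}): {var_est:.5f}")
```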

Relevance: 30.00%

Abstract:

Nonlinear finite element analysis is used to estimate the damage due to low-velocity impact loading of laminated composite circular plates. The impact loading is treated as an equivalent static loading by assuming the impactor to be spherical and the contact to obey the Hertzian law. The stresses in the laminate are calculated using a 48 d.o.f. laminated composite sector element. Subsequently, the Tsai-Wu criterion is used to detect the zones of failure and the maximum stress criterion is used to identify the mode of failure. The material properties of the laminate are then degraded in the failed regions, and the stress analysis is performed again using the degraded ply properties. The iterative process is repeated until no further failure is detected in the laminate. The problem of a typical T300/N5208 composite [45°/0°/-45°/90°]s circular plate impacted by a spherical impactor is solved and the results are compared with experimental and analytical results available in the literature. The method proposed and the computer code developed can handle symmetric as well as unsymmetric laminates, and can easily be extended to the impact of composite rectangular plates, shell panels and shells.
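
The progressive-failure iteration described above has a simple control flow, sketched below. solve_fe, tsai_wu, max_stress_mode and degrade are hypothetical helpers standing in for the finite element machinery and the failure criteria; only the iterate-until-no-new-failure logic is shown.

```python
def progressive_failure(laminate, load):
    """Repeat stress analysis + degradation until no new element fails."""
    failed = set()
    while True:
        stresses = solve_fe(laminate, load)        # stress analysis pass
        new_failures = {
            e for e, s in stresses.items()
            if e not in failed and tsai_wu(s, laminate.strengths) >= 1.0
        }
        if not new_failures:                       # converged: damage state final
            return failed
        for e in new_failures:
            mode = max_stress_mode(stresses[e], laminate.strengths)
            degrade(laminate, e, mode)             # knock down ply properties
        failed |= new_failures
```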

Relevance: 30.00%

Abstract:

A study of environmental chloride and groundwater balance has been carried out to assess their relative merits for measuring average groundwater recharge in a humid climatic environment with a relatively shallow water table. The hybrid water fluctuation method allowed the hydrologic year to be split into a recharge (wet) season and a no-recharge (dry) season, first to appraise the specific yield during the dry season and, second, to estimate recharge from the water-table rise during the wet season. This well-established method was then used as a benchmark to assess the effectiveness of the chloride method in a humid forested environment. An effective specific yield of 0.08 was obtained for the study area; it reflects an effective basin-wide process and is insensitive to local heterogeneities in the aquifer system. The hybrid water fluctuation method gives an average recharge of 87.14 mm/year at the basin scale, which represents 5.7% of the annual rainfall. Recharge estimated with the chloride method varies between 16.24 and 236.95 mm/year, with an average of 108.45 mm/year, or 7% of the mean annual precipitation. The discrepancy between the recharge values estimated by the hybrid water fluctuation and the chloride mass balance methods is substantial, which could imply that the chloride mass balance method is ineffective in this humid environment.
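
Both estimates follow from standard formulas: the chloride mass balance gives R = P * Cl_p / Cl_gw, and the water-table fluctuation estimate gives R = Sy * Δh. The sketch below evaluates both; all input numbers except the specific yield of 0.08 are illustrative placeholders, not the study's field data.

```python
precip_mm = 1530.0           # annual rainfall P, mm (assumed)
cl_rain = 0.8                # chloride in rainfall, mg/L (assumed)
cl_groundwater = 11.0        # chloride in groundwater, mg/L (assumed)
specific_yield = 0.08        # Sy from the dry-season analysis
wet_season_rise_mm = 1090.0  # cumulative wet-season water-table rise (assumed)

recharge_cmb = precip_mm * cl_rain / cl_groundwater   # chloride mass balance
recharge_wtf = specific_yield * wet_season_rise_mm    # water-table fluctuation

print(f"chloride mass balance:   {recharge_cmb:.1f} mm/yr")
print(f"water-table fluctuation: {recharge_wtf:.1f} mm/yr")
```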

Relevance: 30.00%

Abstract:

Many problems of state estimation in structural dynamics permit a partitioning of the system states into nonlinear and conditionally linear substructures. This enables a part of the problem to be solved exactly, using the Kalman filter, and the remainder using Monte Carlo simulations. The present study develops an algorithm that combines sequential importance sampling (SIS) based particle filtering with Kalman filtering for a fairly general form of process equations, and demonstrates the application of a substructuring scheme to problems of hidden state estimation in structures with local nonlinearities, response sensitivity model updating in nonlinear systems, and characterization of residual displacements in instrumented inelastic structures. The paper also demonstrates theoretically that the sampling variance associated with the substructuring scheme does not exceed the sampling variance of Monte Carlo filtering without substructuring. (C) 2012 Elsevier Ltd. All rights reserved.
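
The Kalman/particle combination described here is, in spirit, a Rao-Blackwellised (marginalized) particle filter. Below is a compact generic sketch for scalar nonlinear and conditionally linear substates, with model n_k = f(n_{k-1}) + v, l_k = A(n_k) l_{k-1} + w, y_k = C(n_k) l_k + e; the functions f, A, C, the noise variances and the resampling rule are illustrative, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbpf_step(n_part, l_mean, l_cov, y, f, A, C, q, r, pv):
    """One step for N scalar particles: n_part, l_mean, l_cov are (N,) arrays.
    q, r, pv are process/measurement/particle noise variances (illustrative)."""
    N = n_part.size
    n_part = f(n_part) + rng.normal(0.0, np.sqrt(pv), N)  # propagate nonlinear state
    a, c = A(n_part), C(n_part)
    m_pred = a * l_mean                        # Kalman prediction (per particle)
    P_pred = a * l_cov * a + q
    S = c * P_pred * c + r                     # innovation variance
    innov = y - c * m_pred
    w = np.exp(-0.5 * innov**2 / S) / np.sqrt(2 * np.pi * S)
    w /= w.sum()                               # importance weights
    K = P_pred * c / S                         # Kalman gain and update
    l_mean = m_pred + K * innov
    l_cov = (1.0 - K * c) * P_pred
    idx = rng.choice(N, N, p=w)                # multinomial resampling
    return n_part[idx], l_mean[idx], l_cov[idx]
```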

Relevance: 30.00%

Abstract:

A new coupled approach is presented for modeling hydrogen bubble evolution and engulfment during an aluminum alloy solidification process in a micro-scale domain. An explicit enthalpy scheme is used to model the solidification process, coupled with a level-set method for tracking the hydrogen bubble evolution. Volume averaging techniques are used to model the mass, momentum, energy and species conservation equations in the chosen micro-scale domain. The interaction between the solid, liquid and gas interfaces in the system has been studied. Using an order-of-magnitude study of the growth rates of the bubble and solid interfaces, a criterion is developed to predict the bubble elongation which can occur during the engulfment phase. Using this model, we provide further evidence in support of a conceptual thought experiment reported in the literature regarding the estimation of the final pore shape as a function of typical casting cooling rates. The results from the proposed model are qualitatively compared with in situ experimental observations reported in the literature. The ability of the model to predict the growth and movement of a hydrogen bubble and its subsequent engulfment by a solidifying front has been demonstrated for the range of average cooling rates encountered in typical sand, permanent mold, and other casting processes. (C) 2012 Elsevier B.V. All rights reserved.
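
For readers unfamiliar with the enthalpy formulation, the sketch below shows a bare-bones explicit enthalpy update for 1-D solidification with an isothermal phase change; the level-set bubble tracking and the volume-averaged conservation equations of the actual model are omitted, and all material constants are illustrative rather than the alloy's.

```python
import numpy as np

nx, dx, dt = 100, 1e-5, 1e-6                  # grid spacing (m) and time step (s), assumed
k, rho, cp, L = 100.0, 2500.0, 1000.0, 3.9e5  # conductivity, density, heat capacity, latent heat
T_melt = 880.0                                # isothermal phase-change temperature (K), illustrative

T = np.full(nx, 900.0)                        # start as slightly superheated liquid
H = rho * cp * T + rho * L                    # enthalpy including the full latent heat

for _ in range(2000):
    T[0] = 300.0                              # chilled (Dirichlet) boundary
    lap = np.zeros(nx)
    lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    H += dt * k * lap                         # explicit enthalpy update
    # recover liquid fraction and temperature from the enthalpy field
    fl = np.clip((H - rho * cp * T_melt) / (rho * L), 0.0, 1.0)
    T = (H - rho * L * fl) / (rho * cp)

# fl now holds the liquid fraction; the solidifying front sits where fl crosses 0.5
```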

Relevance: 30.00%

Abstract:

This article presents the details of the estimation of fracture parameters for high strength concrete (HSC, HSC1) and ultra high strength concrete (UHSC). Brief details about the characterization of the ingredients of HSC, HSC1 and UHSC are provided. Experiments have been carried out on beams made of HSC, HSC1 and UHSC with various sizes and notch depths. Fracture characteristics such as the size-independent fracture energy (G_f), the size of the fracture process zone (c_f), the fracture toughness (K_Ic) and the critical crack tip opening displacement (CTOD_c) have been estimated from the experimental observations. From the studies, it is observed that (i) UHSC has high fracture energy and ductility in spite of a very low value of c_f; (ii) UHSC is much more homogeneous than the other concretes, because of the absence of coarse aggregates and its well-graded smaller-size particles; (iii) the critical SIF (K_Ic) values increase with beam depth and decrease with notch depth, and the increases in fracture toughness and CTOD_c are significant: about 7 times in HSC1 and about 10 times in UHSC compared with HSC; and (iv) for a notch-to-depth ratio of 0.1, Bazant's size effect model slightly overestimates the maximum failure loads compared with the experimental observations, while Karihaloo's model slightly underestimates them. For notch-to-depth ratios from 0.2 to 0.4 in the case of UHSC, both size effect models predict maximum failure loads close to the corresponding experimental values.
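
As a concrete illustration of how a size effect model is fitted to such data, the sketch below fits Bazant's size effect law, sigma_N = B*f_t / sqrt(1 + D/D_0), to peak nominal strengths of geometrically similar notched beams; the (D, sigma_N) pairs are illustrative placeholders, not the measured HSC/UHSC data.

```python
import numpy as np
from scipy.optimize import curve_fit

def bazant(D, Bft, D0):
    """Bazant's size effect law: nominal strength vs. structure size D."""
    return Bft / np.sqrt(1.0 + D / D0)

depths = np.array([100.0, 200.0, 400.0])   # beam depths D, mm (assumed)
sigma_n = np.array([6.1, 5.2, 4.2])        # measured nominal strengths, MPa (assumed)

(Bft, D0), _ = curve_fit(bazant, depths, sigma_n, p0=[8.0, 150.0])
print(f"B*ft = {Bft:.2f} MPa, D0 = {D0:.0f} mm")
# the fitted law then predicts the maximum failure load for a new beam size:
print("sigma_N at D = 800 mm:", bazant(800.0, Bft, D0))
```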

Relevance: 30.00%

Abstract:

Granger causality is increasingly being applied to multi-electrode neurophysiological and functional imaging data to characterize directional interactions between neurons and brain regions. For a multivariate dataset, one might be interested in different subsets of the recorded neurons or brain regions. According to the current estimation framework, for each subset, one conducts a separate autoregressive model fitting process, introducing the potential for unwanted variability and uncertainty. In this paper, we propose a multivariate framework for estimating Granger causality. It is based on spectral density matrix factorization and offers the advantage that the estimation of such a matrix needs to be done only once for the entire multivariate dataset. For any subset of recorded data, Granger causality can be calculated through factorizing the appropriate submatrix of the overall spectral density matrix.
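
As an illustration of the final step, the sketch below computes Geweke's frequency-domain Granger causality f_{y->x}(w) for a chosen pair from an already-factorized spectral matrix S(w) = H(w) Sigma H*(w); the spectral factorization routine itself (e.g., Wilson's algorithm) is assumed to be available and is represented here only by its outputs H and Sigma.

```python
import numpy as np

def granger_y_to_x(S, H, Sigma, x=0, y=1):
    """f_{y->x}(w) at each frequency for a 2x2 subsystem.
    S, H: (nfreq, 2, 2) spectral and transfer matrices; Sigma: (2, 2) noise
    covariance, all taken from a factorization S = H Sigma H*."""
    # left-multiply by P = [[1, 0], [-Sigma_yx/Sigma_xx, 1]] so the transformed
    # innovation covariance is block-diagonal; only the xx entry is needed here
    Htil_xx = H[:, x, x] + (Sigma[x, y] / Sigma[x, x]) * H[:, x, y]
    intrinsic = np.abs(Htil_xx) ** 2 * Sigma[x, x]   # power intrinsic to x
    return np.log(np.real(S[:, x, x]) / intrinsic)   # Geweke's measure
```

For any subset of channels, one would extract the corresponding submatrix of the overall spectral density matrix, refactorize it, and apply the same formula, in line with the framework described above.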

Relevance: 30.00%

Abstract:

The paper addresses the effect of particle size on tar generation in a fixed bed gasification system. Pyrolysis, a diffusion-limited process, depends on the heating rate and the surface area of the particle, which influence the release of the volatile fraction, leaving behind residual char. The flaming time has been estimated for different biomass samples. It is found that the flaming time for wood flakes is almost one fourth of that of coconut shells for fuel samples of the same equivalent diameter. The particle density of coconut shell is more than twice that of wood spheres, and almost four times that of wood flakes, which has a significant influence on the flaming time. The ratio of the particle surface area to that of an equivalent-diameter sphere is nearly two times higher for flakes than for wood pieces. Accounting for the density effect by normalizing with the particle density, the flaming rate is double for wood flakes or coconut shells compared with a wood sphere of equivalent diameter, owing to the increased surface area per unit volume of the particle. Experiments were conducted to estimate the tar content in the raw gas for wood flakes and standard wood pieces. It is observed that the tar level in the raw gas is about 80% higher for wood flakes than for wood pieces. The analysis suggests that the time for pyrolysis is lower for a particle with a higher surface area, which therefore undergoes fast pyrolysis, resulting in a higher tar fraction and a low char yield. Staged air flow gives better control of the residence time and lowers the tar in the raw gas. (C) 2014 International Energy Initiative. Published by Elsevier Inc. All rights reserved.
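
A quick geometric illustration (with made-up dimensions) of the surface-area argument: for the same equivalent-volume diameter, a flat flake exposes roughly twice the surface per unit volume of a sphere, consistent with the ratio quoted above.

```python
import math

def sphere_area_volume(d):
    return math.pi * d**2, math.pi * d**3 / 6.0

def flake_area_volume(l, w, t):
    return 2.0 * (l*w + l*t + w*t), l * w * t

l, w, t = 0.04, 0.04, 0.005            # flake dimensions in m (assumed)
a_f, v_f = flake_area_volume(l, w, t)
d_eq = (6.0 * v_f / math.pi) ** (1/3)  # equivalent-volume sphere diameter
a_s, v_s = sphere_area_volume(d_eq)

print(f"equivalent diameter: {d_eq*1000:.1f} mm")
print(f"surface/volume, flake : {a_f/v_f:.0f} 1/m")
print(f"surface/volume, sphere: {a_s/v_s:.0f} 1/m")
```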

Relevance: 30.00%

Abstract:

We consider the zero-crossing rate (ZCR) of a Gaussian process and establish a property relating the lagged ZCR (LZCR) to the corresponding normalized autocorrelation function. This is a generalization of Kedem's result for the lag-one case. For the specific case of a sinusoid in white Gaussian noise, we use the higher-order property between the lagged ZCR and the higher-lag autocorrelation to develop an iterative higher-order autoregressive filtering scheme, which stabilizes the ZCR and consequently provides robust estimates of the lagged autocorrelation. Simulation results show that the autocorrelation estimates converge in about 20 to 40 iterations even for low signal-to-noise ratios.
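
The underlying relation follows from the Gaussian orthant probability: for a zero-mean stationary Gaussian process, the fraction of lag-k sign changes gamma_k satisfies rho(k) = cos(pi * gamma_k). The sketch below is a minimal numerical check of this relation on an AR(1) process, not the paper's full iterative filtering scheme.

```python
import numpy as np

rng = np.random.default_rng(2)
n, rho = 100_000, 0.7
x = np.zeros(n)                      # AR(1): autocorrelation rho**k by construction
for t in range(1, n):
    x[t] = rho * x[t - 1] + rng.normal(0.0, np.sqrt(1.0 - rho**2))

for k in (1, 2, 3):
    gamma_k = np.mean(np.signbit(x[k:]) != np.signbit(x[:-k]))  # lag-k sign-change rate
    print(f"lag {k}: cos(pi * gamma) = {np.cos(np.pi * gamma_k):+.3f}, "
          f"true rho**k = {rho**k:+.3f}")
```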

Relevance: 30.00%

Abstract:

In this research work, we introduce a novel approach for phase estimation from noisy reconstructed interference fields in digital holographic interferometry using an unscented Kalman filter. Unlike conventionally used unwrapping algorithms and piecewise polynomial approximation approaches, this paper proposes, for the first time to the best of our knowledge, a signal-tracking approach for phase estimation. The state space model derived in this approach is inspired by the Taylor series expansion of the phase function as the process model, and by polar-to-Cartesian conversion as the measurement model. We have characterized our approach by simulations and validated the performance on experimental data (holograms) recorded under various practical conditions. Our study reveals that the proposed approach, when compared with various phase estimation methods available in the literature, outperforms them at lower SNR values (especially in the range 0-20 dB). It is also demonstrated with experimental data that the proposed approach is a better choice for estimating rapidly varying phase with high dynamic range and noise. (C) 2014 Optical Society of America
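
A stripped-down sketch of the tracking idea follows: the state is [phase, phase rate], the process model is a first-order Taylor expansion of the phase, and the measurement model is the polar-to-Cartesian map [cos(phi), sin(phi)] of a unit-amplitude fringe. It uses the filterpy library for brevity; all tuning constants are illustrative, not the paper's settings.

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

def fx(x, dt):                     # Taylor-series process model
    return np.array([x[0] + dt * x[1], x[1]])

def hx(x):                         # polar -> Cartesian measurement model
    return np.array([np.cos(x[0]), np.sin(x[0])])

points = MerweScaledSigmaPoints(n=2, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=2, dim_z=2, dt=1.0, fx=fx, hx=hx, points=points)
ukf.x = np.array([0.0, 0.05])
ukf.P *= 0.1
ukf.R = np.eye(2) * 0.05           # measurement noise (assumed)
ukf.Q = np.eye(2) * 1e-5           # process noise (assumed)

# track a noisy synthetic interference signal sample by sample
true_phase = np.cumsum(0.05 + 0.01 * np.random.randn(500))
for phi in true_phase:
    z = np.array([np.cos(phi), np.sin(phi)]) + 0.05 * np.random.randn(2)
    ukf.predict()
    ukf.update(z)
# ukf.x[0] now carries the (unwrapped) phase estimate
```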

Relevance: 30.00%

Abstract:

Modern technology has allowed real-time data collection in a variety of domains, ranging from environmental monitoring to healthcare. Consequently, there is a growing need for algorithms capable of performing inferential tasks in an online manner, continuously revising their estimates to reflect the current status of the underlying process. In particular, we are interested in constructing online and temporally adaptive classifiers capable of handling the possibly drifting decision boundaries arising in streaming environments. We first make a quadratic approximation to the log-likelihood that yields a recursive algorithm for fitting logistic regression online. We then suggest a novel way of equipping this framework with self-tuning forgetting factors. The resulting scheme is capable of tracking changes in the underlying probability distribution, adapting the decision boundary appropriately and hence maintaining high classification accuracy in dynamic or unstable environments. We demonstrate the scheme's effectiveness in both real and simulated streaming environments. © Springer-Verlag 2009.
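
A minimal sketch of the core recursion is given below: online logistic regression where a quadratic approximation to the log-likelihood yields a Newton-style update, and old information is discounted by a forgetting factor lam. The self-tuning of lam proposed in the paper is omitted; here lam is fixed, and the inverse-Hessian is maintained with a Sherman-Morrison rank-one update.

```python
import numpy as np

class OnlineLogit:
    def __init__(self, dim, lam=0.99):
        self.w = np.zeros(dim)
        self.P = np.eye(dim) * 10.0       # inverse-Hessian estimate
        self.lam = lam                    # forgetting factor in (0, 1]

    def update(self, x, y):               # x: feature vector, y: 0/1 label
        p = 1.0 / (1.0 + np.exp(-self.w @ x))
        r = max(p * (1.0 - p), 1e-6)      # local curvature weight
        self.P /= self.lam                # discount old information
        Px = self.P @ x                   # Sherman-Morrison rank-one update
        self.P -= np.outer(Px, Px) * r / (1.0 + r * (x @ Px))
        self.w += self.P @ x * (y - p)    # Newton-style gradient step

    def predict(self, x):
        return 1.0 / (1.0 + np.exp(-self.w @ x))
```

With lam = 1 this reduces to ordinary recursive fitting; lam < 1 lets the decision boundary drift with the stream, which is the adaptivity described above.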

Relevance: 30.00%

Abstract:

We present methods for fixed-lag smoothing using sequential importance sampling (SIS) on a discrete non-linear, non-Gaussian state space system with unknown parameters. Our particular application is in the field of digital communication systems. Each input data point is taken from a finite set of symbols, and the transmission medium is represented as a fixed filter with a finite impulse response (FIR), so a discrete state-space system is formed. Conventional Markov chain Monte Carlo (MCMC) techniques such as the Gibbs sampler are unsuitable for this task because they can only process a batch of data; since the data arrive sequentially, it is sensible to process them in that way. In addition, many communication systems are interactive, so there is a maximum latency that can be tolerated before a symbol is decoded. We demonstrate this method by simulation and compare its performance with existing techniques.
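
A toy sketch of the SIS idea in this setting is given below: each particle is a symbol history drawn from a finite alphabet, propagated through a known FIR channel, and weighted by the Gaussian likelihood of the received sample. The channel taps, noise level and resampling rule are illustrative; the unknown-parameter handling and the fixed-lag smoothing of the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
symbols = np.array([-1.0, 1.0])            # BPSK alphabet
h = np.array([1.0, 0.5, 0.2])              # FIR channel taps (assumed known here)
sigma, N, T = 0.3, 200, 50                 # noise std, particles, time steps

true_s = rng.choice(symbols, T)
y = np.convolve(true_s, h)[:T] + sigma * rng.normal(size=T)

particles = np.zeros((N, T))
logw = np.zeros(N)
for t in range(T):
    particles[:, t] = rng.choice(symbols, N)       # propose from the symbol prior
    lo = max(0, t - len(h) + 1)                    # taps overlapping the history
    pred = particles[:, lo:t + 1][:, ::-1] @ h[:t - lo + 1]
    logw += -0.5 * ((y[t] - pred) / sigma) ** 2    # sequential weight update
    w = np.exp(logw - logw.max()); w /= w.sum()
    if 1.0 / np.sum(w ** 2) < N / 2:               # resample when ESS drops
        idx = rng.choice(N, N, p=w)
        particles, logw = particles[idx], np.zeros(N)

w = np.exp(logw - logw.max()); w /= w.sum()
est = np.sign(w @ particles)                        # weighted symbol estimate
print("symbol error rate:", np.mean(est != true_s))
```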