Abstract:
The authors consider the channel estimation problem in the context of a linear equaliser designed for a frequency-selective channel, relying on the minimum bit-error-ratio (MBER) optimisation framework. Previous literature has shown that MBER-based signal detection may outperform its minimum-mean-square-error (MMSE) counterpart in terms of bit-error-ratio performance. In this study, they develop a framework for channel estimation by first discretising the parameter space and then posing it as a detection problem. Explicitly, the MBER cost function (CF) is derived and its performance studied when transmitting binary phase shift keying (BPSK) and quadrature phase shift keying (QPSK) signals. It is demonstrated that the MBER-CF-aided scheme is capable of outperforming existing MMSE- and least-squares-based solutions.
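As an illustrative aside (not the authors' exact formulation): for BPSK and a linear equaliser, the MBER framework typically minimises a kernel-smoothed estimate of the bit error probability over training data. A minimal sketch is given below; the kernel width rho, the data layout and the scaling by the tap norm are assumptions.

```python
import numpy as np
from scipy.stats import norm

def mber_cost(w, R, s, rho=0.1):
    """Kernel-smoothed BER estimate for a linear BPSK equaliser.

    w   : equaliser taps, shape (L,)
    R   : received sample vectors, shape (K, L), one row per training symbol
    s   : transmitted BPSK training symbols in {-1, +1}, shape (K,)
    rho : Parzen-window (kernel) width -- an illustrative choice
    """
    y = R @ w                                        # equaliser outputs
    signed = s * y / (rho * np.linalg.norm(w))       # sign-adjusted, normalised margins
    return np.mean(norm.sf(signed))                  # Q(.) = Gaussian tail probability
```

Discretising the channel parameter space, as described above, would then amount to evaluating such a cost for each candidate channel (through the equaliser it implies) and selecting the minimiser.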
Abstract:
An important question in kernel regression is one of estimating the order and bandwidth parameters from available noisy data. We propose to solve the problem within a risk estimation framework. Considering an independent and identically distributed (i.i.d.) Gaussian observations model, we use Stein's unbiased risk estimator (SURE) to estimate a weighted mean-square error (MSE) risk, and optimize it with respect to the order and bandwidth parameters. The two parameters are thus spatially adapted in such a manner that noise smoothing and fine structure preservation are simultaneously achieved. On the application side, we consider the problem of image restoration from uniform/non-uniform data, and show that the SURE approach to spatially adaptive kernel regression results in better quality estimation compared with its spatially non-adaptive counterparts. The denoising results obtained are comparable to those obtained using other state-of-the-art techniques, and in some scenarios, superior.
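For a linear smoother, the SURE principle reduces to a closed-form risk surrogate whose divergence term is simply the trace of the smoothing matrix. The sketch below selects a single global bandwidth for a Nadaraya-Watson smoother; it is a simplified, non-spatially-adaptive stand-in for the paper's order/bandwidth selection, with the Gaussian kernel and a known noise level sigma as assumptions.

```python
import numpy as np

def sure_bandwidth(x, y, sigma, bandwidths):
    """Pick a kernel-regression bandwidth by minimising Stein's unbiased
    risk estimate (SURE) under an i.i.d. Gaussian noise model.

    For a linear smoother f_hat = S_h @ y, the divergence term in SURE is
    trace(S_h). A global (not spatially adaptive) Nadaraya-Watson smoother
    with a Gaussian kernel is used purely for illustration.
    """
    n = len(x)
    best_h, best_risk = None, np.inf
    for h in bandwidths:
        K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
        S = K / K.sum(axis=1, keepdims=True)      # row-normalised smoother matrix
        f_hat = S @ y
        sure = np.sum((y - f_hat) ** 2) - n * sigma**2 + 2 * sigma**2 * np.trace(S)
        if sure < best_risk:
            best_h, best_risk = h, sure
    return best_h
```

Spatial adaptation, as in the paper, would repeat this selection locally (per pixel or per neighbourhood) rather than once globally.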
Abstract:
An extended Kalman filter based generalized state estimation approach is presented in this paper for accurately estimating the states of incoming high-speed targets such as ballistic missiles. A key advantage of this nine-state problem formulation is that it is generic and can capture spiraling as well as pure ballistic motion of targets without any change to the target model or the tuning parameters. A new nonlinear model predictive zero-effort-miss based guidance algorithm is also presented, in which both the zero-effort-miss and the time-to-go are predicted more accurately by first propagating the nonlinear target model (with estimated states) and the zero-effort interceptor model simultaneously. This information is then used to compute the necessary lateral acceleration. Extensive six-degrees-of-freedom simulation experiments, which include noisy seeker measurements, a nonlinear dynamic inversion based autopilot for the interceptor along with appropriate actuator and sensor models, and magnitude and rate saturation limits for the fin deflections, show that near-zero miss distance (i.e., hit-to-kill level performance) can be obtained when these two new techniques are applied together. Comparison studies with an augmented proportional navigation based guidance show that the proposed model predictive guidance also leads to substantial savings in control energy.
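For reference, a generic extended Kalman filter predict/update cycle of the kind underlying such estimators is sketched below; the nine-state target model, measurement model and noise covariances are problem-specific and are left here as user-supplied functions (illustrative placeholders, not the authors' model).

```python
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    """One extended Kalman filter cycle: propagate the state, then correct with measurement z.

    f, h         : nonlinear state-transition and measurement functions
    F_jac, H_jac : their Jacobians evaluated at the current estimate
    Q, R         : process and measurement noise covariances
    """
    # Predict
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # Update
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R                   # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

The filtered state would then feed the model predictive guidance step by forward-propagating the target and interceptor models to obtain the zero-effort-miss and time-to-go.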
Abstract:
The problem of time variant reliability analysis of randomly parametered and randomly driven nonlinear vibrating systems is considered. The study combines two Monte Carlo variance reduction strategies into a single framework to tackle the problem. The first of these strategies is based on the application of the Girsanov transformation to account for the randomness in dynamic excitations, and the second approach is fashioned after the subset simulation method to deal with randomness in system parameters. Illustrative examples include study of single/multi degree of freedom linear/non-linear inelastic randomly parametered building frame models driven by stationary/non-stationary, white/filtered white noise support acceleration. The estimated reliability measures are demonstrated to compare well with results from direct Monte Carlo simulations.
Abstract:
The paper addresses the effect of particle size on tar generation in a fixed bed gasification system. Pyrolysis, a diffusion-limited process, depends on the heating rate and the particle surface area, which influence the release of the volatile fraction, leaving behind residual char. The flaming time has been estimated for different biomass samples. It is found that the flaming time for wood flakes is almost one fourth that of coconut shells for fuel samples of the same equivalent diameter. The particle density of coconut shell is more than twice that of wood spheres, and almost four times that of wood flakes, which has a significant influence on the flaming time. The ratio of the particle surface area to that of an equivalent-diameter sphere is nearly twice as high for flakes as for wood pieces. When the flaming rate is normalized by particle density, it is double for wood flakes and coconut shells compared with the wood sphere of equivalent diameter, owing to the increased surface area per unit volume of the particle. Experiments were conducted to estimate the tar content in the raw gas for wood flakes and standard wood pieces. It is observed that the tar level in the raw gas is about 80% higher for wood flakes than for wood pieces. The analysis suggests that the pyrolysis time is shorter for particles with higher surface area, which undergo a fast pyrolysis process resulting in a higher tar fraction and lower char yield. Staged air flow increases the residence time, giving better control and lower tar in the raw gas.
Abstract:
The current work addresses the use of producer gas, a bio-derived gaseous alternative fuel, in engines designed for natural gas, derived from diesel engine frames. The impact of producer gas use on general engine performance, with specific focus on turbocharging, is addressed. The operation of a particular engine frame with diesel, natural gas and producer gas indicates that the peak load achieved is highest with diesel fuel (in compression ignition mode), followed by natural gas and producer gas (both in spark-ignited mode). Detailed analysis of the engine power de-rating on fuelling with natural gas and producer gas indicates that the change in compression ratio (migration from compression-ignited to spark-ignited mode), the difference in mixture calorific value and turbocharger mismatch are the primary contributing factors. The largest de-rating occurs due to turbocharger mismatch. Turbocharger selection and optimization is identified as the strategy to recover the non-thermodynamic power loss, identified as the recovery potential (the loss due to mixture calorific value and turbocharger mismatch), on operating the engine with a fuel different from the base fuel. A turbocharged, after-cooled, six-cylinder, 5.9 l, 90 kWe (diesel rating) engine (12.2 bar BMEP) is available commercially as a naturally aspirated natural gas engine delivering a peak load of 44.0 kWe (6.0 bar BMEP). The engine delivers a load of 27.3 kWe with producer gas in naturally aspirated mode. On charge boosting the engine with a turbocharger similar in configuration to the diesel engine turbocharger, the peak load delivered with producer gas is 36 kWe (4.8 bar BMEP), indicating a de-rating of about 60% from the baseline diesel mode. Estimation of the knock-limited peak load for producer gas-fuelled operation on the engine frame using a Wiebe function-based zero-dimensional code indicates a knock-limited peak load of 76 kWe, indicating the potential to recover about 40 kWe. As part of the recovery strategy, optimizing the ignition timing for maximum brake torque, based on both spark sweep tests and established combustion descriptors, together with engine-turbocharger matching for producer gas-fuelled operation, resulted in a knock-limited peak load of 72.8 kWe (9.9 bar BMEP) at a compressor pressure ratio of 2.30. The de-rating of about 17.0 kWe compared to the diesel rating is attributed to the reduction in compression ratio. With load recovery, the specific biomass consumption reduces from 1.2 kg/kWh to 1.0 kg/kWh, an improvement of over 16%, while the engine thermal efficiency increases from 28% to 32%. The thermodynamic analysis of the compressor and the turbine indicates isentropic efficiencies of 74.5% and 73%, respectively.
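For reference, the Wiebe function mentioned above prescribes the cumulative mass fraction burned as a function of crank angle. The sketch below uses the standard single-Wiebe form; the efficiency factor a, form factor m, ignition angle and burn duration are generic placeholder values, not the values calibrated for producer gas in the study.

```python
import numpy as np

def wiebe_burn_fraction(theta, theta_ign=-20.0, delta_theta=60.0, a=5.0, m=2.0):
    """Cumulative mass fraction burned vs. crank angle (degrees) per the single
    Wiebe function: x_b = 1 - exp(-a * ((theta - theta_ign)/delta_theta)**(m+1)).
    Parameter values here are illustrative, not calibrated to producer gas."""
    frac = np.clip((theta - theta_ign) / delta_theta, 0.0, None)
    return 1.0 - np.exp(-a * frac ** (m + 1))

# Example: burn fraction at TDC for combustion starting 20 deg bTDC over a 60 deg burn duration
print(wiebe_burn_fraction(0.0))
```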
Abstract:
In this paper, we propose FeatureMatch, a generalised approximate nearest-neighbour field (ANNF) computation framework between a source and a target image. The proposed algorithm can estimate ANNF maps between any image pairs, not necessarily related. This generalisation is achieved through appropriate spatial-range transforms. To compute ANNF maps, global colour adaptation is applied as a range transform on the source image. Image patches from the pair of images are approximated using low-dimensional features, which are used along with a KD-tree to estimate the ANNF map. This ANNF map is further improved based on image coherency and spatial transforms. The proposed generalisation enables us to handle a wider range of vision applications which have not been tackled using the ANNF framework. We illustrate two such applications, namely: 1) optic disk detection and 2) super resolution. The first application deals with medical imaging, where we locate optic disks in retinal images using a healthy optic disk image as the common target image. The second application deals with super resolution of synthetic images using a common source image as dictionary. We make use of ANNF mappings in both these applications and show experimentally that our proposed approaches are faster and more accurate than state-of-the-art techniques.
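A minimal sketch of the KD-tree matching step is given below: patches from both images are summarised by a handful of coarse features and each source patch is assigned its nearest target patch. The specific features (block means) and the patch size are illustrative assumptions, and the range transform, coherency and spatial-transform refinements described above are omitted.

```python
import numpy as np
from scipy.spatial import cKDTree

def annf_map(src, tgt, patch=8, dim=6):
    """Approximate nearest-neighbour field from src to tgt (2-D grayscale arrays).
    Each patch is summarised by `dim` coarse features: here simply the means of
    a 2x3 split of the patch -- an illustrative stand-in for the paper's features."""
    def features(img):
        h, w = img.shape
        feats, coords = [], []
        for y in range(0, h - patch + 1):
            for x in range(0, w - patch + 1):
                p = img[y:y + patch, x:x + patch]
                blocks = [b for row in np.array_split(p, 2, axis=0)
                          for b in np.array_split(row, 3, axis=1)]
                feats.append([b.mean() for b in blocks][:dim])
                coords.append((y, x))
        return np.asarray(feats), coords

    f_src, c_src = features(src)
    f_tgt, c_tgt = features(tgt)
    tree = cKDTree(f_tgt)
    _, idx = tree.query(f_src)          # nearest target patch for every source patch
    return {c_src[i]: c_tgt[j] for i, j in enumerate(idx)}
```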
Abstract:
Periodic estimation, monitoring and reporting of the area under forest and plantation types and of afforestation rates are critical for forest and biodiversity conservation, sustainable forest management and meeting international commitments. This article is aimed at assessing the adequacy of the current monitoring and reporting approach adopted in India in the context of new challenges of conservation and reporting to international conventions and agencies. The analysis shows that the current mode of monitoring and reporting of forest area is inadequate to meet the national and international requirements. India could potentially be over-reporting the area under forests by including many non-forest tree categories such as commercial plantations of coconut, cashew, coffee and rubber, and fruit orchards. India may also be under-reporting deforestation by reporting only gross forest area at the state and national levels. There is a need for monitoring and reporting of forest cover, deforestation and afforestation rates according to categories such as (i) natural/primary forest, (ii) secondary/degraded forests, (iii) forest plantations, (iv) commercial plantations, (v) fruit orchards and (vi) scattered trees.
Abstract:
We estimate the distribution of ice thickness for a Himalayan glacier using surface velocities, slope and the ice flow law. Surface velocities over Gangotri Glacier were estimated using sub-pixel correlation of Landsat TM and ETM+ imagery. Velocities range from ~14-85 m a⁻¹ in the accumulation region to ~20-30 m a⁻¹ near the snout. Depth profiles were calculated using the equation of laminar flow. Thickness varies from ~540 m in the upper reaches to ~50-60 m near the snout. The volume of the glacier is estimated to be 23.2 ± 4.2 km³.
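For reference, the laminar-flow relation referred to above ties surface velocity to thickness through Glen's flow law and the basal shear stress. A minimal sketch, with typical literature values for the flow-rate factor, shape factor and ice density (assumptions, not the study's calibrated values), is:

```python
import numpy as np

def ice_thickness(u_s, slope_deg, A=3.24e-24, n=3, f=0.8, rho=900.0, g=9.8):
    """Ice thickness H (m) from surface velocity u_s (m/s) via the laminar-flow
    relation u_s = (2A/(n+1)) * tau_b**n * H, with basal shear stress
    tau_b = f * rho * g * H * sin(alpha). Solving for H gives
    H = [ u_s (n+1) / (2A (f rho g sin alpha)**n) ]**(1/(n+1)).
    A, f and rho are typical literature values, not necessarily those of the study."""
    alpha = np.radians(slope_deg)
    return (u_s * (n + 1) / (2 * A * (f * rho * g * np.sin(alpha)) ** n)) ** (1 / (n + 1))

# Example: a surface velocity of ~60 m/a (converted to m/s) on a 10-degree slope
print(ice_thickness(60 / (365.25 * 24 * 3600), 10.0))
```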
Abstract:
We consider the zero-crossing rate (ZCR) of a Gaussian process and establish a property relating the lagged ZCR (LZCR) to the corresponding normalized autocorrelation function. This is a generalization of Kedem's result for the lag-one case. For the specific case of a sinusoid in white Gaussian noise, we use the higher-order property between the lagged ZCR and the higher-lag autocorrelation to develop an iterative higher-order autoregressive filtering scheme, which stabilizes the ZCR and consequently provides robust estimates of the lagged autocorrelation. Simulation results show that the autocorrelation estimates converge in about 20 to 40 iterations even for low signal-to-noise ratio.
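The underlying relation can be illustrated compactly: for a zero-mean stationary Gaussian sequence, the probability of a sign change at lag k equals arccos(rho_k)/pi (the Gaussian orthant probability), so rho_k = cos(pi * D_k) with D_k the lagged zero-crossing rate. A minimal sketch, without the paper's iterative autoregressive filtering, is:

```python
import numpy as np

def autocorr_from_lzcr(x, k):
    """Estimate the lag-k normalised autocorrelation of a zero-mean stationary
    Gaussian sequence from its lagged zero-crossing rate, via the cosine formula
    rho_k = cos(pi * D_k), where D_k is the fraction of sign changes between
    x[t] and x[t+k]."""
    D_k = np.mean(np.sign(x[:-k]) != np.sign(x[k:]))
    return np.cos(np.pi * D_k)

# Example: Gaussian AR(1) process with known lag-1 autocorrelation 0.7
rng = np.random.default_rng(0)
n, rho = 100000, 0.7
x = np.zeros(n)
for t in range(1, n):
    x[t] = rho * x[t - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()
print(autocorr_from_lzcr(x, 1))   # close to 0.7
```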
Abstract:
Bending at the valence angle N-C-alpha-C' (tau) is a known control feature for attenuating the stability of the rare intramolecular hydrogen-bonded pseudo five-membered ring C5 structures, the so-called 2.0(5) helices, at Aib. The competitive 3(10)-helical structures still predominate over the C5 structures at Aib for most values of tau. However, at Aib*, a mimic of Aib where the carbonyl O of Aib is replaced with an imidate N (in 5,6-dihydro-4H-1,3-oxazine = Oxa), in the peptidomimic Piv-Pro-Aib*-Oxa (1), the C(5)i structure is persistent both in crystals and in solution. Here we show that the i -> i hydrogen bond energy is a more determinant control for the relative stability of the C5 structure, and we estimate its value to be 18.5 ± 0.7 kJ/mol at Aib* in 1 through the computational isodesmic reaction approach, using two independent sets of theoretical isodesmic reactions.
Abstract:
It is essential to accurately estimate the working set size (WSS) of an application for various optimizations, such as partitioning cache among virtual machines or reducing leakage power dissipated in an over-allocated cache by switching it off. However, state-of-the-art heuristics such as average memory access latency (AMAL) or cache miss ratio (CMR) are poorly correlated with the WSS of an application due to 1) over-sized caches and 2) their dispersed nature. Past studies focus on estimating the WSS of an application executing on a uniprocessor platform. Estimating the same for a chip multiprocessor (CMP) with a large dispersed cache is challenging due to the presence of concurrently executing threads/processes. Hence, we propose a scalable, highly accurate method to estimate the WSS of an application, which we call the "tagged WSS (TWSS)" estimation method. We demonstrate the use of TWSS to switch off the over-allocated cache ways in Static and Dynamic Non-Uniform Cache Architectures (SNUCA, DNUCA) on a tiled CMP. In our implementation of adaptable-way SNUCA and DNUCA caches, the decision to alter associativity is taken by each L2 controller; hence, this approach scales better with the number of cores present on a CMP. It gives overall (geometric mean) 26% and 19% higher energy-delay product savings compared to the AMAL and CMR heuristics on SNUCA, respectively.