971 results for maximum pseudolikelihood (MPL) estimation


Relevance: 20.00%

Abstract:

An important question in kernel regression is that of estimating the order and bandwidth parameters from available noisy data. We propose to solve the problem within a risk estimation framework. Considering an independent and identically distributed (i.i.d.) Gaussian observations model, we use Stein's unbiased risk estimator (SURE) to estimate a weighted mean-square error (MSE) risk, and optimize it with respect to the order and bandwidth parameters. The two parameters are thus spatially adapted in such a manner that noise smoothing and fine-structure preservation are achieved simultaneously. On the application side, we consider the problem of image restoration from uniform/non-uniform data, and show that the SURE approach to spatially adaptive kernel regression yields better-quality estimates than its spatially non-adaptive counterparts. The denoising results obtained are comparable to those of other state-of-the-art techniques, and in some scenarios superior.
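The core idea can be illustrated with a minimal sketch: for a linear smoother, SURE is an unbiased estimate of the MSE risk computable without the clean signal, so the bandwidth minimizing it can be selected from the noisy data alone. This 1-D example with a fixed (spatially non-adaptive) Gaussian-kernel smoother is an illustration of the principle, not the paper's spatially adaptive algorithm.

```python
import numpy as np

def sure_for_bandwidth(y, x, h, sigma):
    """SURE for a linear Gaussian-kernel smoother y_hat = S @ y.

    For i.i.d. Gaussian noise of variance sigma^2,
      SURE = ||y_hat - y||^2 / N + 2*sigma^2*trace(S)/N - sigma^2
    is an unbiased estimate of the MSE risk E||y_hat - clean||^2 / N.
    """
    W = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    S = W / W.sum(axis=1, keepdims=True)      # row-normalized smoother matrix
    y_hat = S @ y
    n = len(y)
    return np.mean((y_hat - y) ** 2) + 2 * sigma**2 * np.trace(S) / n - sigma**2

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
sigma = 0.2
y = np.sin(2 * np.pi * x) + sigma * rng.standard_normal(x.size)

bandwidths = np.array([0.002, 0.005, 0.01, 0.02, 0.05, 0.1, 0.2])
sure_vals = [sure_for_bandwidth(y, x, h, sigma) for h in bandwidths]
h_star = bandwidths[int(np.argmin(sure_vals))]   # data-driven bandwidth choice
```

The paper's spatially adaptive scheme would instead optimize a weighted (local) version of this risk at every pixel.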

Relevance: 20.00%

Abstract:

The maximum entropy approach to classification is well studied in applied statistics and machine learning, and almost all the methods in the literature are discriminative in nature. In this paper, we introduce a maximum entropy classification method with feature selection for large-dimensional data, such as text datasets, that is generative in nature. To tackle the curse of dimensionality of large datasets, we employ a conditional independence assumption (naive Bayes) and perform feature selection simultaneously, by enforcing a `maximum discrimination' between estimated class-conditional densities. For two-class problems, the proposed method uses Jeffreys (J) divergence to discriminate the class-conditional densities. To extend the method to the multi-class case, we propose a new approach based on a multi-distribution divergence: we replace Jeffreys divergence with Jensen-Shannon (JS) divergence to discriminate the conditional densities of multiple classes. To reduce computational complexity, we employ a modified Jensen-Shannon divergence (JS(GM)) based on the AM-GM inequality, and show that the resulting divergence is a natural generalization of Jeffreys divergence to the multiple-distribution case. As for theoretical justification, we show that when one intends to select the best features in a generative maximum entropy approach, maximum discrimination using J-divergence emerges naturally in binary classification. Performance and comparative studies of the proposed algorithms on large-dimensional text and gene-expression datasets show that our methods scale well with large-dimensional data.
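The divergences named above are easy to compute for discrete distributions. The sketch below implements Jeffreys divergence, the standard (arithmetic-mean) JS divergence, and a geometric-mean variant following the AM-to-GM substitution the abstract describes; the exact normalization used in the paper is not given here, so the `js_gm` form (unnormalized geometric mean) is an assumption, chosen because it makes the two-distribution case reduce to J/4, consistent with the claimed generalization of Jeffreys divergence.

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q) for discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def jeffreys(p, q):
    """Symmetric Jeffreys divergence J(p, q) = KL(p||q) + KL(q||p)."""
    return kl(p, q) + kl(q, p)

def js(dists, weights=None):
    """Jensen-Shannon divergence of several distributions,
    using the arithmetic-mean mixture."""
    dists = np.asarray(dists, float)
    w = np.full(len(dists), 1 / len(dists)) if weights is None else np.asarray(weights, float)
    m = w @ dists                               # arithmetic-mean mixture
    return float(sum(wi * kl(p, m) for wi, p in zip(w, dists)))

def js_gm(dists, weights=None):
    """Modified JS divergence with the arithmetic mean replaced by the
    (unnormalized) geometric mean -- a sketch of the AM-GM substitution
    described in the abstract.  For two equally weighted distributions
    this equals jeffreys(p, q) / 4."""
    dists = np.asarray(dists, float)
    w = np.full(len(dists), 1 / len(dists)) if weights is None else np.asarray(weights, float)
    g = np.prod(dists ** w[:, None], axis=0)    # geometric-mean "mixture"
    return float(sum(wi * kl(p, g) for wi, p in zip(w, dists)))
```

The two-distribution identity (`js_gm([p, q]) == jeffreys(p, q) / 4`) follows by expanding the logarithm of the geometric mean.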

Relevance: 20.00%

Abstract:

An extended Kalman filter based generalized state estimation approach is presented in this paper for accurately estimating the states of incoming high-speed targets such as ballistic missiles. A key advantage of this nine-state problem formulation is that it is generic: it can capture spiraling as well as pure ballistic motion of targets without any change to the target model or the tuning parameters. A new nonlinear model predictive zero-effort-miss based guidance algorithm is also presented, in which both the zero-effort-miss and the time-to-go are predicted more accurately by first propagating the nonlinear target model (with estimated states) and the zero-effort interceptor model simultaneously. This information is then used to compute the necessary lateral acceleration. Extensive six-degrees-of-freedom simulation experiments, which include noisy seeker measurements, a nonlinear dynamic inversion based autopilot for the interceptor, appropriate actuator and sensor models, and magnitude and rate saturation limits for the fin deflections, show that near-zero miss distance (i.e., hit-to-kill performance) can be obtained when these two new techniques are applied together. Comparison studies with augmented proportional navigation based guidance show that the proposed model predictive guidance also leads to substantial savings in control energy.
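For intuition, the classical zero-effort-miss guidance law on which such schemes build can be stated in a few lines. This is the textbook linear, constant-velocity form, not the paper's nonlinear model predictive version (which propagates the full target and interceptor models to predict ZEM and time-to-go).

```python
import numpy as np

def zem_guidance(r_rel, v_rel, t_go, gain=3.0):
    """Zero-effort-miss guidance for linear, constant-velocity kinematics.

    ZEM is the miss that would occur if no further control were applied:
        ZEM = r_rel + v_rel * t_go
    and the commanded lateral acceleration is a = N * ZEM / t_go**2.
    """
    zem = r_rel + v_rel * t_go
    return zem, gain * zem / t_go**2

# Example: 1000 m lateral offset, closing at -40 m/s, 10 s to go.
zem, a_cmd = zem_guidance(np.array([1000.0]), np.array([-40.0]), 10.0)
# zem = 600 m of residual miss; a_cmd = 18 m/s^2 of lateral acceleration.
```

In the paper's scheme the straight-line prediction above is replaced by numerical propagation of the estimated (possibly spiraling) target states.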

Relevance: 20.00%

Abstract:

The problem of time variant reliability analysis of randomly parametered and randomly driven nonlinear vibrating systems is considered. The study combines two Monte Carlo variance reduction strategies into a single framework to tackle the problem. The first of these strategies is based on the application of the Girsanov transformation to account for the randomness in dynamic excitations, and the second approach is fashioned after the subset simulation method to deal with randomness in system parameters. Illustrative examples include study of single/multi degree of freedom linear/non-linear inelastic randomly parametered building frame models driven by stationary/non-stationary, white/filtered white noise support acceleration. The estimated reliability measures are demonstrated to compare well with results from direct Monte Carlo simulations. (C) 2014 Elsevier Ltd. All rights reserved.
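The Girsanov transformation is a change of probability measure for the driving noise; its variance reduction effect can be illustrated with a static analogue, importance sampling of a rare event under a shifted Gaussian with a likelihood-ratio correction. The sketch below is illustrative only and is not the paper's dynamic implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
threshold = 3.0          # rare event: X > 3 for X ~ N(0, 1)

# Naive Monte Carlo: very few samples land in the failure region,
# so the estimator has large relative variance.
x = rng.standard_normal(n)
p_naive = np.mean(x > threshold)

# Change of measure: sample from N(mu, 1) and reweight each sample by the
# likelihood ratio dP/dQ = exp(-mu*x + mu^2/2) -- the static analogue of
# the Girsanov correction applied to a drifted stochastic excitation.
mu = threshold
y = rng.standard_normal(n) + mu
weights = np.exp(-mu * y + 0.5 * mu**2)
p_is = np.mean((y > threshold) * weights)
# p_is estimates P(X > 3) = 1 - Phi(3) ~ 1.35e-3 with far lower variance.
```

In the dynamic setting of the paper, the drift is added to the white-noise excitation of the vibrating system and the same likelihood-ratio correction keeps the reliability estimate unbiased; subset simulation then handles the random system parameters.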

Relevance: 20.00%

Abstract:

The current work addresses the use of producer gas, a bio-derived gaseous alternative fuel, in engines designed for natural gas and derived from diesel engine frames. The impact of producer gas on general engine performance, with specific focus on turbocharging, is addressed. Operation of a particular engine frame with diesel, natural gas and producer gas indicates that the peak load achieved is highest with diesel fuel (in compression ignition mode), followed by natural gas and producer gas (both in spark ignition mode). Detailed analysis of the engine power de-rating on fuelling with natural gas and producer gas indicates that the change in compression ratio (migration from compression to spark ignition mode), the difference in mixture calorific value and turbocharger mismatch are the primary contributing factors, with turbocharger mismatch causing the largest de-rating. Turbocharger selection and optimization is identified as the strategy to recover the non-thermodynamic power loss, termed the recovery potential (the loss due to mixture calorific value and turbocharger mismatch), when operating the engine on a fuel different from the base fuel. A turbocharged, after-cooled, six-cylinder, 5.9 l, 90 kWe (diesel rating) engine (12.2 bar BMEP) is available commercially as a naturally aspirated natural gas engine delivering a peak load of 44.0 kWe (6.0 bar BMEP). The same engine delivers 27.3 kWe with producer gas in naturally aspirated mode. On charge boosting the engine with a turbocharger similar in configuration to the diesel engine turbocharger, the peak load delivered with producer gas is 36 kWe (4.8 bar BMEP), a de-rating of about 60% relative to the baseline diesel mode. Estimation of the knock-limited peak load for producer gas fuelled operation on this engine frame, using a Wiebe function-based zero-dimensional code, indicates a knock-limited peak load of 76 kWe, i.e. a potential to recover about 40 kWe.
As part of the recovery strategy, optimizing the ignition timing for maximum brake torque (based on both spark sweep tests and established combustion descriptors) and matching the engine and turbocharger for producer gas fuelled operation resulted in a knock-limited peak load of 72.8 kWe (9.9 bar BMEP) at a compressor pressure ratio of 2.30. The de-rating of about 17.0 kWe relative to the diesel rating is attributed to the reduction in compression ratio. With load recovery, the specific biomass consumption reduces from 1.2 kg/kWh to 1.0 kg/kWh, an improvement of over 16%, while the engine thermal efficiency increases from 28% to 32%. Thermodynamic analysis of the compressor and the turbine indicates isentropic efficiencies of 74.5% and 73%, respectively.
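The de-rating and improvement figures quoted above follow from simple arithmetic on the stated loads; a quick check, using only values from the text:

```python
# Figures quoted in the abstract (kWe unless noted).
diesel_rating = 90.0           # diesel (compression ignition) rating
pg_turbo_initial = 36.0        # producer gas with diesel-matched turbocharger
pg_after_matching = 72.8       # producer gas after ignition + turbo matching

derating_initial = (diesel_rating - pg_turbo_initial) / diesel_rating
# -> 0.60, i.e. the "about 60%" de-rating quoted in the text
derating_final = diesel_rating - pg_after_matching
# -> 17.2 kWe, the residual "about 17.0 kWe" attributed to compression ratio

# Specific biomass consumption improvement with load recovery.
sbc_before, sbc_after = 1.2, 1.0   # kg/kWh
sbc_gain = (sbc_before - sbc_after) / sbc_before   # -> ~0.167, "over 16%"
```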

Relevance: 20.00%

Abstract:

In this paper, we propose FeatureMatch, a generalised approximate nearest-neighbour field (ANNF) computation framework between a source and target image. The proposed algorithm can estimate ANNF maps between any image pairs, not necessarily related. This generalisation is achieved through appropriate spatial-range transforms. To compute ANNF maps, global colour adaptation is applied as a range transform on the source image. Image patches from the pair of images are approximated using low-dimensional features, which are used along with a KD-tree to estimate the ANNF map. This ANNF map is further improved based on image coherency and spatial transforms. The proposed generalisation enables us to handle a wider range of vision applications that have not previously been tackled using the ANNF framework. We illustrate two such applications, namely 1) optic disk detection and 2) super-resolution. The first application deals with medical imaging, where we locate optic disks in retinal images using a healthy optic disk image as the common target image. The second application deals with super-resolution of synthetic images using a common source image as dictionary. We make use of ANNF mappings in both applications and show experimentally that the proposed approaches are faster and more accurate than state-of-the-art techniques.
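The core ANNF idea (summarize each patch by a low-dimensional feature, then match in feature space) can be sketched compactly. The features below (patch mean and standard deviation) and the brute-force search are deliberate simplifications for self-containment; the paper uses richer features and a KD-tree, plus coherency refinement.

```python
import numpy as np

def patch_features(img, p=8):
    """Summarize each non-overlapping p x p patch by a 2-D feature
    (mean, std) -- a stand-in for the paper's low-dimensional features."""
    h, w = img.shape
    feats, coords = [], []
    for i in range(0, h - p + 1, p):
        for j in range(0, w - p + 1, p):
            patch = img[i:i + p, j:j + p]
            feats.append([patch.mean(), patch.std()])
            coords.append((i, j))
    return np.array(feats), coords

def annf(src, tgt, p=8):
    """Map every source patch to its nearest target patch in feature space.
    Brute-force search stands in for the KD-tree used in the paper."""
    fs, cs = patch_features(src, p)
    ft, ct = patch_features(tgt, p)
    d = ((fs[:, None, :] - ft[None, :, :]) ** 2).sum(-1)  # pairwise distances
    return {cs[i]: ct[int(j)] for i, j in enumerate(d.argmin(axis=1))}

rng = np.random.default_rng(0)
src = rng.random((32, 32))
mapping = annf(src, src)        # mapping an image to itself: identity expected
```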

Relevance: 20.00%

Abstract:

Periodic estimation, monitoring and reporting on area under forest and plantation types and afforestation rates are critical to forest and biodiversity conservation, sustainable forest management and for meeting international commitments. This article is aimed at assessing the adequacy of the current monitoring and reporting approach adopted in India in the context of new challenges of conservation and reporting to international conventions and agencies. The analysis shows that the current mode of monitoring and reporting of forest area is inadequate to meet the national and international requirements. India could be potentially over-reporting the area under forests by including many non-forest tree categories such as commercial plantations of coconut, cashew, coffee and rubber, and fruit orchards. India may also be under-reporting deforestation by reporting only gross forest area at the state and national levels. There is a need for monitoring and reporting of forest cover, deforestation and afforestation rates according to categories such as (i) natural/primary forest, (ii) secondary/degraded forests, (iii) forest plantations, (iv) commercial plantations, (v) fruit orchards and (vi) scattered trees.

Relevance: 20.00%

Abstract:

Climate change impact assessment studies involve downscaling large-scale atmospheric predictor variables (LSAPVs) simulated by general circulation models (GCMs) to site-scale meteorological variables. This article presents a least-square support vector machine (LS-SVM)-based methodology for multi-site downscaling of maximum and minimum daily temperature series. The methodology involves (1) delineation of sites in the study area into clusters based on the correlation structure of predictands, (2) downscaling LSAPVs to monthly time series of predictands at a representative site identified in each cluster, (3) translation of the downscaled information in each cluster from the representative site to the other sites using LS-SVM inter-site regression relationships, and (4) disaggregation of the information at each site from the monthly to the daily time scale using a k-nearest neighbour disaggregation methodology. Effectiveness of the methodology is demonstrated by application to data pertaining to four sites in the catchment of the Beas river basin, India. Simulations of the Canadian coupled global climate model (CGCM3.1/T63) for four IPCC SRES scenarios, namely A1B, A2, B1 and COMMIT, were downscaled to future projections of the predictands in the study area. Comparison of results with those based on the recently proposed multivariate multiple linear regression (MMLR) based downscaling method and the multi-site multivariate statistical downscaling (MMSD) method indicates that the proposed method is promising and can be considered a feasible choice in statistical downscaling studies. The method performed better in downscaling daily minimum temperature than daily maximum temperature. Results indicate an increase in annual average maximum and minimum temperatures at all sites for the A1B, A2 and B1 scenarios. The projected increment is highest for the A2 scenario, followed by the A1B, B1 and COMMIT scenarios.
Projections, in general, indicated an increase in mean monthly maximum and minimum temperatures during January to February and October to December.
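Step (4), monthly-to-daily disaggregation by k-nearest neighbours, can be sketched in a simplified form: find the k historical months whose monthly means are closest to the downscaled value, sample one, and reuse its daily anomaly pattern shifted to reproduce the target mean. The exact resampling kernel and anomaly handling in the paper's methodology may differ; this is an illustrative simplification.

```python
import numpy as np

def knn_disaggregate(monthly_value, hist_daily, k=3, rng=None):
    """Disaggregate one downscaled monthly mean to a daily series.

    hist_daily: shape (n_months, n_days) of observed daily values.
    The k historical months with monthly means closest to the target are
    found, one is sampled, and its daily anomalies are added to the target
    mean so the disaggregated series reproduces the monthly value exactly.
    """
    rng = rng or np.random.default_rng(0)
    means = hist_daily.mean(axis=1)
    nearest = np.argsort(np.abs(means - monthly_value))[:k]
    pick = rng.choice(nearest)
    anomalies = hist_daily[pick] - means[pick]
    return monthly_value + anomalies

# Synthetic historical record: 120 months of 30 daily temperatures.
hist = np.random.default_rng(1).normal(25.0, 3.0, size=(120, 30))
daily = knn_disaggregate(26.5, hist)   # daily series with mean 26.5
```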

Relevance: 20.00%

Abstract:

We estimate the distribution of ice thickness for a Himalayan glacier using surface velocities, slope and the ice flow law. Surface velocities over Gangotri Glacier were estimated using sub-pixel correlation of Landsat TM and ETM+ imagery. Velocities range from ~14–85 m a⁻¹ in the accumulation region to ~20–30 m a⁻¹ near the snout. Depth profiles were calculated using the equation of laminar flow. Thickness varies from ~540 m in the upper reaches to ~50–60 m near the snout. The volume of the glacier is estimated to be 23.2 ± 4.2 km³.
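The laminar-flow inversion commonly used in such studies can be sketched as follows. All parameter values here (Glen's exponent, rate factor, shape factor, basal sliding fraction) are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

# Laminar-flow (shallow-ice) relation commonly used to invert surface
# velocity for thickness:  u_d = (2A / (n + 1)) * tau_b**n * H,
# with basal shear stress  tau_b = f * rho * g * H * sin(alpha).
# Solving for H:
#   H = (u_d * (n + 1) / (2A * (f*rho*g*sin(alpha))**n)) ** (1 / (n + 1))

n = 3                      # Glen's flow-law exponent (standard value)
A = 3.24e-24               # rate factor, Pa^-3 s^-1 (temperate ice, assumed)
rho, g = 900.0, 9.8        # ice density (kg m^-3), gravity (m s^-2)
f = 0.8                    # valley shape factor (assumed)

def thickness(u_surface_m_per_yr, slope_deg, sliding_fraction=0.25):
    """Invert surface velocity (m/yr) and surface slope for thickness (m).
    A fixed fraction of the surface velocity is assumed to be basal sliding."""
    u_d = (1 - sliding_fraction) * u_surface_m_per_yr / (365.25 * 24 * 3600.0)
    k = f * rho * g * np.sin(np.radians(slope_deg))
    return (u_d * (n + 1) / (2 * A * k**n)) ** (1 / (n + 1))

h = thickness(30.0, 10.0)   # e.g. ~30 m/yr surface velocity on a 10 deg slope
```

As expected for this relation, thickness grows with surface velocity and shrinks with surface slope.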

Relevance: 20.00%

Abstract:

We consider the zero-crossing rate (ZCR) of a Gaussian process and establish a property relating the lagged ZCR (LZCR) to the corresponding normalized autocorrelation function. This is a generalization of Kedem's result for the lag-one case. For the specific case of a sinusoid in white Gaussian noise, we use the higher-order property between the lagged ZCR and the higher-lag autocorrelation to develop an iterative higher-order autoregressive filtering scheme, which stabilizes the ZCR and consequently provides robust estimates of the lagged autocorrelation. Simulation results show that the autocorrelation estimates converge in about 20 to 40 iterations even for low signal-to-noise ratio.
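Kedem's lag-one result, which the paper generalizes, is easy to demonstrate: for a zero-mean stationary Gaussian process, the expected ZCR γ and the lag-one autocorrelation ρ₁ are linked by the cosine formula ρ₁ = cos(πγ). A quick check on an AR(1) process with known ρ₁:

```python
import numpy as np

def zero_crossing_rate(x):
    """Fraction of consecutive sample pairs with a sign change."""
    s = np.signbit(x)
    return np.mean(s[1:] != s[:-1])

# Kedem's cosine formula for a zero-mean stationary Gaussian process:
#   rho_1 = cos(pi * E[ZCR]),
# so the lag-one autocorrelation is recoverable from the crossing rate.
rng = np.random.default_rng(0)
a = 0.5                        # AR(1) coefficient => rho_1 = 0.5 exactly
x = np.empty(200_000)
x[0] = rng.standard_normal()
for t in range(1, x.size):
    x[t] = a * x[t - 1] + rng.standard_normal()

rho_hat = np.cos(np.pi * zero_crossing_rate(x))   # should be close to 0.5
```

The paper's higher-lag property plays the same role for ZCRs computed at lags greater than one.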

Relevance: 20.00%

Abstract:

Bending at the valence angle N–Cα–C′ (τ) is a known control feature for attenuating the stability of the rare intramolecularly hydrogen-bonded pseudo five-membered ring C5 structures, the so-called 2.0(5) helices, at Aib. The competitive 3(10)-helical structures still predominate over the C5 structures at Aib for most values of τ. However, at Aib*, a mimic of Aib in which the carbonyl O of Aib is replaced with an imidate N (in 5,6-dihydro-4H-1,3-oxazine = Oxa), in the peptidomimetic Piv-Pro-Aib*-Oxa (1), the C5 structure is persistent both in crystals and in solution. Here we show that the i → i hydrogen bond energy is a more determinant control for the relative stability of the C5 structure and estimate its value to be 18.5 ± 0.7 kJ/mol at Aib* in 1, through the computational isodesmic reaction approach, using two independent sets of theoretical isodesmic reactions. (C) 2014 Elsevier Ltd. All rights reserved.

Relevance: 20.00%

Abstract:

It is essential to accurately estimate the working set size (WSS) of an application for various optimizations, such as partitioning cache among virtual machines or reducing leakage power dissipated in an over-allocated cache by switching parts of it OFF. However, state-of-the-art heuristics such as average memory access latency (AMAL) or cache miss ratio (CMR) are poorly correlated with the WSS of an application due to 1) over-sized caches and 2) their dispersed nature. Past studies focus on estimating the WSS of an application executing on a uniprocessor platform. Estimating the same for a chip multiprocessor (CMP) with a large dispersed cache is challenging due to the presence of concurrently executing threads/processes. Hence, we propose a scalable, highly accurate method to estimate the WSS of an application, which we call the ``tagged WSS (TWSS)'' estimation method. We demonstrate the use of TWSS to switch OFF over-allocated cache ways in static and dynamic non-uniform cache architectures (SNUCA, DNUCA) on a tiled CMP. In our implementation of adaptable-way SNUCA and DNUCA caches, the decision to alter associativity is taken by each L2 controller; hence, the approach scales with the number of cores on a CMP. It gives overall (geometric mean) 26% and 19% higher energy-delay product savings compared with the AMAL and CMR heuristics on SNUCA, respectively.
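The quantity being estimated can be made concrete with a small software sketch: the working set over a window of accesses is the number of distinct cache lines touched. The paper's TWSS mechanism tracks an analogous quantity in hardware via tagged cache ways; the code below is only an illustrative trace-based analogue.

```python
def working_set_size(addresses, window, line_bytes=64):
    """Unique cache lines touched in each fixed-size window of accesses.

    A software analogue of working-set-size estimation from an address
    trace; hardware schemes like TWSS approximate this without a trace.
    """
    sizes = []
    for start in range(0, len(addresses), window):
        lines = {a // line_bytes for a in addresses[start:start + window]}
        sizes.append(len(lines))
    return sizes

# A loop streaming over an 8 KiB array with 4-byte accesses touches
# 8192 / 64 = 128 distinct cache lines per pass.
trace = [i % 8192 for i in range(0, 4 * 8192, 4)]
wss = working_set_size(trace, window=2048)   # one pass per window
```

A cache controller could compare such per-window estimates against the allocated capacity to decide how many ways to keep powered.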

Relevance: 20.00%

Abstract:

Accelerated electrothermal aging tests were conducted at a constant temperature of 60 °C and at stress levels of 6 kV/mm, 7 kV/mm and 8 kV/mm on unfilled epoxy and on epoxy filled with 5 wt% nano-alumina. The leakage current through the samples was continuously monitored, and the variation of tan delta with aging duration was used to predict impending failure and the time to failure of the samples. The time to failure of the epoxy alumina nanocomposite samples is significantly higher than that of the unfilled epoxy. Data from the experiments were analyzed graphically using Weibull probability plots and analytically by linear least-squares regression. The characteristic life obtained from the least-squares regression was used to construct the inverse power law curve, from which the life of the epoxy insulation with and without nanofiller at a stress level of 3 kV/mm, i.e. within the midrange of the design stress of rotating machine insulation, was obtained by extrapolation. The life of the epoxy alumina nanocomposite with 5 wt% filler loading is found to be nine times that of the unfilled epoxy.
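The analysis pipeline described (least-squares Weibull fit, then inverse power law extrapolation) can be sketched generically. The failure times and characteristic lives below are made-up illustrative values, not the paper's measurements.

```python
import numpy as np

def weibull_lsq(times):
    """Least-squares Weibull fit via the linearized CDF.

    With F(t) = 1 - exp(-(t/eta)**beta), plotting ln(-ln(1 - F)) against
    ln(t) gives a line of slope beta and intercept -beta*ln(eta).
    Median ranks approximate F for the ordered failure times.
    """
    t = np.sort(np.asarray(times, float))
    n = t.size
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)      # median ranks
    slope, intercept = np.polyfit(np.log(t), np.log(-np.log(1 - F)), 1)
    return slope, np.exp(-intercept / slope)          # beta, eta

# Characteristic lives eta(E) at the tested stresses fit an inverse power
# law eta = k * E**(-m); extrapolate to a lower design stress.
stresses = np.array([6.0, 7.0, 8.0])                  # kV/mm
etas = np.array([900.0, 420.0, 220.0])                # hours (illustrative)
m, log_k = np.polyfit(np.log(stresses), np.log(etas), 1)  # m is negative
life_at_3 = np.exp(log_k) * 3.0 ** m                  # life at 3 kV/mm
```

Because the power-law slope is negative, the extrapolated life at the lower design stress exceeds any of the tested lives, which is the basis of the comparison reported in the abstract.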

Relevance: 20.00%

Abstract:

In this research work, we introduce a novel approach for phase estimation from noisy reconstructed interference fields in digital holographic interferometry using an unscented Kalman filter. Unlike conventionally used unwrapping algorithms and piecewise polynomial approximation approaches, this paper proposes, for the first time to the best of our knowledge, a signal tracking approach for phase estimation. The state-space model derived in this approach uses a Taylor series expansion of the phase function as the process model, and polar-to-Cartesian conversion as the measurement model. We have characterized the approach in simulations and validated its performance on experimental data (holograms) recorded under various practical conditions. Our study reveals that the proposed approach outperforms various phase estimation methods available in the literature at lower SNR values (especially in the range 0-20 dB). It is also demonstrated on experimental data that the proposed approach is a better choice for estimating rapidly varying phase with high dynamic range and noise. (C) 2014 Optical Society of America
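The tracking idea can be sketched with a minimal 1-D unscented Kalman filter: a low-order polynomial (constant-rate) process model for the phase, and the polar-to-Cartesian pair (cos φ, sin φ) as the nonlinear measurement. This is an illustrative sketch, not the paper's exact state-space model or tuning; the noise levels and UKF scaling parameters below are assumptions.

```python
import numpy as np

def ukf_phase(meas, q=1e-4, r=2.5e-3):
    """Minimal UKF tracking phase from noisy (cos, sin) measurements.
    State: [phase, phase rate]; process: constant-rate (first-order Taylor)."""
    n, lam = 2, 1.0                        # state dim; scaling (alpha=1, kappa=1)
    wm = np.full(2 * n + 1, 1 / (2 * (n + lam)))
    wm[0] = lam / (n + lam)
    F = np.array([[1.0, 1.0], [0.0, 1.0]]) # linear constant-rate process model
    Q, R = q * np.eye(n), r * np.eye(2)
    x, P = np.zeros(n), np.eye(n)
    est = []
    for z in meas:
        x, P = F @ x, F @ P @ F.T + Q                 # predict
        S = np.linalg.cholesky((n + lam) * P)         # sigma-point spread
        sig = np.vstack([x, x + S.T, x - S.T])        # (2n+1, n) sigma points
        Z = np.column_stack([np.cos(sig[:, 0]), np.sin(sig[:, 0])])
        z_hat = wm @ Z                                # predicted measurement
        dz, dx = Z - z_hat, sig - x
        Pzz = (wm[:, None] * dz).T @ dz + R
        Pxz = (wm[:, None] * dx).T @ dz
        K = Pxz @ np.linalg.inv(Pzz)                  # Kalman gain
        x = x + K @ (z - z_hat)
        P = P - K @ Pzz @ K.T
        P = 0.5 * (P + P.T) + 1e-9 * np.eye(n)        # keep P symmetric PD
        est.append(x[0])
    return np.array(est)

rng = np.random.default_rng(0)
true_phase = 0.05 * np.arange(400)                    # phase wrapping many times
meas = np.column_stack([np.cos(true_phase), np.sin(true_phase)])
meas += 0.05 * rng.standard_normal(meas.shape)
phase_hat = ukf_phase(meas)                           # unwrapped phase estimate
```

Because the state evolves continuously, the filter tracks the phase without any explicit unwrapping step, which is the attraction of the tracking formulation.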