989 results for Load estimation
Abstract:
This paper addresses the problem of intercepting highly maneuverable threats using seeker-less interceptors that operate in the command guidance mode. These systems are more prone to estimation errors than standard seeker-based systems. In this paper, an integrated estimation/guidance (IEG) algorithm, which combines an interactive multiple model (IMM) estimator with a differential game guidance law (DGL), is proposed for seeker-less interception. In this interception scenario, the target performs an evasive bang-bang maneuver, the sensor provides noisy measurements, and the interceptor is subject to an acceleration bound. The IMM serves as a basis for synthesizing efficient filters that track maneuvering targets and reduce estimation errors. The proposed game-based guidance law for two-dimensional interception, later extended to three-dimensional interception scenarios, is used to improve the endgame performance of the command-guided seeker-less interceptor. The IMM scheme and an optimal selection of filters, catering to the various maneuvers expected during the endgame, are also described. Furthermore, a chatter removal algorithm is introduced, yielding a modified differential game guidance law (modified DGL). A comparison between the modified DGL and the conventional proportional navigation guidance law demonstrates a significant improvement in miss distance in a pursuer-evader scenario. Simulation results are also presented for varying flight path angle errors. A numerical study demonstrates the performance of the combined interactive multiple model and game-based guidance law (IMM/DGL). A simulation study of the combined IMM and modified DGL (IMM/modified DGL) exhibits the superior performance and viability of the algorithm in reducing the chattering phenomenon. The results are illustrated by an extensive Monte Carlo simulation study in the presence of estimation errors.
Abstract:
India's energy demand is increasing rapidly with the intensive growth of its economy. The electricity demand in India has exceeded availability, both in terms of base-load energy and peak availability. Efficient use of energy sources and their conversion and utilization are the viable alternatives available to utilities and industry. There are essentially two approaches to electrical energy management: the first at the supply/utility end (Supply Side Management, or SSM) and the second at the consumer end (Demand Side Management, or DSM). This work is based on the Supply Side Management (SSM) protocol and consists of the design, fabrication and testing of a control device that can automatically regulate the power flow to an individual consumer's premises. The control device can monitor the overuse of electricity (above the connected load or contracted demand) by individual consumers. The present project work places special emphasis on the contracted demand of every consumer and tries to limit use beyond it. The control unit comprises both software and hardware components and is designed for a 0.5 kW contracted demand. The device was tested in the laboratory and shows potential for use in the field.
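The demand-limiting behaviour the abstract describes can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual firmware: the 0.5 kW figure comes from the abstract, while the sampling scheme and the grace period are assumptions made here for the sake of the sketch.

```python
# Sketch of contracted-demand monitoring. The 0.5 kW limit is from the
# abstract; the grace period of consecutive over-limit readings before
# disconnection is an illustrative assumption.

CONTRACT_DEMAND_W = 500.0  # 0.5 kW contracted demand

def should_disconnect(power_samples_w, limit_w=CONTRACT_DEMAND_W,
                      grace_samples=3):
    """Return True if demand exceeds the contracted limit for more than
    `grace_samples` consecutive readings."""
    over = 0
    for p in power_samples_w:
        over = over + 1 if p > limit_w else 0
        if over > grace_samples:
            return True
    return False
```

A real device would, in addition, debounce transients and log events; the core decision rule is the simple threshold-with-persistence test above.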
Abstract:
A major drawback of studying diffusion in multi-component systems is the lack of suitable techniques for estimating the diffusion parameters. In this study, a generalized treatment for determining the intrinsic diffusion coefficients in multi-component systems is developed utilizing the concept of a pseudo-binary approach. This is explained with the help of experimentally developed diffusion profiles in the Cu(Sn, Ga) and Cu(Sn, Si) solid solutions. (C) 2015 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
Abstract:
The inversion of canopy reflectance models is widely used for the retrieval of vegetation properties from remote sensing. This study evaluates the retrieval of the soybean biophysical variables leaf area index, leaf chlorophyll content, canopy chlorophyll content, and equivalent leaf water thickness from proximal reflectance data integrated to broadbands corresponding to the Moderate Resolution Imaging Spectroradiometer, Thematic Mapper, and Linear Imaging Self-Scanning sensors, through inversion of the canopy radiative transfer model PROSAIL. Three different inversion approaches, namely the look-up table, the genetic algorithm, and the artificial neural network, were used and their performances evaluated. Application of the genetic algorithm to crop parameter retrieval is a new attempt among the variety of optimization problems in remote sensing, and it is successfully demonstrated in the present study. Its performance was as good as that of the look-up table approach, while the artificial neural network was a poor performer. The general order of estimation accuracy for parameters, irrespective of inversion approach, was leaf area index > canopy chlorophyll content > leaf chlorophyll content > equivalent leaf water thickness. Performance of the inversion was comparable for the broadband reflectances of all three sensors in the optical region, with insignificant differences in estimation accuracy among them.
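Of the three inversion approaches compared above, the look-up table (LUT) is the simplest to sketch: simulate spectra over a grid of parameter values, then pick the entry closest to the observation. The toy forward model below is only a stand-in for PROSAIL (which this sketch does not implement); the parameter ranges and band set are illustrative.

```python
import numpy as np

def forward_model(lai, cab, wavelengths):
    """Hypothetical stand-in for PROSAIL: reflectance as a smooth
    function of leaf area index (lai) and leaf chlorophyll (cab)."""
    return (0.5 * (1.0 - np.exp(-0.5 * lai))
            + 0.001 * cab * np.sin(wavelengths / 100.0))

wl = np.linspace(400, 2400, 50)                    # band centres, nm
params = [(lai, cab) for lai in np.linspace(0.5, 6.0, 20)
                     for cab in np.linspace(10.0, 80.0, 15)]
lut = np.array([forward_model(lai, cab, wl) for lai, cab in params])

def invert(observed):
    """Return the parameter pair whose simulated spectrum has the
    smallest RMSE against the observed spectrum."""
    rmse = np.sqrt(((lut - observed) ** 2).mean(axis=1))
    return params[int(np.argmin(rmse))]

est_lai, est_cab = invert(forward_model(3.2, 45.0, wl))
```

In a real retrieval the LUT entries would be PROSAIL runs and the observation a sensor-integrated broadband spectrum; the nearest-RMSE search itself is unchanged.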
Abstract:
Probable maximum precipitation (PMP) is a theoretical concept that is widely used by hydrologists to arrive at estimates of probable maximum flood (PMF), which find use in the planning, design and risk assessment of high-hazard hydrological structures such as flood control dams upstream of populated areas. The PMP represents the greatest depth of precipitation for a given duration that is meteorologically possible for a watershed or an area at a particular time of year, with no allowance made for long-term climatic trends. Various methods are in use for the estimation of PMP over a target location for different durations. The moisture maximization method and the Hershfield method are two widely used approaches. The former maximizes observed storms assuming that the atmospheric moisture would rise to a very high value estimated from the maximum daily dew point temperature. The latter is a statistical method based on the general frequency equation given by Chow. The present study provides one-day PMP estimates and PMP maps for the Mahanadi river basin based on the aforementioned methods. There is a need for such estimates and maps, as the river basin is prone to frequent floods. The utility of the constructed PMP maps in computing PMP for various catchments in the river basin is demonstrated. The PMP estimates can eventually be used to arrive at PMF estimates for those catchments. (C) 2015 The Authors. Published by Elsevier B.V.
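Both estimation methods named in the abstract reduce to short formulas. The Hershfield method applies Chow's frequency equation X = mean + K_m * std to the annual-maximum daily rainfall series; moisture maximization scales an observed storm depth by the ratio of maximum to storm precipitable water. The rainfall series, the frequency factor K_m and the precipitable-water values below are illustrative, not the paper's data (Hershfield's envelope value of K_m can be as high as about 15, and station-specific values are often much lower).

```python
import numpy as np

def hershfield_pmp(series_mm, k_m):
    """Hershfield/Chow frequency-equation PMP: mean + K_m * std."""
    return series_mm.mean() + k_m * series_mm.std(ddof=1)

def moisture_maximized_depth(observed_mm, pw_max_mm, pw_storm_mm):
    """Moisture maximization: scale the observed storm depth by the
    ratio of maximum to storm precipitable water."""
    return observed_mm * pw_max_mm / pw_storm_mm

# illustrative annual-maximum daily rainfall series (mm)
annual_max_daily_rain_mm = np.array(
    [112.0, 95.5, 130.2, 88.7, 150.3, 101.8, 122.4, 97.6, 140.0, 110.5])

pmp_mm = hershfield_pmp(annual_max_daily_rain_mm, k_m=15.0)
maximized_mm = moisture_maximized_depth(200.0, 98.0, 70.0)
```

Mapping either estimate over a grid of stations, as the study does for the Mahanadi basin, is then a matter of repeating the point computation and interpolating.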
Abstract:
Regional frequency analysis is widely used for estimating quantiles of hydrological extreme events at sparsely gauged/ungauged target sites in river basins. It involves identification of a region (group of watersheds) resembling watershed of the target site, and use of information pooled from the region to estimate quantile for the target site. In the analysis, watershed of the target site is assumed to completely resemble watersheds in the identified region in terms of mechanism underlying generation of extreme event. In reality, it is rare to find watersheds that completely resemble each other. Fuzzy clustering approach can account for partial resemblance of watersheds and yield region(s) for the target site. Formation of regions and quantile estimation requires discerning information from fuzzy-membership matrix obtained based on the approach. Practitioners often defuzzify the matrix to form disjoint clusters (regions) and use them as the basis for quantile estimation. The defuzzification approach (DFA) results in loss of information discerned on partial resemblance of watersheds. The lost information cannot be utilized in quantile estimation, owing to which the estimates could have significant error. To avert the loss of information, a threshold strategy (TS) was considered in some prior studies. In this study, it is analytically shown that the strategy results in under-prediction of quantiles. To address this, a mathematical approach is proposed in this study and its effectiveness in estimating flood quantiles relative to DFA and TS is demonstrated through Monte-Carlo simulation experiments and case study on Mid-Atlantic water resources region, USA. (C) 2015 Elsevier B.V. All rights reserved.
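The fuzzy-membership matrix at the heart of the abstract's argument assigns each watershed a partial membership in every cluster, and it is exactly this partial information that defuzzification discards. A minimal sketch of the standard fuzzy c-means membership formula, with illustrative centroids and watershed attributes (the paper's attribute set and clustering are not reproduced here):

```python
import numpy as np

def fuzzy_memberships(x, centroids, m=2.0):
    """Fuzzy c-means memberships of point x in each cluster:
    u_k proportional to d_k^(-2/(m-1)), normalized to sum to 1."""
    d = np.linalg.norm(centroids - x, axis=1)
    if np.any(d == 0):                      # x coincides with a centroid
        return (d == 0).astype(float)
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum()

# illustrative centroids in (scaled) watershed-attribute space
centroids = np.array([[0.2, 0.8], [0.6, 0.4], [0.9, 0.9]])
target = np.array([0.5, 0.5])               # target watershed's attributes
u = fuzzy_memberships(target, centroids)
```

Defuzzification would keep only `u.argmax()`; the abstract's point is that the remaining membership mass also carries information about partial resemblance that can be exploited in quantile estimation.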
Abstract:
Monte Carlo simulation methods involving splitting of Markov chains have been used in the evaluation of multi-fold integrals in different application areas. We examine in this paper the performance of these methods in the context of evaluating reliability integrals, from the point of view of characterizing the sampling fluctuations. The methods discussed include the Au-Beck subset simulation, the Holmes-Diaconis-Ross method, and the generalized splitting algorithm. A few improvisations based on the first order reliability method are suggested for selecting the algorithmic parameters of the latter two methods. The bias and sampling variance of the alternative estimators are discussed. Also, an approximation to the sampling distribution of some of these estimators is obtained. Illustrative examples involving component and series system reliability analyses are presented with a view to bringing out the relative merits of the alternative methods. (C) 2015 Elsevier Ltd. All rights reserved.
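A minimal sketch of the first of the methods named above, Au-Beck subset simulation, may help fix ideas: the rare failure probability is written as a product of larger conditional probabilities, each estimated by a component-wise (modified) Metropolis chain seeded from the best samples of the previous level. The limit state, sample sizes and proposal width below are illustrative, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def _mm_step(x, b, g):
    """One component-wise (modified) Metropolis step on {g >= b}."""
    cand = x.copy()
    for j in range(x.size):
        xi = cand[j] + rng.uniform(-1.0, 1.0)     # illustrative proposal
        # accept component with standard-normal density ratio
        if rng.uniform() < np.exp(0.5 * (cand[j] ** 2 - xi ** 2)):
            cand[j] = xi
    return cand if g(cand) >= b else x            # reject if it leaves the set

def subset_simulation(g, dim, n=1000, p0=0.1, max_levels=10):
    """Estimate p_f = P(g(X) >= 0) for X ~ N(0, I)."""
    x = rng.standard_normal((n, dim))
    y = np.array([g(xi) for xi in x])
    p, nc = 1.0, int(p0 * n)
    for _ in range(max_levels):
        order = np.argsort(y)[::-1]               # descending by g
        if y[order[nc - 1]] >= 0:                 # failure level reached
            return p * np.mean(y >= 0)
        b = y[order[nc - 1]]                      # intermediate threshold
        p *= p0
        xs, ys = [], []
        for s in x[order[:nc]]:                   # seeds for the next level
            cur = s
            for _ in range(n // nc):
                cur = _mm_step(cur, b, g)
                xs.append(cur)
                ys.append(g(cur))
        x, y = np.array(xs), np.array(ys)
    return p

# toy limit state: failure when x1 + x2 > 5 (exact p_f ~ 2.0e-4)
p_hat = subset_simulation(lambda v: v[0] + v[1] - 5.0, dim=2)
```

The sampling fluctuations the paper characterizes show up here as run-to-run variability of `p_hat` around the exact value, with a coefficient of variation governed by `n`, `p0` and the chain correlation.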
Abstract:
The study follows an approach to estimate phytomass using recent techniques of remote sensing and digital photogrammetry. It involved a tree inventory of forest plantations in the Bhakra forest range of Nainital district. A panchromatic stereo dataset from Cartosat-1 was evaluated for mean stand height retrieval. Texture analysis and tree-top detection were carried out on QuickBird PAN data. A composite texture image of mean, variance and contrast with a 5x5 pixel window was found to be the best for separating tree crowns for the assessment of crown areas. The tree-top count obtained by local maxima filtering was found to be 83.4% efficient, with an RMSE of +/-13 for 35 sample plots. The predicted phytomass ranged from 27.01 to 35.08 t/ha for Eucalyptus sp. and from 26.52 to 156 t/ha for Tectona grandis. The correlation between observed and predicted phytomass in Eucalyptus sp. was 0.468, with an RMSE of 5.12. However, the phytomass prediction for Tectona grandis was fairly strong, with R-2 = 0.65 and an RMSE of 9.89, as there was no undergrowth and the crowns were clearly visible. Results of the study show the potential of the Cartosat-1 derived DSM and QuickBird texture imagery for the estimation of stand height, stem diameter, tree count and phytomass of important timber species.
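Local maxima filtering, used above for the tree-top count, amounts to flagging pixels that are the strict maximum of a moving window. A minimal sketch on a synthetic "canopy" of two Gaussian crowns standing in for a QuickBird PAN chip (window radius and crown parameters are illustrative):

```python
import numpy as np

def local_maxima(img, r=2):
    """Pixels that are the unique maximum of their (2r+1)x(2r+1) window."""
    h, w = img.shape
    tops = []
    for i in range(r, h - r):
        for j in range(r, w - r):
            win = img[i - r:i + r + 1, j - r:j + r + 1]
            if img[i, j] == win.max() and (win == win.max()).sum() == 1:
                tops.append((i, j))
    return tops

# synthetic canopy: two Gaussian "crowns" with apices at (10,12), (28,30)
yy, xx = np.mgrid[0:40, 0:40]
canopy = (np.exp(-((yy - 10) ** 2 + (xx - 12) ** 2) / 20.0)
          + np.exp(-((yy - 28) ** 2 + (xx - 30) ** 2) / 20.0))
tops = local_maxima(canopy, r=2)
```

On real imagery the window size is chosen relative to crown diameter, and smoothing is usually applied first so that within-crown texture does not produce spurious maxima.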
Abstract:
A new successive displacement type load flow method is developed in this paper. This algorithm differs from the conventional Y-bus based Gauss-Seidel load flow in that the voltage at each bus is updated in every iteration based on the exact solution of the power balance equation at that node, instead of the approximate solution used by the Gauss-Seidel method. It turns out that this modified implementation translates into only a marginal improvement in convergence behaviour for obtaining load flow solutions of interconnected systems. However, it is demonstrated that the new approach can be adapted, with some additional refinements, to develop an effective load flow solution technique for radial systems. Numerical results considering a number of systems, both interconnected and radial, are provided to validate the proposed approach.
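For reference, the conventional Y-bus Gauss-Seidel baseline against which the paper's exact-per-node update is compared can be sketched on a two-bus system (slack plus one PQ bus). The per-bus update V_i = ((P_i - jQ_i)/V_i* - sum_{k!=i} Y_ik V_k) / Y_ii is standard; the line data and load below are illustrative, and the paper's exact solution of the nodal power balance is not reproduced here.

```python
import numpy as np

z_line = 0.05 + 0.10j                      # p.u. line impedance (illustrative)
y = 1.0 / z_line
Y = np.array([[y, -y], [-y, y]])           # bus admittance matrix

V = np.array([1.0 + 0.0j, 1.0 + 0.0j])     # slack fixed, flat start at PQ bus
P2, Q2 = -0.5, -0.2                        # net injection at bus 2 (a load)

for _ in range(50):                        # Gauss-Seidel iterations
    V[1] = ((P2 - 1j * Q2) / np.conj(V[1]) - Y[1, 0] * V[0]) / Y[1, 1]

S2 = V[1] * np.conj(Y[1] @ V)              # power balance check at bus 2
```

At convergence `S2` matches the specified injection `P2 + jQ2`; the paper's variant replaces the single fixed-point sweep per bus with the exact solution of that nodal balance equation.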
Abstract:
The problem of determining the system reliability of randomly vibrating structures arises in many application areas of engineering. We discuss in this paper approaches based on Monte Carlo simulations and laboratory testing to tackle problems of time variant system reliability estimation. The strategy we adopt is based on the application of Girsanov's transformation to the governing stochastic differential equations, which enables estimation of the probability of failure with a significantly reduced number of samples compared to a direct simulation study. Notably, we show that the ideas from Girsanov's transformation based Monte Carlo simulations can be extended to laboratory testing, so as to assess the system reliability of engineering structures with a reduced number of samples and hence reduced testing times. Illustrative examples include computational studies on a 10-degree-of-freedom nonlinear system model and laboratory/computational investigations on the road load response of an automotive system tested on a four-post test rig. (C) 2015 Elsevier Ltd. All rights reserved.
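The continuous-time Girsanov transformation of the paper is beyond a short sketch, but its static analogue, importance sampling under a mean-shifted measure with a likelihood-ratio correction, conveys the same variance-reduction idea. The sketch below estimates P(X > b) for standard normal X by sampling under the shifted law and reweighting by dP/dQ; the threshold and sample size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

b = 4.0                   # threshold; failure event {X > b}, X ~ N(0, 1)
n = 10_000
y = rng.standard_normal(n) + b           # sample under the shifted law Q
lr = np.exp(-b * y + 0.5 * b * b)        # likelihood ratio dP/dQ at y
p_hat = float(np.mean((y > b) * lr))     # unbiased estimate of P(X > b)
```

The exact value is about 3.17e-5; direct Monte Carlo would need on the order of millions of samples for comparable accuracy, while the shifted measure places roughly half its samples in the failure region. Girsanov's theorem plays the same role for diffusions: an added drift steers trajectories toward failure, and the Radon-Nikodym derivative corrects the estimate.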
Abstract:
The main objective of the paper is to develop a new method to estimate the maximum magnitude (M_max) considering the regional rupture character. The proposed method is explained in detail and examined for both intraplate and active regions. Seismotectonic data were collected for both regions, and seismic study area (SSA) maps were generated for radii of 150, 300, and 500 km. The regional rupture character was established by considering the percentage fault rupture (PFR), which is the ratio of subsurface rupture length (RLD) to total fault length (TFL). The PFR is used to arrive at the RLD, which is in turn used to estimate the maximum magnitude for each seismic source. The maximum magnitude for both regions was estimated and compared with existing methods for determining M_max values. The proposed method gives similar M_max values irrespective of SSA radius and seismicity. Further, seismicity parameters such as the magnitude of completeness (M_c), the ``a'' and ``b'' parameters, and the maximum observed magnitude (M_max(obs)) were determined for each SSA and used to estimate M_max with all the existing methods. It is observed from the study that the existing deterministic and probabilistic M_max estimation methods are sensitive to the SSA radius, M_c, the a and b parameters, and M_max(obs) values. However, M_max determined by the proposed method is a function of the rupture character rather than of the seismicity parameters. It was also observed that the intraplate region has a lower PFR than the active seismic region.
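The rupture-based chain of the abstract (PFR x TFL gives RLD, and RLD gives magnitude via a rupture-length scaling relation) is short enough to sketch. The abstract does not name the scaling relation the authors adopt; the widely used Wells-Coppersmith (1994) subsurface-rupture-length relation for all slip types, M = 4.38 + 1.49 log10(RLD), is used here as a stand-in, and the fault data are illustrative.

```python
import math

def m_max_from_rupture(tfl_km, pfr, a=4.38, b=1.49):
    """M_max from percentage fault rupture (PFR) and total fault length,
    via a rupture-length scaling relation M = a + b*log10(RLD)."""
    rld_km = pfr * tfl_km            # subsurface rupture length
    return a + b * math.log10(rld_km)

# illustrative 100 km fault; lower PFR for the intraplate case, per the study
m_active = m_max_from_rupture(tfl_km=100.0, pfr=0.5)
m_intra = m_max_from_rupture(tfl_km=100.0, pfr=0.1)
```

Because the estimate depends only on the fault geometry and PFR, it is unaffected by the SSA radius and catalogue-derived parameters (M_c, a, b) to which the existing methods are sensitive.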
Abstract:
Event-triggered sampling (ETS) is a new approach towards efficient signal analysis. The goal of ETS need not be only signal reconstruction, but also direct estimation of desired information in the signal by skillful design of the event. We show the promise of the ETS approach for better analysis of oscillatory non-stationary signals modeled by a time-varying sinusoid, when compared to existing uniform Nyquist-rate sampling based signal processing. We examine samples drawn using ETS, with events defined as zero-crossings (ZC), level-crossings (LC), and extrema, under additive in-band noise and jitter in the detection instant. We find that extrema samples are robust, and also facilitate instantaneous amplitude (IA) and instantaneous frequency (IF) estimation in a time-varying sinusoid. The estimation is proposed solely using extrema samples and a local polynomial regression based least-squares fitting approach. The proposed approach shows improvement, for noisy signals, over the widely used analytic signal, energy separation, and ZC based approaches (which rely on uniform Nyquist-rate sampling based data acquisition and processing). Further, extrema-based ETS in general gives a sub-sampled representation (relative to the Nyquist rate) of a time-varying sinusoid. For the same data-set size captured with extrema-based ETS and uniform sampling, the former gives much better IA and IF estimation. (C) 2015 Elsevier B.V. All rights reserved.
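The intuition behind extrema-based IA/IF estimation can be sketched directly: at an extremum of a slowly varying sinusoid the instantaneous amplitude is approximately |x(t_k)|, and consecutive extrema are about half a period apart, so IF ~ 1/(2(t_{k+1} - t_k)). The sketch below uses a dense grid only to locate the extrema, and omits the paper's local polynomial regression step; the signal parameters are illustrative.

```python
import numpy as np

fs = 10_000.0                                  # dense grid to locate extrema
t = np.arange(0.0, 1.0, 1.0 / fs)
ia_true = 1.0 + 0.3 * t                        # slowly varying amplitude
x = ia_true * np.sin(2.0 * np.pi * 50.0 * t)   # 50 Hz time-varying sinusoid

dx = np.diff(x)                                # extrema: sign change of slope
ext = np.where(np.sign(dx[1:]) * np.sign(dx[:-1]) < 0)[0] + 1

ia_est = np.abs(x[ext])                        # IA ~ |x| at each extremum
if_est = 1.0 / (2.0 * np.diff(t[ext]))         # IF ~ from half-period spacing
```

Note the sub-sampling the abstract mentions: the one-second signal yields only about 100 extrema samples, versus 10,000 uniform samples, yet IA and IF are recovered at every half-period. The paper's regression-based fitting refines these raw estimates under noise and jitter.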
Abstract:
NMR spectroscopy is a powerful means of studying liquid-crystalline systems at atomic resolution. Of the many parameters that can provide information on the dynamics and order of such systems, H-1-C-13 dipolar couplings are an important means of obtaining this information. Depending on the details of the molecular structure and the magnitude of the order parameters, the dipolar couplings can vary over a wide range of values. Thus the method employed to estimate the dipolar couplings should be capable of estimating both large and small dipolar couplings at the same time. For this purpose, we consider here a two-dimensional NMR experiment that works similarly to the insensitive nuclei enhanced by polarization transfer (INEPT) experiment in solution. With the incorporation of a modification proposed earlier for experiments with low radio frequency power, the scheme is observed to enable a wide range of dipolar couplings to be estimated at the same time. We utilized this approach to obtain dipolar couplings in a liquid crystal with phenyl rings attached to either end of the molecule, and estimated its local order parameters.
Abstract:
In India, the low prevalence of HIV-associated dementia (HAD) in Human immunodeficiency virus type 1 (HIV-1) subtype C infection is quite paradoxical given the high rate of macrophage infiltration into the brain. Whether the direct viral burden in individual brain compartments could be associated with the variability of the neurologic manifestations is controversial. To understand this paradox, we examined the proviral DNA load in nine different brain regions and three different peripheral tissues derived from ten human subjects at autopsy. Using a highly sensitive TaqMan probe-based real-time PCR, we determined the proviral load in multiple samples processed in parallel from each site. Unlike previously published reports, the present analysis identified a uniform proviral distribution among the brain compartments examined, without preferential accumulation of the DNA in any one of them. The overall viral DNA burden in the brain tissues was very low, approximately 1 viral integration per 1000 cells or less. In a subset of the tissue samples tested, the HIV DNA mostly existed in a free unintegrated form. The V3-V5 envelope sequences demonstrated a brain-specific compartmentalization in four of the ten subjects and a phylogenetic overlap between the neural and non-neural compartments in three other subjects. The envelope sequences phylogenetically belonged to subtype C, and the majority of them were R5 tropic. To the best of our knowledge, the present study represents the first analysis of the proviral burden in subtype C postmortem human brain tissues. Future studies should determine the presence of the viral antigens, the viral transcripts, and the proviral DNA, in parallel, in different brain compartments to shed more light on the significance of the viral burden on the neurologic consequences of HIV infection.
Abstract:
We consider carrier frequency offset (CFO) estimation in the context of multiple-input multiple-output (MIMO) orthogonal frequency-division multiplexing (OFDM) systems over noisy frequency-selective wireless channels, in both single- and multiuser scenarios. We conceived a new approach to parameter estimation by discretizing the continuous-valued CFO parameter into a discrete set of bins and then invoking detection theory, analogous to the minimum-bit-error-ratio optimization framework for detecting the finite-alphabet received signal. Using this approach, we propose a novel CFO estimation method and study its performance using both analytical results and Monte Carlo simulations. We obtain expressions for the variance of the CFO estimation error and the resultant BER degradation in the single-user scenario. Our simulations demonstrate that the overall BER performance of a MIMO-OFDM system using the proposed method is substantially improved for all the modulation schemes considered, albeit at increased complexity.
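The bin-discretization idea can be illustrated in its simplest form: quantize the continuous CFO to a grid of hypotheses and let the receiver "detect" the best bin. The sketch below uses maximum pilot correlation as a stand-in for the paper's detection-theoretic criterion, with a single antenna, no channel, and illustrative parameters throughout.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 64
n = np.arange(N)
pilot = np.exp(1j * 2.0 * np.pi * rng.random(N))    # known unit-modulus pilot

eps_true = 0.23                                     # CFO in subcarrier units
rx = pilot * np.exp(1j * 2.0 * np.pi * eps_true * n / N)
rx = rx + 0.02 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

bins = np.linspace(-0.5, 0.5, 201)                  # discretized CFO hypotheses
metric = [abs(np.sum(rx * np.conj(pilot)
                     * np.exp(-1j * 2.0 * np.pi * e * n / N)))
          for e in bins]
eps_hat = float(bins[int(np.argmax(metric))])       # detected bin
```

The grid spacing sets a floor on the estimation-error variance, which is the trade-off the paper's analytical expressions quantify; the MIMO and multiuser extensions replace the scalar correlation with the corresponding multi-antenna detection metric.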