893 results for unknown-input estimation
Abstract:
Estimation of municipal solid waste settlement and of the contribution of each of its components is essential for estimating the volume of waste that a landfill can accommodate and for extending the post-closure use of the landfill. This article describes an experimental methodology for estimating and separating primary settlement, settlement owing to creep, and biodegradation-induced settlement. Primary and secondary settlement were estimated and separated based on the time for 100% pore pressure dissipation and the coefficient of consolidation. Mechanical creep and biodegradation settlements were estimated and separated based on the observed time required for landfill gas production. A series of laboratory triaxial tests, creep tests and anaerobic reactor cell setups was conducted to characterize the components of settlement. All the tests were conducted on municipal solid waste (compost reject) samples. It was observed that biodegradation accounted for more than 40% of the total settlement, whereas mechanical creep contributed more than 20%. The essential model parameters, namely the compression ratio (C_c'), the rate of mechanical creep (c), the coefficient of mechanical creep (b), the rate of biodegradation (d) and the total strain owing to biodegradation (E_DG), are useful for estimating both the total settlement and its components in a landfill.
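For orientation only: composite landfill-settlement models of this kind are often written as the sum of a stress-dependent primary compression term, a time-dependent mechanical creep term and a biodegradation term, using the same parameter symbols listed in the abstract. The sketch below assumes one such common form (it is not necessarily the authors' exact formulation), and every numerical value is a placeholder:

import numpy as np

def total_strain(t_days, sigma, sigma0, d_sigma, Cc_prime, b, c, EDG, d):
    """Illustrative composite landfill-settlement model (assumed form):
    primary compression + mechanical creep + biodegradation.
    Symbols follow the abstract: Cc' (compression ratio), b and c (coefficient
    and rate of mechanical creep), EDG and d (total biodegradation strain and
    its rate). All inputs below are placeholders, not values from the paper."""
    primary = Cc_prime * np.log10(sigma / sigma0)        # stress-dependent part
    creep = b * d_sigma * (1.0 - np.exp(-c * t_days))    # time-dependent creep
    biodeg = EDG * (1.0 - np.exp(-d * t_days))           # biodegradation part
    return primary + creep + biodeg

# Placeholder example: strain after 1000 days under an assumed load increment
eps = total_strain(t_days=1000.0, sigma=200.0, sigma0=50.0, d_sigma=150.0,
                   Cc_prime=0.2, b=5e-4, c=1e-3, EDG=0.15, d=2e-3)
print(f"total strain ~ {eps:.3f}")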
Abstract:
This paper addresses the problem of intercepting highly maneuverable threats using seeker-less interceptors that operate in the command guidance mode. These systems are more prone to estimation errors than standard seeker-based systems. In this paper, an integrated estimation/guidance (IEG) algorithm, which combines an interactive multiple model (IMM) estimator with a differential game guidance law (DGL), is proposed for seeker-less interception. In this interception scenario, the target performs an evasive bang-bang maneuver, the sensor provides noisy measurements, and the interceptor is subject to an acceleration bound. The IMM serves as the basis for synthesizing efficient filters for tracking maneuvering targets and reducing estimation errors. The proposed game-based guidance law for two-dimensional interception, later extended to three-dimensional interception scenarios, is used to improve the endgame performance of the command-guided seeker-less interceptor. The IMM scheme and an optimal selection of filters, to cater to the various maneuvers expected during the endgame, are also described. Furthermore, a chatter-removal algorithm is introduced, yielding a modified differential game guidance law (modified DGL). A comparison between the modified DGL and the conventional proportional navigation guidance law demonstrates a significant improvement in miss distance in a pursuer-evader scenario. Simulation results are also presented for varying flight path angle errors. A numerical study demonstrates the performance of the combined interactive multiple model and game-based guidance law (IMM/DGL). A simulation study of the combined IMM and modified DGL (IMM/modified DGL) exhibits the superior performance and viability of the algorithm in reducing the chattering phenomenon. The results are illustrated by an extensive Monte Carlo simulation study in the presence of estimation errors.
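As background to the IMM estimator mentioned above: an IMM cycle mixes a bank of Kalman filters according to model probabilities, runs each filter, and re-weights the models by their measurement likelihoods. The generic sketch below uses two placeholder linear models and arbitrary matrices and transition probabilities; it illustrates the standard IMM recursion, not the authors' filter design.

import numpy as np

# Two placeholder motion models sharing state [position, velocity];
# matrices and probabilities are illustrative only.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
R = np.array([[4.0]])
Qs = [np.diag([1e-3, 1e-2]), np.diag([1e-1, 1.0])]   # "quiet" vs "maneuvering"
PI = np.array([[0.95, 0.05], [0.05, 0.95]])          # model transition matrix

def imm_step(xs, Ps, mu, z):
    """One IMM cycle: mix, filter each model, update model probabilities, combine."""
    n = len(xs)
    # 1) mixing probabilities
    c_j = PI.T @ mu
    w = (PI * mu[:, None]) / c_j[None, :]
    # 2) mixed initial conditions per model
    x0 = [sum(w[i, j] * xs[i] for i in range(n)) for j in range(n)]
    P0 = [sum(w[i, j] * (Ps[i] + np.outer(xs[i] - x0[j], xs[i] - x0[j]))
              for i in range(n)) for j in range(n)]
    # 3) Kalman predict/update for each model, with measurement likelihoods
    xs_new, Ps_new, lik = [], [], np.zeros(n)
    for j in range(n):
        xp = F @ x0[j]
        Pp = F @ P0[j] @ F.T + Qs[j]
        S = H @ Pp @ H.T + R
        K = Pp @ H.T @ np.linalg.inv(S)
        nu = z - H @ xp
        xs_new.append(xp + K @ nu)
        Ps_new.append((np.eye(2) - K @ H) @ Pp)
        lik[j] = np.exp(-0.5 * nu.T @ np.linalg.inv(S) @ nu) / np.sqrt(np.linalg.det(2 * np.pi * S))
    # 4) model probability update and 5) combined state estimate
    mu_new = lik * c_j
    mu_new /= mu_new.sum()
    x_comb = sum(mu_new[j] * xs_new[j] for j in range(n))
    return xs_new, Ps_new, mu_new, x_comb

# Placeholder usage with a single noisy position measurement
xs = [np.array([0.0, 10.0]), np.array([0.0, 10.0])]
Ps = [np.eye(2) * 10.0, np.eye(2) * 10.0]
mu = np.array([0.5, 0.5])
xs, Ps, mu, x_hat = imm_step(xs, Ps, mu, z=np.array([1.2]))
print("model probabilities:", mu, "combined state:", x_hat)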
Abstract:
In this paper, we present novel precoding methods for multiuser Rayleigh fading multiple-input-multiple-output (MIMO) systems when channel state information (CSI) is available at the transmitter (CSIT) but not at the receiver (CSIR). Such a scenario is relevant, for example, in time-division duplex (TDD) MIMO communications, where, due to channel reciprocity, CSIT can be directly acquired by sending a training sequence from the receiver to the transmitter(s). We propose three transmit precoding schemes that convert the fading MIMO channel into a fixed-gain additive white Gaussian noise (AWGN) channel while satisfying an average power constraint. We also extend one of the precoding schemes to the multiuser Rayleigh fading multiple-access channel (MAC), broadcast channel (BC), and interference channel (IC). The proposed schemes convert the fading MIMO channel into fixed-gain parallel AWGN channels in all three cases. Hence, they achieve an infinite diversity order, which is in sharp contrast to schemes based on perfect CSIR and no CSIT, which, at best, achieve a finite diversity order. Further, we show that a polynomial diversity order is retained, even in the presence of channel estimation errors at the transmitter. Monte Carlo simulations illustrate the bit error rate (BER) performance obtainable from the proposed precoding scheme compared with existing transmit precoding schemes.
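To give a feel for the central idea of using CSIT (with no CSIR) to turn a fading channel into a fixed-gain AWGN channel, the sketch below applies truncated channel inversion in a single-antenna toy setting: whenever the known fading coefficient is large enough, the transmitter pre-rotates and pre-scales the symbol so the receiver sees a constant gain. The gain g, the truncation threshold and the BPSK signalling are all assumptions for illustration; this is not the paper's MIMO precoding scheme.

import numpy as np

rng = np.random.default_rng(1)
n_sym = 100_000
snr_db = 10.0
noise_var = 10 ** (-snr_db / 10)
g = 0.5            # fixed effective channel gain seen by the receiver (assumed)
h_min = 0.1        # truncation threshold on |h| keeping average power finite (assumed)

# BPSK symbols and Rayleigh fading known only at the transmitter
s = rng.choice([-1.0, 1.0], size=n_sym)
h = (rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym)) / np.sqrt(2)

# Truncated channel inversion: pre-multiply by g/h whenever |h| is large enough,
# so the receiver simply sees y = g*s + n without needing any CSI.
active = np.abs(h) > h_min
x = np.zeros(n_sym, dtype=complex)
x[active] = (g / h[active]) * s[active]
print("average transmit power:", np.mean(np.abs(x) ** 2))

n = np.sqrt(noise_var / 2) * (rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym))
y = h * x + n

# Fixed-gain AWGN detection on the transmitted symbols (receiver only knows g)
s_hat = np.sign((y[active] / g).real)
print("BER on transmitted symbols:", np.mean(s_hat != s[active]))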
Abstract:
A major drawback of studying diffusion in multi-component systems is the lack of suitable techniques for estimating the diffusion parameters. In this study, a generalized treatment for determining the intrinsic diffusion coefficients in multi-component systems is developed using the concept of a pseudo-binary approach. This is explained with the help of experimentally developed diffusion profiles in the Cu(Sn, Ga) and Cu(Sn, Si) solid solutions.
Abstract:
The inversion of canopy reflectance models is widely used for the retrieval of vegetation properties from remote sensing. This study evaluates the retrieval of the soybean biophysical variables leaf area index, leaf chlorophyll content, canopy chlorophyll content and equivalent leaf water thickness from proximal reflectance data integrated into broadbands corresponding to the Moderate Resolution Imaging Spectroradiometer, Thematic Mapper and Linear Imaging Self Scanning sensors, through inversion of the canopy radiative transfer model PROSAIL. Three inversion approaches, namely the look-up table, the genetic algorithm and the artificial neural network, were used and their performances were evaluated. Application of the genetic algorithm to crop parameter retrieval is a new attempt among the variety of optimization problems in remote sensing and has been successfully demonstrated in the present study. Its performance was as good as that of the look-up table approach, whereas the artificial neural network was a poor performer. The general order of estimation accuracy for parameters, irrespective of inversion approach, was leaf area index > canopy chlorophyll content > leaf chlorophyll content > equivalent leaf water thickness. Performance of the inversion was comparable for the broadband reflectances of all three sensors in the optical region, with insignificant differences in estimation accuracy among them.
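For context, a look-up-table (LUT) inversion of the kind evaluated here amounts to simulating reflectance for many candidate parameter sets with a forward canopy model and picking the set whose spectrum best matches the measurement. The sketch below shows that generic pattern with a toy forward_model standing in for PROSAIL; the sampling ranges, the RMSE cost function and all numbers are assumptions.

import numpy as np

def forward_model(lai, cab):
    """Placeholder canopy reflectance model standing in for PROSAIL.
    Returns reflectance in three broad bands as a toy, monotone function
    of leaf area index (lai) and leaf chlorophyll content (cab)."""
    red = 0.25 * np.exp(-0.3 * lai) * np.exp(-0.01 * cab)
    nir = 0.20 + 0.25 * (1.0 - np.exp(-0.5 * lai))
    swir = 0.15 * np.exp(-0.2 * lai)
    return np.array([red, nir, swir])

rng = np.random.default_rng(0)

# 1) Build the LUT: sample parameters within assumed agronomic ranges
n_entries = 20_000
lai_grid = rng.uniform(0.1, 7.0, n_entries)
cab_grid = rng.uniform(10.0, 80.0, n_entries)
lut_spectra = np.array([forward_model(l, c) for l, c in zip(lai_grid, cab_grid)])

# 2) "Measured" broadband reflectance (simulated with noise for this demo)
true_lai, true_cab = 3.2, 45.0
measured = forward_model(true_lai, true_cab) + rng.normal(0.0, 0.005, 3)

# 3) Invert: pick the LUT entry minimising the RMSE cost function
rmse = np.sqrt(np.mean((lut_spectra - measured) ** 2, axis=1))
best = np.argmin(rmse)
print(f"retrieved LAI ~ {lai_grid[best]:.2f}, Cab ~ {cab_grid[best]:.1f}")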
Abstract:
Probable maximum precipitation (PMP) is a theoretical concept that is widely used by hydrologists to arrive at estimates of the probable maximum flood (PMF), which find use in the planning, design and risk assessment of high-hazard hydrological structures such as flood control dams upstream of populated areas. The PMP represents the greatest depth of precipitation for a given duration that is meteorologically possible for a watershed or an area at a particular time of year, with no allowance made for long-term climatic trends. Various methods are in use for estimating PMP over a target location for different durations; the moisture maximization method and the Hershfield method are two widely used ones. The former maximizes observed storms assuming that the atmospheric moisture would rise to a very high value estimated from the maximum daily dew point temperature. The latter is a statistical method based on the general frequency equation given by Chow. The present study provides one-day PMP estimates and PMP maps for the Mahanadi river basin based on the aforementioned methods. There is a need for such estimates and maps, as the river basin is prone to frequent floods. The utility of the constructed PMP maps in computing PMP for various catchments in the river basin is demonstrated. The PMP estimates can eventually be used to arrive at PMF estimates for those catchments.
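As a reminder of the statistical method named above, the Hershfield approach applies Chow's frequency equation, PMP = mean + K_m * standard deviation, to the annual maximum rainfall series. The sketch below shows the arithmetic; the rainfall values and the frequency factor K_m are placeholders (in practice K_m is read from Hershfield's charts, with adjustments for record length, outliers and fixed observation times).

import numpy as np

# Placeholder annual-maximum one-day rainfall series for a station (mm)
annual_max_1day = np.array([112.0, 95.0, 143.0, 88.0, 160.0, 121.0,
                            105.0, 178.0, 99.0, 134.0, 150.0, 117.0])

k_m = 15.0   # assumed frequency factor, not a value from the paper

x_bar = annual_max_1day.mean()
s_n = annual_max_1day.std(ddof=1)

pmp_1day = x_bar + k_m * s_n
print(f"mean = {x_bar:.1f} mm, std = {s_n:.1f} mm, one-day PMP ~ {pmp_1day:.0f} mm")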
Abstract:
Regional frequency analysis is widely used for estimating quantiles of hydrological extreme events at sparsely gauged or ungauged target sites in river basins. It involves identifying a region (group of watersheds) resembling the watershed of the target site and using information pooled from the region to estimate the quantile for the target site. In the analysis, the watershed of the target site is assumed to completely resemble the watersheds in the identified region in terms of the mechanism underlying the generation of extreme events. In reality, it is rare to find watersheds that completely resemble each other. A fuzzy clustering approach can account for partial resemblance of watersheds and yield region(s) for the target site. Formation of regions and quantile estimation require discerning information from the fuzzy-membership matrix obtained with the approach. Practitioners often defuzzify the matrix to form disjoint clusters (regions) and use them as the basis for quantile estimation. The defuzzification approach (DFA) results in loss of the information discerned on partial resemblance of watersheds. The lost information cannot be utilized in quantile estimation, owing to which the estimates could have significant error. To avert the loss of information, a threshold strategy (TS) was considered in some prior studies. In this study, it is shown analytically that the strategy results in under-prediction of quantiles. To address this, a mathematical approach is proposed, and its effectiveness in estimating flood quantiles relative to DFA and TS is demonstrated through Monte Carlo simulation experiments and a case study on the Mid-Atlantic water resources region, USA.
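To illustrate how fuzzy memberships can be used directly instead of being defuzzified into crisp regions, the sketch below pools at-site growth factors with membership weights in an index-flood setting. This is only an illustrative weighting rule with placeholder numbers, not the mathematical approach proposed in the paper.

import numpy as np

# Placeholder data: growth factors (Q_T / index flood) estimated at gauged sites,
# and each site's fuzzy membership in the cluster containing the target site
growth_factor_T = np.array([2.10, 1.85, 2.40, 1.95, 2.65])   # e.g. T = 100 years
membership = np.array([0.82, 0.10, 0.67, 0.45, 0.91])        # from fuzzy clustering

# Membership-weighted regional growth factor: sites that only partially resemble
# the target watershed still contribute, in proportion to their membership,
# instead of being dropped by defuzzification.
regional_gf = np.sum(membership * growth_factor_T) / np.sum(membership)

index_flood_target = 350.0   # placeholder index flood (m^3/s) at the ungauged target
quantile_target = index_flood_target * regional_gf
print(f"regional growth factor ~ {regional_gf:.2f}, "
      f"100-year quantile ~ {quantile_target:.0f} m^3/s")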
Abstract:
Monte Carlo simulation methods involving splitting of Markov chains have been used to evaluate multi-fold integrals in different application areas. In this paper we examine the performance of these methods in the context of evaluating reliability integrals, from the point of view of characterizing the sampling fluctuations. The methods discussed include the Au-Beck subset simulation, the Holmes-Diaconis-Ross method and the generalized splitting algorithm. A few improvisations based on the first-order reliability method are suggested for selecting the algorithmic parameters of the latter two methods. The bias and sampling variance of the alternative estimators are discussed. An approximation to the sampling distribution of some of these estimators is also obtained. Illustrative examples involving component and series system reliability analyses are presented with a view to bringing out the relative merits of the alternative methods.
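For readers unfamiliar with the Au-Beck method named above: subset simulation estimates a small failure probability as a product of larger conditional probabilities, generating conditional samples at each level by Markov chain Monte Carlo. The sketch below is a bare-bones version in standard normal space with a simple component-wise random-walk Metropolis step; the limit-state function, sample sizes and proposal scale are placeholder assumptions, and production implementations tune these much more carefully.

import numpy as np

rng = np.random.default_rng(0)

def g(u):
    """Placeholder limit-state function in standard normal space (failure when g <= 0)."""
    return 3.5 - (u[..., 0] + u[..., 1]) / np.sqrt(2.0)

def subset_simulation(n=2000, p0=0.1, dim=2, max_levels=10):
    u = rng.standard_normal((n, dim))
    p_f = 1.0
    for _ in range(max_levels):
        gv = g(u)
        n_seed = int(p0 * n)
        idx = np.argsort(gv)                          # smallest g = closest to failure
        threshold = gv[idx[n_seed - 1]]
        if threshold <= 0.0:                          # reached the failure domain
            return p_f * np.mean(gv <= 0.0)
        p_f *= p0
        seeds = u[idx[:n_seed]]
        # Grow chains from the seeds with a random-walk Metropolis step that keeps
        # samples in the intermediate domain {g <= threshold}
        chains = [seeds]
        for _ in range(n // n_seed - 1):
            cur = chains[-1]
            cand = cur + 0.8 * rng.standard_normal(cur.shape)
            # component-wise accept/reject for the standard normal target
            acc = rng.random(cur.shape) < np.exp(0.5 * (cur**2 - cand**2))
            prop = np.where(acc, cand, cur)
            # revert to the current state if the candidate leaves the domain
            prop = np.where((g(prop) <= threshold)[:, None], prop, cur)
            chains.append(prop)
        u = np.vstack(chains)
    return p_f                                        # fallback if failure never reached

print("estimated failure probability:", subset_simulation())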
Abstract:
The study follows an approach for estimating phytomass using recent techniques of remote sensing and digital photogrammetry. It involved a tree inventory of forest plantations in the Bhakra forest range of Nainital district. A panchromatic stereo dataset from Cartosat-1 was evaluated for mean stand height retrieval. Texture analysis and tree-top detection were performed on Quick-Bird PAN data. A composite texture image of mean, variance and contrast with a 5x5 pixel window was found best for separating tree crowns for the assessment of crown areas. The tree-top count obtained by local maxima filtering was found to be 83.4% efficient, with an RMSE of +/-13 for 35 sample plots. The predicted phytomass ranged from 27.01 to 35.08 t/ha for Eucalyptus sp. and from 26.52 to 156 t/ha for Tectona grandis. The correlation between observed and predicted phytomass for Eucalyptus sp. was 0.468, with an RMSE of 5.12. However, the phytomass prediction for Tectona grandis was fairly strong, with R^2 = 0.65 and an RMSE of 9.89, as there was no undergrowth and the crowns were clearly visible. The results of the study show the potential of the Cartosat-1 derived DSM and the Quick-Bird texture image for the estimation of stand height, stem diameter, tree count and phytomass of important timber species.
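Local maxima filtering of the kind used here for tree-top detection flags a pixel when it equals the maximum of its moving window and exceeds a brightness threshold. The sketch below shows that generic pattern on a synthetic tile; the window size, threshold and data are placeholder assumptions, not the paper's settings.

import numpy as np
from scipy.ndimage import maximum_filter

def detect_tree_tops(pan_image, window=5, min_dn=80):
    """Flag pixels that are the maximum of a window x window neighbourhood
    and brighter than min_dn (both values are illustrative, not from the paper)."""
    local_max = maximum_filter(pan_image, size=window) == pan_image
    tops = local_max & (pan_image > min_dn)
    return np.argwhere(tops)        # (row, col) coordinates of candidate tree tops

# Placeholder synthetic panchromatic tile with two artificial bright "crowns"
rng = np.random.default_rng(0)
tile = rng.integers(30, 60, size=(100, 100)).astype(float)
tile[20, 35] = 120.0
tile[64, 70] = 110.0
print(detect_tree_tops(tile, window=5, min_dn=80))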
Abstract:
The main objective of the paper is to develop a new method to estimate the maximum magnitude (M_max) considering the regional rupture character. The proposed method is explained in detail and examined for both an intraplate and an active region. Seismotectonic data were collected for both regions, and seismic study area (SSA) maps were generated for radii of 150, 300 and 500 km. The regional rupture character was established by considering the percentage fault rupture (PFR), which is the ratio of the subsurface rupture length (RLD) to the total fault length (TFL). PFR is used to arrive at RLD, which is in turn used to estimate the maximum magnitude for each seismic source. The maximum magnitude for both regions was estimated and compared with existing methods for determining M_max values. The proposed method gives similar M_max values irrespective of SSA radius and seismicity. Further, seismicity parameters such as the magnitude of completeness (M_c), the 'a' and 'b' parameters and the maximum observed magnitude (M_max,obs) were determined for each SSA and used to estimate M_max with all the existing methods. It is observed from the study that the existing deterministic and probabilistic M_max estimation methods are sensitive to the SSA radius, M_c, the a and b parameters and the M_max,obs values. However, M_max determined from the proposed method is a function of the rupture character rather than of the seismicity parameters. It was also observed that the intraplate region has a lower PFR than the active seismic region.
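The core computation described above is to convert a fault's total length into an expected subsurface rupture length via the percentage fault rupture, and then map that rupture length to a magnitude with an empirical rupture-length scaling relation. The sketch below uses the Wells and Coppersmith (1994) all-slip-type magnitude versus subsurface-rupture-length relation as the scaling law; treating that as the relation used in the paper is an assumption, and the fault length and PFR values are placeholders.

import math

def m_max_from_fault(total_fault_length_km, pfr, a=4.38, b=1.49):
    """Estimate maximum magnitude from regional rupture character.

    total_fault_length_km : mapped total fault length (TFL)
    pfr                   : percentage fault rupture, RLD / TFL (as a fraction)
    a, b                  : coefficients of an empirical magnitude vs. subsurface
                            rupture length relation; defaults are the Wells &
                            Coppersmith (1994) all-slip-type values (assumed choice)
    """
    rld = pfr * total_fault_length_km            # expected subsurface rupture length
    return a + b * math.log10(rld)

# Placeholder example: a 120 km fault in a region where about 30% of a fault
# is observed to rupture in a single event
print(f"M_max ~ {m_max_from_fault(120.0, 0.30):.1f}")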
Abstract:
Event-triggered sampling (ETS) is a new approach towards efficient signal analysis. The goal of ETS need not be only signal reconstruction, but also direct estimation of desired information in the signal through skillful design of the event. We show the promise of the ETS approach for better analysis of oscillatory non-stationary signals modeled by a time-varying sinusoid, compared with existing uniform Nyquist-rate-sampling-based signal processing. We examine samples drawn using ETS, with zero-crossing (ZC), level-crossing (LC) and extrema events, under additive in-band noise and jitter in the detection instant. We find that extrema samples are robust and also facilitate instantaneous amplitude (IA) and instantaneous frequency (IF) estimation in a time-varying sinusoid. The estimation is proposed solely using extrema samples and a local polynomial regression based least-squares fitting approach. The proposed approach shows improvement, for noisy signals, over the widely used analytic signal, energy separation and ZC based approaches (which rely on uniform Nyquist-rate data acquisition and processing). Further, extrema-based ETS in general gives a sub-sampled representation (relative to the Nyquist rate) of a time-varying sinusoid. For the same data-set size captured with extrema-based ETS and with uniform sampling, the former gives much better IA and IF estimation.
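To make the extrema-sampling idea concrete: for a time-varying sinusoid the signal magnitude at an extremum approximates the instantaneous amplitude there, and consecutive extrema are roughly half a local period apart, which gives the instantaneous frequency; least-squares polynomial fits through those samples then yield smooth IA/IF tracks. The sketch below follows that recipe with a simple global polynomial fit rather than the paper's local polynomial regression, and the test signal and fit orders are placeholder assumptions.

import numpy as np

# Placeholder AM-FM test signal, sampled finely so extrema can be located
fs = 8000.0
t = np.arange(0, 1.0, 1.0 / fs)
ia_true = 1.0 + 0.5 * np.sin(2 * np.pi * 2 * t)      # slow amplitude modulation
if_true = 100.0 + 30.0 * t                            # linear chirp (Hz)
phase = 2 * np.pi * np.cumsum(if_true) / fs
x = ia_true * np.cos(phase)

# Event-triggered "samples": keep only the extrema (sign change of the derivative)
dx = np.diff(x)
ext = np.where(np.sign(dx[:-1]) * np.sign(dx[1:]) < 0)[0] + 1
t_ext, x_ext = t[ext], x[ext]

# IA from |x| at the extrema; IF from half-period spacing between consecutive extrema
ia_samples = np.abs(x_ext)
if_samples = 1.0 / (2.0 * np.diff(t_ext))
t_if = 0.5 * (t_ext[1:] + t_ext[:-1])

# Least-squares polynomial fits give smooth IA/IF estimates (global fit for brevity)
ia_fit = np.polyval(np.polyfit(t_ext, ia_samples, 6), t)
if_fit = np.polyval(np.polyfit(t_if, if_samples, 2), t)
print("mean |IA error|:", np.mean(np.abs(ia_fit - ia_true)))
print("mean |IF error| (Hz):", np.mean(np.abs(if_fit - if_true)))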
Abstract:
NMR spectroscopy is a powerful means of studying liquid-crystalline systems at atomic resolution. Of the many parameters that can provide information on the dynamics and order of these systems, 1H-13C dipolar couplings are an important means of obtaining such information. Depending on the details of the molecular structure and the magnitude of the order parameters, the dipolar couplings can vary over a wide range of values. Thus the method employed to estimate the dipolar couplings should be capable of estimating both large and small dipolar couplings at the same time. For this purpose, we consider here a two-dimensional NMR experiment that works similarly to the insensitive nuclei enhanced by polarization transfer (INEPT) experiment in solution. With the incorporation of a modification proposed earlier for experiments with low radio-frequency power, the scheme is observed to enable a wide range of dipolar couplings to be estimated simultaneously. We utilized this approach to obtain dipolar couplings in a liquid crystal with phenyl rings attached to either end of the molecule, and estimated its local order parameters.
Abstract:
Background: In the post-genomic era where sequences are being determined at a rapid rate, we are highly reliant on computational methods for their tentative biochemical characterization. The Pfam database currently contains 3,786 families corresponding to ``Domains of Unknown Function'' (DUF) or ``Uncharacterized Protein Family'' (UPF), of which 3,087 families have no reported three-dimensional structure, constituting almost one-fourth of the known protein families in search for both structure and function. Results: We applied a `computational structural genomics' approach using five state-of-the-art remote similarity detection methods to detect the relationship between uncharacterized DUFs and domain families of known structures. The association with a structural domain family could serve as a start point in elucidating the function of a DUF. Amongst these five methods, searches in SCOP-NrichD database have been applied for the first time. Predictions were classified into high, medium and low-confidence based on the consensus of results from various approaches and also annotated with enzyme and Gene ontology terms. 614 uncharacterized DUFs could be associated with a known structural domain, of which high confidence predictions, involving at least four methods, were made for 54 families. These structure-function relationships for the 614 DUF families can be accessed on-line at http://proline.biochem.iisc.ernet.in/RHD_DUFS/. For potential enzymes in this set, we assessed their compatibility with the associated fold and performed detailed structural and functional annotation by examining alignments and extent of conservation of functional residues. Detailed discussion is provided for interesting assignments for DUF3050, DUF1636, DUF1572, DUF2092 and DUF659. Conclusions: This study provides insights into the structure and potential function for nearly 20 % of the DUFs. Use of different computational approaches enables us to reliably recognize distant relationships, especially when they converge to a common assignment because the methods are often complementary. We observe that while pointers to the structural domain can offer the right clues to the function of a protein, recognition of its precise functional role is still `non-trivial' with many DUF domains conserving only some of the critical residues. It is not clear whether these are functional vestiges or instances involving alternate substrates and interacting partners. Reviewers: This article was reviewed by Drs Eugene Koonin, Frank Eisenhaber and Srikrishna Subramanian.
Abstract:
Molecular mechanics based finite element analysis is adopted in the current work to evaluate the mechanical properties of zigzag, armchair and chiral single-wall carbon nanotubes (SWCNTs) of different diameters and chiralities. Three different types of atomic bonds are considered in the carbon nanotube system: the carbon-carbon covalent bond and two types of carbon-carbon van der Waals bonds. The stiffness values of these bonds are calculated using the corresponding molecular potentials, namely the Morse potential function and the Lennard-Jones interaction potential function, and these stiffnesses are assigned to spring elements in the finite element model of the CNT. The geometry of the CNT is built using a macro developed for the finite element analysis software. The finite element model of the CNT is constructed, appropriate boundary conditions are applied, and the mechanical properties of the CNT are studied.
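The spring stiffnesses mentioned above come from the curvature of the interatomic potentials at their equilibrium separations: for the Morse potential the analytic result is k = 2 D_e beta^2, while for the Lennard-Jones potential the stiffness can be obtained from the second derivative at the potential minimum. The sketch below evaluates both; the parameter values are commonly quoted figures for carbon and should be treated as placeholders rather than the paper's exact inputs.

import numpy as np

# Placeholder potential parameters (assumed values, not the paper's inputs)
D_e = 6.03e-19        # Morse well depth for the C-C covalent bond, J
beta = 2.625e10       # Morse width parameter, 1/m
eps = 3.86e-22        # Lennard-Jones well depth for a C...C van der Waals pair, J
sigma = 3.4e-10       # Lennard-Jones length parameter, m

# Covalent-bond spring stiffness: curvature of the Morse potential at equilibrium
k_covalent = 2.0 * D_e * beta**2                      # N/m

# van der Waals spring stiffness: numerical curvature of the LJ potential
# at its minimum r_m = 2^(1/6) * sigma
def lj(r):
    return 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

r_m = 2.0 ** (1.0 / 6.0) * sigma
h = 1e-14
k_vdw = (lj(r_m + h) - 2.0 * lj(r_m) + lj(r_m - h)) / h**2   # N/m

print(f"covalent C-C spring stiffness ~ {k_covalent:.0f} N/m")
print(f"van der Waals spring stiffness ~ {k_vdw:.3f} N/m")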
Abstract:
This paper presents a comprehensive and robust strategy for the estimation of battery model parameters from noise-corrupted data. The deficiencies of existing methods for parameter estimation are studied, and the proposed strategy improves on them by working optimally for low as well as high discharge currents, providing accurate estimates even under high levels of noise, and doing so over a wide range of initial values. Testing on different data sets confirms the performance of the proposed parameter estimation strategy.
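For context, a generic way to estimate equivalent-circuit battery parameters from noisy discharge data is nonlinear least squares on the terminal-voltage response of a first-order Thevenin (RC) model. The sketch below shows that pattern on synthetic constant-current data; the model form, parameter values, bounds and initial guess are all assumptions for illustration, not the strategy proposed in the paper.

import numpy as np
from scipy.optimize import least_squares

# First-order Thevenin model: V(t) = OCV - I*R0 - I*R1*(1 - exp(-t / (R1*C1)))
def terminal_voltage(params, t, current):
    ocv, r0, r1, c1 = params
    return ocv - current * r0 - current * r1 * (1.0 - np.exp(-t / (r1 * c1)))

# Synthetic constant-current discharge data with noise (placeholder ground truth)
rng = np.random.default_rng(0)
t = np.linspace(0.0, 600.0, 300)                       # s
current = 2.0                                          # A
true = np.array([3.7, 0.05, 0.03, 1200.0])             # OCV [V], R0 [ohm], R1 [ohm], C1 [F]
v_meas = terminal_voltage(true, t, current) + rng.normal(0.0, 0.005, t.size)

# Nonlinear least-squares fit from a deliberately poor initial guess,
# with simple bounds to keep the parameters physical
def residuals(params):
    return terminal_voltage(params, t, current) - v_meas

fit = least_squares(residuals, x0=[3.5, 0.1, 0.1, 500.0],
                    bounds=([3.0, 1e-4, 1e-4, 10.0], [4.2, 1.0, 1.0, 1e5]))
print("estimated [OCV, R0, R1, C1]:", np.round(fit.x, 4))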