261 results for polarisation estimation
Abstract:
Adapting the power of secondary users (SUs) while adhering to constraints on the interference caused to primary receivers (PRxs) is a critical issue in underlay cognitive radio (CR). This adaptation is driven by the interference and transmit power constraints imposed on the secondary transmitter (STx). Its performance also depends on the quality of channel state information (CSI) available at the STx of the links from the STx to the secondary receiver and to the PRxs. For a system in which an STx is subject to an average interference constraint or an interference outage probability constraint at each of the PRxs, we derive novel symbol error probability (SEP)-optimal, practically motivated binary transmit power control policies. As a reference, we also present the corresponding SEP-optimal continuous transmit power control policies for one PRx. We then analyze the robustness of the optimal policies when the STx knows noisy channel estimates of the links between the SU and the PRxs. Altogether, our work develops a holistic understanding of the critical role played by different transmit and interference constraints in driving power control in underlay CR and the impact of CSI on its performance.
Abstract:
This paper proposes an optical flow algorithm that adapts approximate nearest neighbor fields (ANNF) to obtain pixel-level optical flow between images in a sequence. Patch-similarity-based coherency refinement is applied to the ANNF maps. The mapping between the two images is further improved by fusing bidirectional ANNF maps between each pair of images, yielding a highly accurate pixel-level flow. Using pyramidal cost optimization, the pixel-level optical flow is then refined to sub-pixel accuracy. The proposed approach is evaluated on the Middlebury dataset, where its performance is comparable with state-of-the-art approaches. Furthermore, the proposed approach can compute large-displacement optical flow, as evaluated on the MPI Sintel dataset.
Abstract:
The electromagnetic articulography (EMA) technique records the kinematics of different articulators while a person speaks. EMA data often contain missing segments due to sensor failure. In this work, we propose maximum a-posteriori (MAP) estimation with a continuity constraint to recover the missing samples in articulatory trajectories recorded using EMA. This approach combines the benefits of statistical MAP estimation with the temporal continuity of articulatory trajectories. Experiments on an articulatory corpus with different missing-segment durations show that the proposed continuity constraint yields a 30% reduction in average root mean squared estimation error relative to statistical estimation of missing segments without any continuity constraint.
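The continuity idea in the abstract above can be illustrated by Gaussian conditioning: for a jointly Gaussian trajectory, the MAP estimate of a missing block given the observed samples is the conditional mean. The sketch below is a toy stand-in, not the authors' method: the squared-exponential covariance, the length scale, and the synthetic sinusoidal trajectory are all illustrative assumptions.

```python
import numpy as np

def map_fill(y, missing, length_scale=5.0, noise=1e-6):
    """MAP recovery of a missing block under a smooth Gaussian prior.

    The conditional mean K_mo @ K_oo^{-1} y_obs is the MAP estimate of the
    missing samples; the covariance's length scale encodes continuity.
    """
    t = np.arange(len(y))
    K = np.exp(-0.5 * ((t[:, None] - t[None, :]) / length_scale) ** 2)
    obs = ~missing
    K_oo = K[np.ix_(obs, obs)] + noise * np.eye(obs.sum())  # jitter for stability
    K_mo = K[np.ix_(missing, obs)]
    filled = y.copy()
    filled[missing] = K_mo @ np.linalg.solve(K_oo, y[obs])
    return filled

# Synthetic articulatory-like trajectory with a 10-sample sensor dropout
t = np.arange(100)
traj = np.sin(2 * np.pi * t / 40.0)
missing = (t >= 45) & (t < 55)
corrupt = traj.copy()
corrupt[missing] = 0.0
recovered = map_fill(corrupt, missing)
```

The observed samples are left untouched; only the gap is replaced by the conditional mean of the prior.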
Abstract:
Materials with widely varying molecular topologies that exhibit liquid crystalline properties have attracted considerable attention in recent years, and C-13 NMR spectroscopy is a convenient method for studying such novel systems. In this approach, assignment of the spectrum is the non-trivial first step. Towards this end, we propose a method that enables the carbon skeleton of the different sub-units of the molecule to be traced unambiguously. The method uses a heteronuclear correlation experiment to detect pairs of nearby carbons with attached protons in the liquid crystalline core, through correlation of the carbon chemical shifts to the double-quantum coherences of protons generated by the dipolar coupling between them. Supplemented by experiments that identify non-protonated carbons, the method leads to a complete assignment of the spectrum. We first apply the method to assign the C-13 spectrum of the liquid crystal 4-n-pentyl-4'-cyanobiphenyl oriented in the magnetic field. We then use it to assign the aromatic carbon signals of a thiophene-based liquid crystal, thereby enabling the local order parameters of the molecule to be estimated and the mutual orientation of the different sub-units to be obtained.
Abstract:
A simple method employing an optical probe is presented to measure density variations in a hypersonic flow obstructed by a test model in a typical shock tunnel. The probe has a plane light wave trans-illuminating the flow and casting a shadow of a random dot pattern. Local slopes of the distorted wavefront are obtained from shifts of the dots in the pattern. Local shifts in the dots are accurately measured by cross-correlating local shifted shadows with the corresponding unshifted originals. The measured slopes are suitably unwrapped by using a discrete cosine transform based phase unwrapping procedure and also through iterative procedures. The unwrapped phase information is used in an iterative scheme for a full quantitative recovery of density distribution in the shock around the model through refraction tomographic inversion. Hypersonic flow field parameters around a missile shaped body at a free-stream Mach number of 5.8 measured using this technique are compared with the numerically estimated values. (C) 2014 Society of Photo-Optical Instrumentation Engineers (SPIE)
Abstract:
Storage of water within a river basin is often estimated by analyzing recession flow curves, as it cannot be `instantly' measured with available technologies. In this study we explicitly address the estimation of `drainable' storage, which equals the area under the `complete' recession flow curve (i.e. a discharge vs. time curve in which discharge decreases continuously until it approaches zero). A major challenge is that recession curves are rarely `complete' owing to short inter-storm time intervals, so recession flows must be analyzed and modeled meaningfully. We adopt the well-known Brutsaert and Nieber analytical method, which expresses the time derivative of discharge (dQ/dt) as a power-law function of Q: -dQ/dt = kQ^alpha. However, dQ/dt-Q analysis is not suitable for late recession flows. Traditional studies often compute alpha from early recession flows and assume its value is constant for the whole recession event, but this approach gives unrealistic results when alpha >= 2, a common case. We address this issue using the recently proposed geomorphological recession flow model (GRFM), which exploits the dynamics of active drainage networks. According to the model, alpha is close to 2 for early recession flows and 0 for late recession flows. We then derive a simple expression for drainable storage in terms of the power-law coefficient k, obtained by considering early recession flows only, and basin area. Using 121 complete recession curves from 27 USGS basins, we show that predicted drainable storage matches observed drainable storage well, indicating that the model can also reliably estimate drainable storage for `incomplete' recession events and thereby address many challenges related to water resources. (C) 2014 Elsevier Ltd. All rights reserved.
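The Brutsaert-Nieber analysis described above, fitting -dQ/dt = kQ^alpha in log-log space, can be sketched as follows. The synthetic recession series (generated with alpha = 2, consistent with the early-recession behavior the abstract mentions) and all variable names are illustrative assumptions, not data from the study.

```python
import numpy as np

def fit_recession(q, dt=1.0):
    """Estimate (k, alpha) in -dQ/dt = k * Q**alpha from a recession series.

    A straight-line fit of log(-dQ/dt) against log(Q) gives slope alpha
    and intercept log(k).
    """
    dqdt = np.diff(q) / dt            # negative during recession
    q_mid = 0.5 * (q[:-1] + q[1:])    # midpoint discharge for each interval
    mask = dqdt < 0
    alpha, log_k = np.polyfit(np.log(q_mid[mask]), np.log(-dqdt[mask]), 1)
    return np.exp(log_k), alpha

# Synthetic early recession obeying -dQ/dt = k Q^2, i.e. Q(t) = Q0 / (1 + k Q0 t)
k_true, q0 = 0.05, 10.0
t = np.arange(0, 50, 1.0)
q = q0 / (1.0 + k_true * q0 * t)
k_est, alpha_est = fit_recession(q)
```

For real streamflow, only the early-recession portion of each event would be passed to the fit, as the abstract notes.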
Abstract:
Estimation of municipal solid waste settlement, and of the contribution of each of its components, is essential for estimating the volume of waste that a landfill can accommodate and for increasing the post-closure usability of the landfill. This article describes an experimental methodology for estimating and separating primary settlement, settlement owing to mechanical creep, and biodegradation-induced settlement. Primary and secondary settlement were estimated and separated based on the time for 100% pore pressure dissipation and the coefficient of consolidation. Mechanical creep and biodegradation settlements were estimated and separated based on the observed time required for landfill gas production. A series of laboratory triaxial tests, creep tests and anaerobic reactor cell experiments was conducted to characterize the components of settlement. All tests were conducted on municipal solid waste (compost reject) samples. Biodegradation accounted for more than 40% of the total settlement, whereas mechanical creep contributed more than 20%. The essential model parameters, namely the compression ratio (C_c'), rate of mechanical creep (c), coefficient of mechanical creep (b), rate of biodegradation (d) and the total strain owing to biodegradation (epsilon_DG), are useful for estimating total settlement as well as its components in a landfill.
Abstract:
This paper addresses the problem of intercepting highly maneuverable threats using seeker-less interceptors that operate in the command guidance mode. Such systems are more prone to estimation errors than standard seeker-based systems. An integrated estimation/guidance (IEG) algorithm, which combines an interactive multiple model (IMM) estimator with a differential game guidance law (DGL), is proposed for seeker-less interception. In this interception scenario, the target performs an evasive bang-bang maneuver, the sensor provides noisy measurements, and the interceptor is subject to an acceleration bound. The IMM serves as a basis for synthesizing efficient filters that track maneuvering targets and reduce estimation errors. The proposed game-based guidance law for two-dimensional interception, later extended to three-dimensional scenarios, improves the endgame performance of the command-guided seeker-less interceptor. The IMM scheme and an optimal selection of filters, catering to the various maneuvers expected during the endgame, are also described. Furthermore, a chatter removal algorithm is introduced, yielding a modified differential game guidance law (modified DGL). A comparison between the modified DGL and the conventional proportional navigation guidance law demonstrates a significant improvement in miss distance in a pursuer-evader scenario. Simulation results are also presented for varying flight-path-angle errors. A numerical study demonstrates the performance of the combined interactive multiple model and game-based guidance law (IMM/DGL). A simulation study of the combined IMM and modified DGL (IMM/modified DGL) exhibits the superior performance and viability of the algorithm in reducing the chattering phenomenon. The results are supported by an extensive Monte Carlo simulation study in the presence of estimation errors.
Abstract:
A major drawback in studying diffusion in multi-component systems is the lack of suitable techniques for estimating the diffusion parameters. In this study, a generalized treatment for determining the intrinsic diffusion coefficients in multi-component systems is developed utilizing the concept of a pseudo-binary approach. This is illustrated with the help of experimentally developed diffusion profiles in the Cu(Sn, Ga) and Cu(Sn, Si) solid solutions. (C) 2015 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
Abstract:
The inversion of canopy reflectance models is widely used for the retrieval of vegetation properties from remote sensing. This study evaluates the retrieval of the soybean biophysical variables leaf area index, leaf chlorophyll content, canopy chlorophyll content, and equivalent leaf water thickness from proximal reflectance data integrated to broadbands corresponding to the moderate resolution imaging spectroradiometer, thematic mapper, and linear imaging self-scanning sensors, through inversion of the canopy radiative transfer model PROSAIL. Three inversion approaches, namely a look-up table, a genetic algorithm, and an artificial neural network, were used and their performances evaluated. Applying a genetic algorithm to crop parameter retrieval is a new attempt among the variety of optimization problems in remote sensing, and it is successfully demonstrated in the present study: its performance was as good as that of the look-up table approach, whereas the artificial neural network performed poorly. Irrespective of the inversion approach, the general order of estimation accuracy for the parameters was leaf area index > canopy chlorophyll content > leaf chlorophyll content > equivalent leaf water thickness. Inversion performance was comparable for the broadband reflectances of all three sensors in the optical region, with insignificant differences in estimation accuracy among them.
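A look-up-table inversion of a canopy reflectance model, the first of the three approaches mentioned above, can be sketched as follows. A toy linear forward model stands in for PROSAIL here, and the two retrieved variables, the band count, and the parameter ranges are illustrative assumptions, not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_forward(lai, cab):
    """Stand-in canopy model: four broadband reflectances from LAI and chlorophyll."""
    return np.array([0.05 + 0.02 * lai,
                     0.30 - 0.002 * cab,
                     0.40 + 0.03 * lai - 0.001 * cab,
                     0.25 + 0.01 * lai])

# Build the LUT by sampling the parameter space with the forward model
lai_grid = rng.uniform(0.5, 7.0, 5000)
cab_grid = rng.uniform(10.0, 80.0, 5000)
lut = np.array([toy_forward(l, c) for l, c in zip(lai_grid, cab_grid)])

# Invert an "observed" spectrum: pick the LUT entry with minimum RMSE
observed = toy_forward(3.2, 45.0) + rng.normal(0.0, 0.002, 4)
best = np.argmin(np.sqrt(np.mean((lut - observed) ** 2, axis=1)))
lai_hat, cab_hat = lai_grid[best], cab_grid[best]
```

In practice the LUT would be populated by PROSAIL runs over realistic parameter distributions, and several best-matching entries are often averaged rather than taking the single minimum.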
Abstract:
Probable maximum precipitation (PMP) is a theoretical concept widely used by hydrologists to arrive at estimates of the probable maximum flood (PMF), which find use in the planning, design and risk assessment of high-hazard hydrological structures such as flood control dams upstream of populated areas. The PMP represents the greatest depth of precipitation for a given duration that is meteorologically possible for a watershed or an area at a particular time of year, with no allowance made for long-term climatic trends. Various methods are in use for estimating PMP over a target location for different durations; the moisture maximization method and the Hershfield method are two widely used ones. The former maximizes observed storms by assuming that the atmospheric moisture can rise to a very high value estimated from the maximum daily dew point temperature. The latter is a statistical method based on the general frequency equation given by Chow. The present study provides one-day PMP estimates and PMP maps for the Mahanadi river basin based on the aforementioned methods. There is a need for such estimates and maps, as the river basin is prone to frequent floods. The utility of the constructed PMP maps in computing PMP for various catchments in the river basin is demonstrated, and the PMP estimates can eventually be used to arrive at PMF estimates for those catchments. (C) 2015 The Authors. Published by Elsevier B.V.
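The Hershfield method mentioned above rests on Chow's general frequency equation, X_T = X_mean + K * S. A minimal sketch, assuming an illustrative frequency factor K_m = 15 and a hypothetical rainfall series; operational studies also adjust the sample mean and standard deviation for outliers, record length, and fixed observation intervals.

```python
import numpy as np

def hershfield_pmp(annual_max_1day, k_m=15.0):
    """One-day PMP estimate (mm) from annual maximum daily rainfall via
    Chow's frequency equation: PMP = mean + K_m * sample standard deviation."""
    x = np.asarray(annual_max_1day, dtype=float)
    return x.mean() + k_m * x.std(ddof=1)

# Hypothetical annual-maximum one-day rainfall series (mm)
series = [112, 95, 140, 87, 160, 103, 121, 99, 134, 150]
pmp = hershfield_pmp(series)
```

The frequency factor is the key assumption: Hershfield's original envelope value was derived from thousands of stations, and regional studies typically adopt a smaller, locally calibrated K_m.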
Abstract:
Regional frequency analysis is widely used for estimating quantiles of extreme hydrological events at sparsely gauged or ungauged target sites in river basins. It involves identifying a region (group of watersheds) resembling the watershed of the target site and using information pooled from the region to estimate the quantile for the target site. The analysis assumes that the watershed of the target site completely resembles the watersheds in the identified region in terms of the mechanism generating the extreme event. In reality, watersheds rarely resemble each other completely. A fuzzy clustering approach can account for partial resemblance of watersheds and yield region(s) for the target site. Forming regions and estimating quantiles requires discerning information from the fuzzy-membership matrix obtained with this approach. Practitioners often defuzzify the matrix to form disjoint clusters (regions) and use them as the basis for quantile estimation. This defuzzification approach (DFA) loses the information discerned on partial resemblance of watersheds; because the lost information cannot be utilized in quantile estimation, the estimates can have significant error. To avert this loss of information, a threshold strategy (TS) was considered in some prior studies. In this study, it is shown analytically that the strategy results in under-prediction of quantiles. To address this, a mathematical approach is proposed, and its effectiveness in estimating flood quantiles relative to DFA and TS is demonstrated through Monte Carlo simulation experiments and a case study on the Mid-Atlantic water resources region, USA. (C) 2015 Elsevier B.V. All rights reserved.
Abstract:
Monte Carlo simulation methods involving the splitting of Markov chains have been used to evaluate multi-fold integrals in different application areas. In this paper we examine the performance of these methods in the context of evaluating reliability integrals, from the point of view of characterizing the sampling fluctuations. The methods discussed include Au-Beck subset simulation, the Holmes-Diaconis-Ross method, and the generalized splitting algorithm. A few improvisations based on the first order reliability method are suggested for selecting the algorithmic parameters of the latter two methods. The bias and sampling variance of the alternative estimators are discussed, and an approximation to the sampling distribution of some of these estimators is obtained. Illustrative examples involving component and series system reliability analyses are presented with a view to bringing out the relative merits of the alternative methods. (C) 2015 Elsevier Ltd. All rights reserved.
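A compact sketch in the spirit of the Au-Beck subset simulation mentioned above: a small failure probability P(g(X) <= 0) is estimated as a product of conditional level probabilities, with a modified-Metropolis move advancing samples between levels. The limit state function, level fraction p0, proposal spread, and sample sizes are illustrative assumptions, not choices from the paper.

```python
import numpy as np

def subset_simulation(g, dim, n=1000, p0=0.1, max_levels=10, seed=None):
    """Estimate P(g(X) <= 0) for standard-normal X via level splitting."""
    rng = np.random.default_rng(seed)
    nf = int(round(p0 * n))                     # seeds retained per level
    x = rng.standard_normal((n, dim))
    y = np.array([g(xi) for xi in x])
    p = 1.0
    for _ in range(max_levels):
        order = np.argsort(y)
        thresh = y[order[nf - 1]]               # p0-quantile of g defines next level
        if thresh <= 0:                         # failure region reached
            break
        p *= p0
        seeds = x[order[:nf]]                   # samples already inside {g <= thresh}
        x_next, y_next = [], []
        # One Metropolis move per replicated seed, restricted to {g <= thresh}
        for s in np.repeat(seeds, n // nf, axis=0):
            cand = s + 0.8 * rng.standard_normal(dim)
            ratio = np.exp(0.5 * (s @ s - cand @ cand))   # standard-normal density ratio
            if rng.random() < min(1.0, ratio) and g(cand) <= thresh:
                s = cand
            x_next.append(s)
            y_next.append(g(s))
        x, y = np.array(x_next), np.array(y_next)
    return p * np.mean(y <= 0)

# Component reliability example: failure when X1 exceeds 3.5
# (the exact probability is the standard-normal tail 1 - Phi(3.5) ~ 2.3e-4)
p_hat = subset_simulation(lambda x: 3.5 - x[0], dim=2, seed=1)
```

A production implementation would run longer chains from each seed and adapt the proposal spread per level; the single-move chains here keep the sketch short at the cost of higher estimator variance.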
Abstract:
The study follows an approach to estimating phytomass using recent techniques of remote sensing and digital photogrammetry. It involved a tree inventory of forest plantations in the Bhakra forest range of Nainital district. A panchromatic stereo dataset from Cartosat-1 was evaluated for mean stand height retrieval. Texture analysis and tree-top detection were performed on QuickBird PAN data. A composite texture image of mean, variance and contrast with a 5x5 pixel window was found best for separating tree crowns when assessing crown areas. The tree-top count obtained by local maxima filtering was found to be 83.4% efficient, with an RMSE of +/-13 for 35 sample plots. The predicted phytomass ranged from 27.01 to 35.08 t/ha for Eucalyptus sp. and from 26.52 to 156 t/ha for Tectona grandis. The correlation between observed and predicted phytomass for Eucalyptus sp. was 0.468, with an RMSE of 5.12. The phytomass prediction for Tectona grandis was fairly strong, with R^2 = 0.65 and an RMSE of 9.89, as there was no undergrowth and the crowns were clearly visible. The results show the potential of the Cartosat-1-derived DSM and QuickBird texture imagery for estimating stand height, stem diameter, tree count and phytomass of important timber species.
Abstract:
The main objective of this paper is to develop a new method for estimating the maximum magnitude (M_max) considering the regional rupture character. The proposed method is explained in detail and examined for both an intraplate region and an active region. Seismotectonic data were collected for both regions, and seismic study area (SSA) maps were generated for radii of 150, 300, and 500 km. The regional rupture character was established by considering the percentage fault rupture (PFR), which is the ratio of subsurface rupture length (RLD) to total fault length (TFL). PFR is used to arrive at RLD, which in turn is used to estimate the maximum magnitude for each seismic source. The maximum magnitude for both regions was estimated and compared with existing methods for determining M_max values. The proposed method gives similar M_max values irrespective of SSA radius and seismicity. Further, seismicity parameters such as the magnitude of completeness (M_c), the `a' and `b' parameters, and the maximum observed magnitude (M_max^obs) were determined for each SSA and used to estimate M_max by all the existing methods. It is observed that existing deterministic and probabilistic M_max estimation methods are sensitive to the SSA radius, M_c, the a and b parameters, and M_max^obs, whereas M_max determined from the proposed method is a function of the rupture character rather than of the seismicity parameters. It was also observed that the intraplate region has a lower PFR than the active seismic region.
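The chain from fault length to M_max described above (PFR = RLD/TFL, then a magnitude-rupture-length relation) can be sketched as follows. The Wells-Coppersmith (1994) subsurface-rupture-length coefficients (4.38, 1.49) used here are an illustrative choice and may differ from the paper's regression; the fault length and PFR values are hypothetical.

```python
import math

def m_max_from_fault(total_fault_length_km, pfr):
    """Maximum magnitude from rupture character: RLD = PFR * TFL, then an
    empirical magnitude-subsurface-rupture-length regression (illustrative
    Wells-Coppersmith all-slip-type coefficients)."""
    rld = pfr * total_fault_length_km
    return 4.38 + 1.49 * math.log10(rld)

# Hypothetical 100 km fault with a regional PFR of 0.25 (25 km rupture)
m = m_max_from_fault(100.0, 0.25)
```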