164 results for Probabilistic methods
Abstract:
In this paper, we present novel precoding methods for multiuser Rayleigh fading multiple-input-multiple-output (MIMO) systems when channel state information (CSI) is available at the transmitter (CSIT) but not at the receiver (CSIR). Such a scenario is relevant, for example, in time-division duplex (TDD) MIMO communications, where, due to channel reciprocity, CSIT can be directly acquired by sending a training sequence from the receiver to the transmitter(s). We propose three transmit precoding schemes that convert the fading MIMO channel into a fixed-gain additive white Gaussian noise (AWGN) channel while satisfying an average power constraint. We also extend one of the precoding schemes to the multiuser Rayleigh fading multiple-access channel (MAC), broadcast channel (BC), and interference channel (IC). The proposed schemes convert the fading MIMO channel into fixed-gain parallel AWGN channels in all three cases. Hence, they achieve an infinite diversity order, which is in sharp contrast to schemes based on perfect CSIR and no CSIT, which, at best, achieve a finite diversity order. Further, we show that a polynomial diversity order is retained, even in the presence of channel estimation errors at the transmitter. Monte Carlo simulations illustrate the bit error rate (BER) performance obtainable from the proposed precoding scheme compared with existing transmit precoding schemes.
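The core idea of converting a fading channel into a fixed-gain AWGN channel can be illustrated, in a much-simplified scalar form, by truncated channel inversion: the transmitter, which knows the channel, pre-scales each symbol by the inverse of the fading gain, skipping deep fades so the average power stays bounded. This is only a sketch of the general principle, not the paper's actual MIMO precoders, and the threshold and parameter values are illustrative.

```python
import math
import random

def invert_precoder(h, target_gain=1.0, threshold=0.1):
    """Channel-inverting precoder: scale the symbol by target_gain / h so the
    receiver sees a fixed gain without needing CSI. Transmission is skipped
    (outage) when |h| falls below the threshold, keeping average power finite."""
    if abs(h) < threshold:
        return None  # outage: inverting a deep fade would blow up the power
    return target_gain / h

random.seed(0)
# Rayleigh fading magnitudes: |h| = sqrt(x^2 + y^2)/sqrt(2), with x, y ~ N(0,1)
gains = []
for _ in range(1000):
    h = math.hypot(random.gauss(0, 1), random.gauss(0, 1)) / math.sqrt(2)
    w = invert_precoder(h)
    if w is not None:
        gains.append(h * w)  # effective channel gain seen by the receiver
```

Every non-outage channel use ends up with the same effective gain, which is what makes receiver-side CSI unnecessary in this toy model.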
Abstract:
Precise information on streamflows is of major importance for planning and monitoring of water resources schemes related to hydropower, water supply, irrigation, and flood control, and for maintaining ecosystems. Engineers encounter challenges when streamflow data are either unavailable or inadequate at target locations. To address these challenges, there have been efforts to develop methodologies that facilitate prediction of streamflow at ungauged sites. Conventionally, time-intensive and data-exhaustive rainfall-runoff models are used to arrive at streamflow at ungauged sites. Most recent studies show improved methods based on regionalization using Flow Duration Curves (FDCs). An FDC is a graphical representation of streamflow variability: a plot of streamflow values against their corresponding exceedance probabilities, which are determined using a plotting position formula. It provides information on the percentage of time any specified magnitude of streamflow is equaled or exceeded. The present study assesses the effectiveness of two methods to predict streamflow at ungauged sites by application to catchments in the Mahanadi river basin, India. The methods considered are (i) the Regional Flow Duration Curve method, and (ii) the Area Ratio method. The first method involves (a) the development of regression relationships between percentile flows and attributes of catchments in the study area, (b) use of the relationships to construct a regional FDC for the ungauged site, and (c) use of a spatial interpolation technique to decode the information in the FDC to construct a streamflow time series for the ungauged site. The Area Ratio method is conventionally used to transfer streamflow-related information from gauged sites to ungauged sites. Attributes considered for the analysis include variables representing hydrology, climatology, topography, land-use/land-cover and soil properties corresponding to catchments in the study area.
Effectiveness of the presented methods is assessed using jackknife cross-validation. Conclusions based on the study are presented and discussed. (C) 2015 The Authors. Published by Elsevier B.V.
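As a rough sketch of the two ingredients named above, the snippet below builds an empirical FDC using the Weibull plotting position p = m/(n+1) (one common choice of plotting position formula; the paper may use another) and applies the Area Ratio transfer. The flow values and catchment areas are made up for illustration.

```python
def flow_duration_curve(flows):
    """Empirical FDC: each flow paired with its exceedance probability from the
    Weibull plotting position p = m / (n + 1), m = rank in descending order."""
    ranked = sorted(flows, reverse=True)
    n = len(ranked)
    return [(m / (n + 1.0), q) for m, q in enumerate(ranked, start=1)]

def area_ratio_transfer(q_gauged, area_ungauged, area_gauged):
    """Area Ratio method: scale the gauged streamflow series by the ratio of
    ungauged to gauged catchment areas."""
    return [(area_ungauged / area_gauged) * q for q in q_gauged]

# Illustrative daily flows (m^3/s) at a gauged site
flows = [120.0, 45.0, 300.0, 80.0, 60.0, 150.0, 30.0, 95.0, 200.0]
fdc = flow_duration_curve(flows)
# The largest flow has the smallest exceedance probability, and vice versa.
scaled = area_ratio_transfer([100.0, 50.0], area_ungauged=120.0, area_gauged=240.0)
```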
Abstract:
In this paper, an attempt is made to solve a few problems using the Polynomial Point Collocation Method (PPCM), the Radial Point Collocation Method (RPCM), Smoothed Particle Hydrodynamics (SPH), and the Finite Point Method (FPM). A few observations on the accuracy of these methods are recorded. All the simulations in this paper are three-dimensional linear elastostatic simulations, without accounting for body forces.
Abstract:
We consider the problem of optimizing the workforce of a service system. Adapting the staffing levels in such systems is non-trivial due to large variations in workload, and the large number of system parameters does not allow for a brute-force search. Further, because these parameters change on a weekly basis, the optimization should not take longer than a few hours. Our aim is to find the optimum staffing levels from a discrete high-dimensional parameter set that minimize the long-run average of a single-stage cost function, while adhering to constraints relating to queue stability and service-level agreement (SLA) compliance. The single-stage cost function balances the conflicting objectives of utilizing workers better and attaining the target SLAs. We formulate this problem as a constrained Markov cost process parameterized by the (discrete) staffing levels. We propose novel simultaneous perturbation stochastic approximation (SPSA)-based algorithms for solving this problem. The algorithms include both first-order and second-order methods and incorporate SPSA-based gradient/Hessian estimates for primal descent, while performing dual ascent for the Lagrange multipliers. Both algorithms are online and update the staffing levels in an incremental fashion. Further, they involve a certain generalized smooth projection operator, which is essential to project the continuous-valued worker parameter tuned by our algorithms onto the discrete set. The smoothness is necessary to ensure that the underlying transition dynamics of the constrained Markov cost process are themselves smooth (as a function of the continuous-valued parameter): a critical requirement for proving the convergence of both algorithms. We validate our algorithms via performance simulations based on data from five real-life service systems. For the sake of comparison, we also implement a scatter-search-based algorithm using the state-of-the-art optimization toolkit OptQuest.
From the experiments, we observe that both our algorithms converge empirically and consistently outperform OptQuest in most of the settings considered. This finding, coupled with the computational advantage of our algorithms, makes them amenable for adaptive labor staffing in real-life service systems.
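The defining trick of SPSA, on which the abstract's algorithms rest, is that a gradient estimate in any dimension costs only two function evaluations, using one random simultaneous perturbation of all coordinates. The sketch below shows the vanilla two-sided SPSA estimate and a plain (unconstrained, non-projected) descent loop on a toy cost surface; the paper's actual algorithms add Lagrangian dual ascent and the smooth projection, both omitted here, and the toy cost is illustrative.

```python
import random

def spsa_gradient(f, theta, c=0.1, rng=random):
    """Two-sided SPSA gradient estimate: perturb all coordinates at once with a
    random +/-1 (Bernoulli) direction; only two evaluations of f are needed
    regardless of the dimension of theta."""
    delta = [rng.choice((-1.0, 1.0)) for _ in theta]
    plus = [t + c * d for t, d in zip(theta, delta)]
    minus = [t - c * d for t, d in zip(theta, delta)]
    diff = f(plus) - f(minus)
    return [diff / (2.0 * c * d) for d in delta]

def spsa_minimize(f, theta, steps=200, a=0.05, c=0.1, seed=1):
    """Plain SPSA descent with fixed step sizes (decaying schedules are used
    in practice for provable convergence)."""
    rng = random.Random(seed)
    for _ in range(steps):
        g = spsa_gradient(f, theta, c, rng)
        theta = [t - a * gi for t, gi in zip(theta, g)]
    return theta

# Toy convex staffing-cost surrogate with optimum at (3, 5).
cost = lambda x: (x[0] - 3.0) ** 2 + (x[1] - 5.0) ** 2
theta = spsa_minimize(cost, [0.0, 0.0])
```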
Abstract:
This paper is a study of Multilevel Sinusoidal Pulse Width Modulation (MSPWM) methods: Phase Disposition (PD), Alternate Phase Opposition Disposition (APOD), and Phase Opposition Disposition (POD), applied to a single-phase cascaded H-bridge multilevel inverter. Various factors such as the amplitude modulation index (M_a), the frequency modulation index (M_f), and the phase angle between the carrier and the reference modulating wave (phi) have been considered for simulation. The variation in these factors and their effect on inverter performance are evaluated. Factors such as DC bus utilization, output r.m.s. voltage, total harmonic distortion (%THD), dominant harmonic order, and switching losses are evaluated based on simulation results.
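The level-shifted carrier comparison that underlies PD modulation can be sketched as follows: (num_levels - 1) in-phase triangular carriers tile the range of the reference sine, and the number of carriers the reference exceeds at each instant selects the inverter output step. The parameter values here (M_a = 0.9, M_f = 21, five levels) are illustrative; APOD and POD would differ only in the relative phases of the carriers.

```python
import math

def pd_carriers(num_levels):
    """Phase Disposition: (num_levels - 1) carrier bands, level-shifted to
    tile [-1, 1], with all carriers in phase."""
    bands = num_levels - 1
    height = 2.0 / bands
    return [(-1.0 + k * height, -1.0 + (k + 1) * height) for k in range(bands)]

def triangle(t, mf):
    """Unit triangle wave in [0, 1] at mf times the fundamental frequency."""
    x = (t * mf) % 1.0
    return 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)

def output_level(t, ma=0.9, mf=21, num_levels=5):
    """Compare the sinusoidal reference (amplitude ma) against each
    level-shifted carrier; the count of carriers the reference exceeds
    fixes the output step (0 .. num_levels - 1)."""
    ref = ma * math.sin(2.0 * math.pi * t)
    level = 0
    for lo, hi in pd_carriers(num_levels):
        carrier = lo + (hi - lo) * triangle(t, mf)
        if ref > carrier:
            level += 1
    return level

# One fundamental period sampled at 1000 points.
levels = [output_level(t / 1000.0) for t in range(1000)]
```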
Abstract:
The main objective of the paper is to develop a new method to estimate the maximum magnitude (M_max) considering the regional rupture character. The proposed method is explained in detail and examined for both intraplate and active regions. Seismotectonic data were collected for both regions, and seismic study area (SSA) maps were generated for radii of 150, 300, and 500 km. The regional rupture character was established by considering the percentage fault rupture (PFR), which is the ratio of subsurface rupture length (RLD) to total fault length (TFL). PFR is used to arrive at RLD, which is in turn used for the estimation of the maximum magnitude of each seismic source. The maximum magnitude for both regions was estimated and compared with existing methods for determining M_max values. The proposed method gives similar M_max values irrespective of SSA radius and seismicity. Further, seismicity parameters such as the magnitude of completeness (M_c), the "a" and "b" parameters, and the maximum observed magnitude (M_max(obs)) were determined for each SSA and used to estimate M_max with all the existing methods. It is observed from the study that the existing deterministic and probabilistic M_max estimation methods are sensitive to the SSA radius, M_c, the a and b parameters, and the M_max(obs) values. However, M_max determined from the proposed method is a function of the rupture character rather than of the seismicity parameters. It was also observed that the intraplate region has a lower PFR than the active seismic region.
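A minimal sketch of the proposed chain, RLD = PFR x TFL followed by a rupture-length-to-magnitude regression, is given below. The Wells and Coppersmith (1994) all-slip-types relation M = 4.38 + 1.49 log10(RLD) is used here as a stand-in; the paper may employ a different regression, and the fault length and PFR values are hypothetical.

```python
import math

def rupture_length(pfr, total_fault_length_km):
    """RLD from the regional rupture character: the subsurface rupture length
    is the percentage-fault-rupture fraction of the total fault length."""
    return pfr * total_fault_length_km

def m_max(rld_km):
    """Moment magnitude from subsurface rupture length via the Wells &
    Coppersmith (1994, all slip types) regression M = 4.38 + 1.49*log10(RLD).
    (Quoted here for illustration; the paper's regression may differ.)"""
    return 4.38 + 1.49 * math.log10(rld_km)

# Hypothetical source: a 100 km fault with PFR = 0.26
rld = rupture_length(0.26, 100.0)
mw = m_max(rld)
```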
Abstract:
We revisit the a posteriori error analysis of discontinuous Galerkin methods for the obstacle problem derived in [25]. Under a mild assumption on the trace of the obstacle, we derive a reliable a posteriori error estimator which does not involve min/max functions. A key ingredient in this approach is an auxiliary problem with a discrete obstacle. Applications to various discontinuous Galerkin finite element methods are presented. Numerical experiments show that the new estimator obtained in this article performs better.
Abstract:
Background: In the post-genomic era, where sequences are being determined at a rapid rate, we are highly reliant on computational methods for their tentative biochemical characterization. The Pfam database currently contains 3,786 families corresponding to "Domains of Unknown Function" (DUF) or "Uncharacterized Protein Family" (UPF), of which 3,087 families have no reported three-dimensional structure, constituting almost one-fourth of the known protein families in search of both structure and function. Results: We applied a 'computational structural genomics' approach using five state-of-the-art remote similarity detection methods to detect relationships between uncharacterized DUFs and domain families of known structure. The association with a structural domain family could serve as a starting point in elucidating the function of a DUF. Amongst these five methods, searches in the SCOP-NrichD database have been applied for the first time. Predictions were classified into high, medium and low confidence based on the consensus of results from the various approaches, and were also annotated with enzyme and Gene Ontology terms. 614 uncharacterized DUFs could be associated with a known structural domain, of which high-confidence predictions, involving at least four methods, were made for 54 families. These structure-function relationships for the 614 DUF families can be accessed online at http://proline.biochem.iisc.ernet.in/RHD_DUFS/. For potential enzymes in this set, we assessed their compatibility with the associated fold and performed detailed structural and functional annotation by examining alignments and the extent of conservation of functional residues. Detailed discussion is provided for interesting assignments for DUF3050, DUF1636, DUF1572, DUF2092 and DUF659. Conclusions: This study provides insights into the structure and potential function of nearly 20 % of the DUFs.
Use of different computational approaches enables us to reliably recognize distant relationships, especially when they converge to a common assignment, because the methods are often complementary. We observe that while pointers to the structural domain can offer the right clues to the function of a protein, recognition of its precise functional role is still non-trivial, with many DUF domains conserving only some of the critical residues. It is not clear whether these are functional vestiges or instances involving alternate substrates and interacting partners. Reviewers: This article was reviewed by Drs Eugene Koonin, Frank Eisenhaber and Srikrishna Subramanian.
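The consensus-based confidence tiers can be sketched as a simple voting rule over the five methods. The >= 4 threshold for "high" follows the abstract; the lower tiers, method names and domain-family names below are illustrative assumptions.

```python
def classify_prediction(method_hits):
    """Consensus confidence for a DUF-to-structural-domain assignment.
    method_hits maps each remote-homology method to the domain family it
    proposes (or None if no hit). Returns (best family, confidence tier)."""
    votes = {}
    for fam in method_hits.values():
        if fam is not None:
            votes[fam] = votes.get(fam, 0) + 1
    if not votes:
        return None, "none"
    fam, n = max(votes.items(), key=lambda kv: kv[1])
    # >= 4 agreeing methods -> high confidence (per the abstract);
    # the 'medium'/'low' cut-offs here are illustrative.
    tier = "high" if n >= 4 else "medium" if n >= 2 else "low"
    return fam, tier

# Hypothetical search results for one DUF across five methods.
fam, tier = classify_prediction({
    "method_1": "fam_A", "method_2": "fam_A", "method_3": "fam_A",
    "method_4": "fam_A", "method_5": None,
})
```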
Abstract:
The study introduces two new alternatives for global response sensitivity analysis based on the application of the L2-norm and Hellinger's metric for measuring the distance between two probabilistic models. Both procedures are shown to be capable of treating dependent non-Gaussian random variable models for the input variables. The sensitivity indices based on the L2-norm involve second-order moments of the response and, when applied to the case of an independent and identically distributed sequence of input random variables, are shown to be related to the classical Sobol response sensitivity indices. The analysis based on Hellinger's metric addresses variability across the entire range, or segments, of the response probability density function. This measure is shown to be a conceptually more satisfying alternative to the Kullback-Leibler divergence based analysis reported in the existing literature. Other issues addressed in the study cover Monte Carlo simulation based methods for computing the sensitivity indices and sensitivity analysis with respect to grouped variables. Illustrative examples consist of studies on global sensitivity analysis of the natural frequencies of a random multi-degree-of-freedom system, the response of a nonlinear frame, and the safety margin associated with a nonlinear performance function. (C) 2015 Elsevier Ltd. All rights reserved.
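The Sobol connection mentioned above can be made concrete with a standard Monte Carlo "pick-freeze" estimator of first-order sensitivity indices. This is the generic classical estimator, not the L2-norm or Hellinger-metric indices the paper itself proposes; the test function is a toy linear model whose exact indices are known.

```python
import random

def sobol_first_order(f, dim, n=50000, seed=7):
    """Monte Carlo 'pick-freeze' estimate of first-order Sobol indices:
    S_i = Cov(f(A), f(B with column i taken from A)) / Var(f(A)),
    where A and B are two independent standard-normal sample matrices."""
    rng = random.Random(seed)
    A = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n)]
    B = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n)]
    yA = [f(x) for x in A]
    mean = sum(yA) / n
    var = sum((y - mean) ** 2 for y in yA) / n
    indices = []
    for i in range(dim):
        # 'Freeze' coordinate i from A inside the B sample.
        ABi = [b[:i] + [a[i]] + b[i + 1:] for a, b in zip(A, B)]
        yABi = [f(x) for x in ABi]
        cov = sum(ya * yb for ya, yb in zip(yA, yABi)) / n - mean ** 2
        indices.append(cov / var)
    return indices

# Linear test model Y = 2*X1 + X2: exact first-order indices are 4/5 and 1/5.
s = sobol_first_order(lambda x: 2.0 * x[0] + x[1], dim=2)
```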
Abstract:
This paper proposes a probabilistic-prediction-based approach for providing Quality of Service (QoS) to delay-sensitive traffic in the Internet of Things (IoT). A joint packet scheduling and dynamic bandwidth allocation scheme is proposed to provide service differentiation and preferential treatment to delay-sensitive traffic. The scheduler focuses on reducing the waiting time of high-priority delay-sensitive services in the queue while simultaneously keeping the waiting time of other services within tolerable limits. The scheme uses the difference between the probabilities of the average queue length of high-priority packets in the previous and current cycles to determine the probability of the average weight required in the current cycle. This offers optimized bandwidth allocation to all services by avoiding allocation of excess resources to high-priority services while still guaranteeing service for them. The performance of the algorithm is investigated using MPEG-4 traffic traces under different system loads. The results show improved performance with respect to the waiting time for scheduling high-priority packets while simultaneously keeping waiting time and packet loss within tolerable limits for other services. Crown Copyright (C) 2015 Published by Elsevier B.V.
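The cycle-to-cycle weight adaptation described above can be read, very loosely, as nudging the high-priority bandwidth share by the change in a queue-length probability between cycles, while clamping the share so other classes always retain some bandwidth. The sketch below is one illustrative interpretation only; the update rule, the clamping bounds and the probability values are all assumptions, not the paper's specification.

```python
def update_weight(w_prev, p_prev, p_curr, w_min=0.1, w_max=0.9):
    """Nudge the high-priority bandwidth weight by the change in the
    probability that the high-priority queue is long, between the previous
    and current cycles; clamp so other classes keep a guaranteed share."""
    w = w_prev + (p_curr - p_prev)
    return max(w_min, min(w_max, w))

# Three cycles of (previous, current) queue-length probabilities (illustrative).
w = 0.5
for p_prev, p_curr in [(0.2, 0.5), (0.5, 0.4), (0.4, 0.9)]:
    w = update_weight(w, p_prev, p_curr)
```

The clamp is what prevents a persistently congested high-priority queue from starving the lower-priority services, matching the abstract's goal of keeping their waiting times tolerable.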
Abstract:
A reliable and efficient a posteriori error estimator is derived for a class of discontinuous Galerkin (DG) methods for the Signorini problem. A common property shared by many DG methods leads to a unified error analysis with the help of a constraint preserving enriching map. The error estimator of DG methods is comparable with the error estimator of the conforming methods. Numerical experiments illustrate the performance of the error estimator. (C) 2015 Elsevier B.V. All rights reserved.
Abstract:
In this article, an abstract framework for the error analysis of discontinuous Galerkin methods for control-constrained optimal control problems is developed. The analysis establishes the best approximation result from an a priori analysis point of view and delivers a reliable and efficient a posteriori error estimator. The results are applicable to a variety of problems under just the minimal regularity implied by the well-posedness of the problem. Subsequently, applications of C-0 interior penalty methods to a boundary control problem, as well as to a distributed control problem governed by the biharmonic equation subject to simply supported boundary conditions, are discussed through the abstract analysis. Numerical experiments illustrate the theoretical findings.
Abstract:
Northeast India and its adjoining areas are characterized by very high seismic activity. According to the Indian seismic code, the region falls under seismic zone V, which represents the highest seismic-hazard level in the country. This region has experienced a number of great earthquakes, such as the Assam (1950) and Shillong (1897) earthquakes, that caused huge devastation in the entire northeast and adjacent areas through flooding, landslides, liquefaction, and damage to roads and buildings. In this study, an attempt has been made to find the probability of occurrence of a major earthquake (M_w > 6) in this region using an updated earthquake catalog compiled from different sources. After dividing the catalog into six seismic regions based on different tectonic features and seismogenic factors, the probability of occurrence was estimated using three models: the lognormal, Weibull, and gamma distributions. We calculated the logarithm of the likelihood function (ln L) for all six regions, and for the entire northeast, under all three stochastic models. A higher value of ln L indicates a better-fitting model, and a lower value a worse one. The results show that different models suit different seismic zones, but the majority follow the lognormal distribution, which is better for forecasting magnitude size. According to the results, the Weibull model shows the highest conditional probabilities among the three models for both small and large elapsed times T and time intervals t, whereas the lognormal model shows the lowest and the gamma model intermediate probabilities. Only for elapsed time T = 0 does the lognormal model show the highest conditional probabilities at small time intervals (t = 3-15 yr); the opposite is observed at larger time intervals (t = 15-25 yr), for which the Weibull model shows the highest probabilities.
However, based on this study, the Indo-Burma Range and the Eastern Himalaya show a >90% probability of occurrence of a major earthquake in the 5 yr period 2012-2017.
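The conditional probabilities referred to above follow the standard renewal form P(T <= T_e + t | T > T_e) = (F(T_e + t) - F(T_e)) / (1 - F(T_e)), where F is the CDF of the inter-event time. The sketch below evaluates it for Weibull and lognormal models (the gamma CDF lacks a closed form and is omitted); the recurrence parameters are hypothetical, not fitted to the catalog.

```python
import math

def weibull_cdf(t, shape, scale):
    """Weibull CDF: F(t) = 1 - exp(-(t/scale)^shape)."""
    return 1.0 - math.exp(-((t / scale) ** shape))

def lognormal_cdf(t, mu, sigma):
    """Lognormal CDF via the error function (t > 0)."""
    return 0.5 * (1.0 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2.0))))

def conditional_probability(cdf, elapsed, window):
    """P(next event within `window` years | `elapsed` years already quiet):
    (F(elapsed + window) - F(elapsed)) / (1 - F(elapsed))."""
    if elapsed == 0.0:
        return cdf(window)  # avoids evaluating lognormal_cdf at t = 0
    return (cdf(elapsed + window) - cdf(elapsed)) / (1.0 - cdf(elapsed))

# Hypothetical recurrence parameters for one seismic zone (illustrative only):
# mean recurrence around 40 yr, 30 yr already elapsed, 5 yr forecast window.
p_weib = conditional_probability(lambda t: weibull_cdf(t, 1.5, 40.0), 30.0, 5.0)
p_logn = conditional_probability(
    lambda t: lognormal_cdf(t, math.log(40.0), 0.5), 30.0, 5.0)
```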