121 results for Weighted


Relevance: 10.00%

Abstract:

The weighted-least-squares method using a sensitivity-analysis technique is proposed for the estimation of parameters in water-distribution systems. The parameters considered are the Hazen-Williams coefficients for the pipes. The objective function used is the sum of the weighted squares of the differences between the computed and the observed values of the variables. The weighted-least-squares method can elegantly handle multiple loading conditions with mixed types of measurements such as heads and consumptions, different sets and numbers of measurements for each loading condition, and modifications in the network configuration due to the inclusion or exclusion of pipes affected by valve operations in each loading condition. Uncertainty in the parameter estimates can also be obtained. The method is applied to the estimation of parameters in a metropolitan water-distribution system in India.
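The objective function described above can be sketched in a few lines. The names and numbers below are illustrative only; in the paper the "computed" values come from a hydraulic simulation of the network for trial Hazen-Williams coefficients.

```python
# Sketch of a weighted-least-squares objective: the sum of weighted
# squared differences between observed and computed values.
# All names and numbers are illustrative, not the paper's notation.

def wls_objective(observed, computed, weights):
    """Sum of weighted squared residuals (e.g. over nodal heads
    and consumptions across loading conditions)."""
    return sum(w * (o - c) ** 2
               for o, c, w in zip(observed, computed, weights))

# Two observed heads vs. heads computed for trial pipe coefficients,
# with the second measurement trusted more (larger weight).
E = wls_objective([50.0, 48.0], [49.0, 48.5], [1.0, 4.0])
```

Minimizing such an objective over the pipe coefficients, across all loading conditions at once, is what lets mixed measurement types share one estimation problem.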

Relevance: 10.00%

Abstract:

This paper addresses the problem of automated multiagent search in an unknown environment. Autonomous agents equipped with sensors carry out a search operation in a search space, where the uncertainty, or lack of information about the environment, is known a priori as an uncertainty density distribution function. The agents are deployed in the search space to maximize single-step search effectiveness. The centroidal Voronoi configuration, which achieves a locally optimal deployment, forms the basis for the proposed sequential deploy and search strategy. It is shown that with the proposed control law the agent trajectories converge in a globally asymptotic manner to the centroidal Voronoi configuration. Simulation experiments are provided to validate the strategy. Note to Practitioners: In this paper, searching an unknown region to gather information about it is modeled as a problem of using search as a means of reducing information uncertainty about the region. Moreover, multiple automated searchers or agents are used to carry out this operation optimally. This problem has many applications in search and surveillance operations using several autonomous UAVs or mobile robots. The concept of agents converging to the centroids of their Voronoi cells, weighted with the uncertainty density, is used to design a search strategy named sequential deploy and search. Finally, the performance of the strategy is validated using simulations.
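One "deploy" step of such a strategy can be sketched as computing, for each agent, the centroid of its Voronoi cell weighted by the uncertainty density. The discretized grid, the density values, and the simple nearest-agent assignment below are illustrative assumptions, not the paper's continuous formulation or control law.

```python
# For each grid point, find the nearest agent (its Voronoi cell on the
# grid), then return each agent's density-weighted cell centroid.
# Grid points and the uncertainty density phi are hypothetical inputs.

def voronoi_centroids(agents, points, phi):
    sums = [[0.0, 0.0, 0.0] for _ in agents]  # [mass, sum(w*x), sum(w*y)]
    for (x, y), w in zip(points, phi):
        i = min(range(len(agents)),
                key=lambda k: (x - agents[k][0]) ** 2 + (y - agents[k][1]) ** 2)
        sums[i][0] += w
        sums[i][1] += w * x
        sums[i][2] += w * y
    # Agents with an empty cell stay where they are.
    return [(mx / m, my / m) if m > 0 else a
            for a, (m, mx, my) in zip(agents, sums)]
```

In the sequential strategy, agents would move toward these centroids, search, update the density, and repeat.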

Relevance: 10.00%

Abstract:

Using intensity autocorrelation of multiply scattered light, we show that the increase in interparticle interaction in dense, binary colloidal fluid mixtures of particle diameters 0.115 µm and 0.089 µm results in freezing into a crystalline phase at a volume fraction φ of 0.1 and into a glassy state at φ = 0.2. The functional form of the field autocorrelation function g^(1)(t) for the binary fluid phase is fitted to exp[-γ(6 k_0^2 D_eff t)^(1/2)], where k_0 is the magnitude of the incident light wavevector and γ is a parameter inversely proportional to the photon transport mean free path l*. D_eff is the l*-weighted average of the individual diffusion coefficients of the pure species. The l* used in calculating D_eff was computed using Mie theory. In the solid (crystal or glass) phase, g^(1)(t) is fitted (only with moderate success) to exp[-γ(6 k_0^2 W(t))^(1/2)], where the mean-squared displacement W(t) is evaluated for a harmonically bound overdamped Brownian oscillator. It is found that the fitted parameter γ for both the binary and monodisperse suspensions decreases significantly with increasing interparticle interaction. This is justified by showing that the calculated values of l* in a monodisperse suspension using Mie theory increase very significantly when the interactions are incorporated into l* via the static structure factor.
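The fluid-phase fit form g^(1)(t) = exp[-γ(6 k_0^2 D_eff t)^(1/2)] is easy to evaluate numerically; in a fit, γ would be the free parameter adjusted to match the measured autocorrelation. The parameter values below are illustrative, not values from the paper.

```python
import math

# Diffusing-wave-spectroscopy fit form for the binary fluid phase:
#   g1(t) = exp(-gamma * sqrt(6 * k0^2 * Deff * t))
# gamma, k0 and d_eff are illustrative fit parameters.

def g1(t, gamma, k0, d_eff):
    return math.exp(-gamma * math.sqrt(6.0 * (k0 ** 2) * d_eff * t))
```

The solid-phase form is the same expression with D_eff·t replaced by the bounded mean-squared displacement W(t), so g^(1)(t) plateaus instead of decaying to zero.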

Relevance: 10.00%

Abstract:

The weighted-least-squares method based on the Gauss-Newton minimization technique is used for parameter estimation in water distribution networks. The parameters considered are: element resistances (single and/or group resistances, Hazen-Williams coefficients, pump specifications) and consumptions (for single or multiple loading conditions). The measurements considered are: nodal pressure heads, pipe flows, head loss in pipes, and consumptions/inflows. An important feature of the study is a detailed consideration of the influence of different choices of weights on parameter estimation, for error-free data, noisy data, and noisy data that includes bad data. The method is applied to three different networks, including a real-life problem.
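A minimal Gauss-Newton iteration for a one-parameter weighted least-squares fit is sketched below. The head-loss model, data, and weights are invented for illustration; in the paper the "model" is the full hydraulic simulation, and many parameters are estimated simultaneously.

```python
# One-parameter Gauss-Newton for minimizing sum_i w_i*(y_i - f(x_i, theta))^2,
# using a forward-difference Jacobian. Purely illustrative.

def gauss_newton(model, x, y, w, theta, iters=20):
    h = 1e-6  # finite-difference step for the Jacobian
    for _ in range(iters):
        r = [yi - model(xi, theta) for xi, yi in zip(x, y)]
        J = [(model(xi, theta + h) - model(xi, theta)) / h for xi in x]
        num = sum(wi * Ji * ri for wi, Ji, ri in zip(w, J, r))
        den = sum(wi * Ji * Ji for wi, Ji in zip(w, J))
        theta += num / den  # weighted normal-equation update
    return theta

# Recover a resistance-like coefficient from head-loss data h = R * q^1.852
# (the Hazen-Williams flow exponent); R_true and the flows are made up.
data_q = [1.0, 2.0, 3.0]
R_true = 4.0
data_h = [R_true * q ** 1.852 for q in data_q]
R_est = gauss_newton(lambda q, R: R * q ** 1.852,
                     data_q, data_h, [1.0, 1.0, 1.0], theta=1.0)
```

Changing the weight vector is what the study varies systematically to see how the estimates respond to noisy and bad data.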

Relevance: 10.00%

Abstract:

This paper considers the design and analysis of a filter at the receiver of a source coding system to mitigate the excess distortion caused by channel errors. The index output by the source encoder is sent over a fading discrete binary symmetric channel, and the possibly incorrect received index is mapped to the corresponding codeword by a Vector Quantization (VQ) decoder at the receiver. The output of the VQ decoder is then processed by a receive filter to obtain an estimate of the source instantiation. The distortion performance is analyzed for the weighted mean square error (WMSE), and the optimum receive filter that minimizes the expected distortion is derived for two different cases of fading. It is shown that the performance of the system with the receive filter is strictly better than that of a conventional VQ, and the difference becomes more significant as the number of bits transmitted increases. Theoretical expressions for upper and lower bounds on the WMSE performance of the system with the receive filter and a Rayleigh flat-fading channel are derived. The design of a receive filter in the presence of channel mismatch is also studied, and it is shown that a minimax solution is the one obtained by designing the receive filter for the worst possible channel. Simulation results are presented to validate the theoretical expressions and illustrate the benefits of receive filtering.
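The intuition behind post-VQ filtering can be sketched for a scalar codebook over a non-fading binary symmetric channel: instead of outputting the codeword of the received index, output the conditional mean of the transmitted codeword given that index. The codebook, crossover probability, and equiprobable-index assumption below are illustrative; the paper derives the optimal linear filter for the WMSE criterion under fading.

```python
# Conditional-mean decoding after a VQ index crosses a binary
# symmetric channel (BSC). Codebook and p are invented.

def bsc_index_prob(i, j, bits, p):
    """P(receive index j | send index i) over a BSC with crossover p."""
    d = bin(i ^ j).count("1")           # Hamming distance of the indices
    return (p ** d) * ((1 - p) ** (bits - d))

def filtered_decode(j, codebook, bits, p):
    """Estimate of the source given received index j, assuming
    equiprobable transmitted indices (so P(i|j) is proportional to P(j|i))."""
    probs = [bsc_index_prob(i, j, bits, p) for i in range(len(codebook))]
    s = sum(probs)
    return sum(pi * c for pi, c in zip(probs, codebook)) / s
```

With p = 0 this reduces to conventional VQ decoding; as p grows, the estimate shrinks toward a weighted average of the codewords, which is what reduces the expected distortion.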

Relevance: 10.00%

Abstract:

A chlorine residual must be maintained at all points in a distribution system that uses chlorine as a disinfectant. The propagation and level of chlorine in a distribution system are affected by both bulk and pipe-wall reactions. It is well known that field determination of the wall reaction parameter is difficult. The source strength of chlorine required to maintain a specified chlorine residual at a target node is also an important parameter. The inverse model presented in this paper determines these water quality parameters, which are associated with different reaction kinetics, either in single pipes or in groups of pipes. The weighted-least-squares method based on the Gauss-Newton minimization technique is used to estimate these parameters. The validation and application of the inverse model are illustrated with an example pipe distribution system under steady state. A generalized procedure to handle noisy and bad (abnormal) data is suggested, which can be used to estimate these parameters more accurately. The developed inverse model is useful for water supply agencies to calibrate their water distribution systems and to improve their operational strategies for maintaining water quality.
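The forward model underlying such an inverse problem is, in its simplest form, first-order chlorine decay along a pipe with combined bulk and wall rate constants. The single-pipe model and all numbers below are a hedged illustration; the paper works with full network kinetics and estimates the rate constants (and source strengths) by weighted least squares.

```python
import math

# First-order chlorine decay in a single pipe under steady state:
#   C_out = C_in * exp(-(k_bulk + k_wall) * travel_time)
# k_wall here stands for an overall wall-reaction rate already
# folded into first-order form; all values are hypothetical.

def chlorine_at_outlet(c_in, k_bulk, k_wall, length, velocity):
    travel_time = length / velocity   # e.g. hours, if units are consistent
    return c_in * math.exp(-(k_bulk + k_wall) * travel_time)
```

The inverse model adjusts k_bulk and k_wall (per pipe or per pipe group) until outlet concentrations computed this way match the measured residuals in the weighted-least-squares sense.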

Relevance: 10.00%

Abstract:

We have developed two reduced-complexity bit-allocation algorithms for MP3/AAC-based audio encoding, which can be useful at low bit-rates. One algorithm derives the optimum bit allocation using constrained optimization of the weighted noise-to-mask ratio, and the second uses decoupled iterations for distortion control and rate control, with convergence criteria. MUSHRA-based evaluation indicated that the new algorithm is comparable to AAC while requiring only about 1/10th the complexity.
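As a rough illustration of the quantity being optimized, a weighted noise-to-mask ratio can be averaged over the coder's frequency bands as below. The band noise energies, masking thresholds, and weights are invented numbers, and the actual MP3/AAC computation (per scale-factor band, in the quantization loop) is considerably more involved.

```python
# Average weighted noise-to-mask ratio over bands: w_b * (N_b / M_b),
# where N_b is quantization noise energy and M_b the psychoacoustic
# masking threshold in band b. All inputs are illustrative.

def avg_weighted_nmr(noise_energy, mask_threshold, weights):
    terms = [w * n / m for n, m, w
             in zip(noise_energy, mask_threshold, weights)]
    return sum(terms) / len(terms)
```

A bit-allocation loop would spend bits in the bands contributing most to this average until the rate budget is exhausted.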

Relevance: 10.00%

Abstract:

We present an improved language-modeling technique for a Lempel-Ziv-Welch (LZW) based language identification (LID) scheme. The previous approach to LID prepares the language pattern table using the LZW algorithm. Because of the sequential nature of the LZW algorithm, several language-specific patterns were missing from the pattern table. To overcome this, we build a universal pattern table, which contains all patterns of different lengths. For each language, the corresponding language-specific pattern table is constructed by retaining those patterns of the universal table whose frequency of appearance in the training data is above a threshold. This approach reduces the classification score (compression ratio [LZW-CR] or the weighted discriminant score [LZW-WDS]) for non-native languages and increases LID performance considerably.
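The universal-table idea can be sketched as exhaustively counting all substrings up to some maximum length and keeping only those frequent enough in a language's training data. The maximum length and threshold below are illustrative choices, not the paper's settings.

```python
from collections import Counter

# Build a language-specific pattern table: enumerate every substring
# of length 1..max_len (the "universal table") and retain those whose
# training-data frequency exceeds the threshold. This avoids the
# order-dependence of a sequentially built LZW dictionary.

def language_pattern_table(tokens, max_len, threshold):
    counts = Counter(tokens[i:i + n]
                     for n in range(1, max_len + 1)
                     for i in range(len(tokens) - n + 1))
    return {p for p, c in counts.items() if c > threshold}
```

For the token string "abab", the bigram "ba" occurs only once and is dropped at threshold 1, while "a", "b", and "ab" survive.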

Relevance: 10.00%

Abstract:

In each stage of product development, decisions must be taken by evaluating multiple product alternatives against multiple criteria. Classical evaluation methods, such as the weighted objectives method, assume certainty about the information available during product development. However, designers often must evaluate under uncertainty. Often the likely performance, cost, or environmental impact of a product proposal can be estimated only with a certain confidence, which may vary from one proposal to another. In such situations, the classical approaches to evaluation can give misleading results. There is a need for a method that aids decision making by supporting quantitative comparison of alternatives, to identify the most promising alternative under uncertain information about the alternatives. A method called the confidence weighted objectives method is developed to compare the whole life cycles of product proposals using multiple evaluation criteria under various levels of uncertainty with non-crisp values. It estimates the overall worth of a proposal and the confidence in that estimate, enabling decision making to be deferred when a decision cannot be made using the information currently available.
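A loose sketch of the idea: each alternative receives a weighted-sum worth plus an aggregate confidence in that worth, and the decision is deferred when the winner's confidence is too low. The aggregation rules and threshold below are invented stand-ins, not the paper's actual method.

```python
# Confidence-weighted evaluation sketch. scores/confidences are
# per-criterion values in arbitrary units; weights are criterion
# importances. All of this is illustrative.

def evaluate(scores, confidences, weights):
    worth = sum(w * s for w, s in zip(weights, scores))
    conf = sum(w * c for w, c in zip(weights, confidences)) / sum(weights)
    return worth, conf

def decide(alternatives, min_conf=0.7):
    """alternatives: list of (name, (worth, conf)).
    Return the best alternative's name, or None to defer the decision."""
    best = max(alternatives, key=lambda a: a[1][0])
    return best[0] if best[1][1] >= min_conf else None
```

Deferring when confidence is low is precisely what distinguishes this from the classical weighted objectives method, which always returns a winner.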

Relevance: 10.00%

Abstract:

Given an undirected unweighted graph G = (V, E) and an integer k ≥ 1, we consider the problem of computing the edge connectivities of all those (s, t) vertex pairs whose edge connectivity is at most k. We present an algorithm with expected running time Õ(m + nk^3) for this problem, where |V| = n and |E| = m. Our output is a weighted tree T whose nodes are the sets V_1, V_2, ..., V_l of a partition of V, with the property that the edge connectivity in G between any two vertices s ∈ V_i and t ∈ V_j, for i ≠ j, is equal to the weight of the lightest edge on the path between V_i and V_j in T. Also, two vertices s and t belong to the same V_i if and only if they have an edge connectivity greater than k. Currently, the best algorithm for this problem needs to compute all-pairs min-cuts in an O(nk)-edge graph; this takes Õ(m + n^(5/2) k · min{k^(1/2), n^(1/6)}) time. Our algorithm is much faster for small values of k; in fact, it is faster whenever k is o(n^(5/6)). Our algorithm yields the useful corollary that in Õ(m + nc^3) time, where c is the size of the global min-cut, we can compute the edge connectivities of all those pairs of vertices whose edge connectivity is at most αc for some constant α. We also present an Õ(m + n) Monte Carlo algorithm for the approximate version of this problem; this algorithm is applicable to weighted graphs as well. Our algorithm, with some modifications, also solves another problem, the minimum T-cut problem: given T ⊆ V of even cardinality, we present an Õ(m + nk^3) algorithm to compute a minimum cut that splits T into two odd-cardinality components, where k is the size of this cut.
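Once the weighted tree T is built, queries reduce to a lightest-edge-on-path computation, which can be sketched directly. The tree, partition, and naive DFS below are a made-up example of the query step only, not the construction algorithm.

```python
# Query the edge connectivity of (s, t) from the output tree T:
# it equals the minimum edge weight on the T-path between the
# partition nodes containing s and t; None means the connectivity
# exceeds the threshold k used to build T (same partition node).

def query_connectivity(tree, part_of, s, t):
    """tree: {part: [(neighbor_part, weight), ...]} (undirected)."""
    a, b = part_of[s], part_of[t]
    if a == b:
        return None
    def dfs(u, prev, mn):
        if u == b:
            return mn
        for v, w in tree[u]:
            if v != prev:
                r = dfs(v, u, min(mn, w))
                if r is not None:
                    return r
        return None
    return dfs(a, None, float("inf"))
```

This mirrors how a Gomory-Hu tree is queried, except T's nodes are sets of vertices rather than single vertices, which is what keeps it small.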

Relevance: 10.00%

Abstract:

We present a new approach to spoken language modeling for language identification (LID) using the Lempel-Ziv-Welch (LZW) algorithm. The LZW technique is applicable to any kind of tokenization of the speech signal. Because of the efficiency of the LZW algorithm in obtaining variable-length symbol strings from the training data, the LZW codebook captures the essentials of a language effectively. We develop two new deterministic measures for LID based on the LZW algorithm, namely: (i) the compression ratio score (LZW-CR) and (ii) the weighted discriminant score (LZW-WDS). To assess these measures, we consider error-free tokenization of speech as well as artificially induced noise in the tokenization. It is shown that for a 6-language LID task on the OGI-TS database with clean tokenization, the new model (LZW-WDS) performs slightly better than the conventional bigram model. For noisy tokenization, which is the more realistic case, LZW-WDS significantly outperforms the bigram technique.
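A compression-ratio style score can be sketched as: build an LZW dictionary from a language's training tokens, then count how few dictionary phrases are needed to greedily cover a test string; fewer phrases (better compression) suggests the test data matches that language. The greedy longest-match scoring below is a simplified stand-in for the paper's LZW-CR measure.

```python
# Build an LZW phrase dictionary from training text, then score a
# test string by its greedy phrase count under that dictionary.
# Simplified illustration of compression-ratio scoring for LID.

def lzw_dictionary(text):
    d = {c for c in text}       # start from the single symbols
    w = ""
    for c in text:
        if w + c in d:
            w += c              # extend the current phrase
        else:
            d.add(w + c)        # learn a new phrase
            w = c
    return d

def phrase_count(text, d):
    """Number of phrases in a greedy longest-match parse of text."""
    count, i = 0, 0
    while i < len(text):
        j = i + 1
        while j <= len(text) and text[i:j] in d:
            j += 1
        count += 1
        i = max(j - 1, i + 1)   # consume the longest match (at least 1 symbol)
    return count
```

For LID, the test utterance's token string would be scored against every language's dictionary and assigned to the language giving the lowest phrase count per symbol.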

Relevance: 10.00%

Abstract:

Many downscaling techniques have been developed in the past few years for projection of station-scale hydrological variables from large-scale atmospheric variables simulated by general circulation models (GCMs), to assess the hydrological impacts of climate change. This article compares the performances of three downscaling methods, viz. conditional random field (CRF), K-nearest neighbour (KNN) and support vector machine (SVM) methods, in downscaling precipitation in the Punjab region of India, belonging to the monsoon regime. The CRF model is a recently developed method for downscaling hydrological variables in a probabilistic framework, while the SVM model is a popular machine learning tool useful for its ability to generalize and capture nonlinear relationships between predictors and predictand. The KNN model is an analogue-type method that queries days similar to a given feature vector from the training data and classifies future days by random sampling from a weighted set of the K closest training examples. The models are applied for downscaling monsoon (June to September) daily precipitation at six locations in Punjab. Model performances with respect to reproduction of various statistics, such as dry and wet spell length distributions, daily rainfall distribution, and intersite correlations, are examined. It is found that the CRF and KNN models perform slightly better than the SVM model in reproducing most daily rainfall statistics. These models are then used to project future precipitation at the six locations. Output from the Canadian Global Climate Model (CGCM3) for three scenarios, viz. A1B, A2, and B1, is used for projection of future precipitation. The projections show a change in probability density functions of daily rainfall amount and changes in the wet and dry spell distributions of daily precipitation. Copyright (C) 2011 John Wiley & Sons, Ltd.
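The KNN step described above can be sketched as: find the K training days whose predictor vectors are closest to the current day's, then sample one of their observed rainfall values with weights decreasing in rank (a common 1/rank kernel). The kernel choice and all data below are assumptions for illustration, not the article's exact configuration.

```python
import math
import random

# One KNN-downscaling draw: sample an observed precipitation value
# from the K nearest training days, weighted by 1/rank.
# feature / train_X are predictor vectors; train_y are observed
# rainfall amounts. All illustrative.

def knn_sample(feature, train_X, train_y, K, rng=random):
    order = sorted(range(len(train_X)),
                   key=lambda i: math.dist(feature, train_X[i]))
    nearest = order[:K]
    weights = [1.0 / (rank + 1) for rank in range(K)]
    pick = rng.choices(nearest, weights=weights, k=1)[0]
    return train_y[pick]
```

Repeating this draw day by day generates a synthetic precipitation series whose statistics (spell lengths, intersite correlation) can then be compared with observations.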

Relevance: 10.00%

Abstract:

Rate control regulates the instantaneous video bit-rate to maximize a picture quality metric while satisfying channel constraints. Typically, a quality metric such as peak signal-to-noise ratio (PSNR) or weighted signal-to-noise ratio (WSNR) is chosen out of convenience. However, this metric is not always truly representative of perceptual video quality. Attempts to use perceptual metrics in rate control have been limited by the accuracy of the video quality metrics chosen. Recently, new and improved metrics of subjective quality, such as the Video Quality Experts Group's (VQEG) NTIA General Video Quality Model (VQM), have been proven to have strong correlation with subjective quality. Here, we apply the key principles of the NTIA-VQM model to rate control in order to maximize perceptual video quality. Our experiments demonstrate that applying NTIA-VQM-motivated metrics to standard TMN8 rate control in an H.263 encoder results in perceivable quality improvements over a baseline TMN8/MSE-based implementation.

Relevance: 10.00%

Abstract:

This paper obtains a new, accurate model for sensitivity in power systems and uses it in conjunction with linear programming to solve load-shedding problems with a minimum loss of load. For cases where the error in the sensitivity model increases, other linear-programming and quadratic-programming models have been developed, taking the currents at load buses as variables rather than the load powers. A weighted error criterion is used to take the priority schedule into account; it can be either a linear or a quadratic function of the errors, and depending upon the function, the appropriate programming technique is to be employed.
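A toy version of the weighted linear criterion can be sketched without a solver: with objective min Σ_i w_i·shed_i subject to Σ_i shed_i ≥ deficit and 0 ≤ shed_i ≤ load_i, the optimum is simply to shed at the buses with the smallest priority weights first. The loads, weights, and deficit below are invented, and the paper's actual formulation couples shedding to a network sensitivity model rather than this bare budget constraint.

```python
# Greedy solution of the toy weighted load-shedding LP:
#   min sum_i w_i * shed_i
#   s.t. sum_i shed_i >= deficit, 0 <= shed_i <= load_i
# Low-weight (low-priority) buses are shed first. Illustrative only.

def shed_loads(loads, weights, deficit):
    shed = [0.0] * len(loads)
    for i in sorted(range(len(loads)), key=lambda i: weights[i]):
        if deficit <= 0:
            break
        shed[i] = min(loads[i], deficit)
        deficit -= shed[i]
    return shed
```

A quadratic criterion would instead spread the shedding across buses in proportion to 1/w_i, which is why the paper notes that the choice of error function dictates the programming technique.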

Relevance: 10.00%

Abstract:

The design of machine foundations is based on two principal criteria, viz., the vibration amplitude should be within permissible limits, and the natural frequency of the machine-foundation-soil system should be away from the operating frequency (i.e., avoidance of the resonance condition). In this paper, the nondimensional amplitude factor M_m (or M_rm) and the nondimensional frequency factor a_0m at resonance are related using elastic half-space theory, and this relation is used as a new approach for a simplified design procedure for machine foundations for all the modes of vibration, viz. vertical, horizontal, rocking, and torsional, for both the rigid-base pressure distribution and the weighted-average displacement condition. The analysis shows that the value of Poisson's ratio need not be known for a rotating-mass system for any of the modes of vibration.