94 results for Expectation Maximization
Abstract:
We study the performance of greedy scheduling in multihop wireless networks where the objective is aggregate utility maximization. Following standard approaches, we consider the dual of the original optimization problem. Optimal scheduling requires selecting independent sets of maximum aggregate price, but this problem is known to be NP-hard. We propose and evaluate a simple greedy heuristic. Analytical bounds on performance are provided and simulations indicate that the greedy heuristic performs well in practice.
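The greedy heuristic this abstract describes, selecting an independent set of high aggregate price, can be sketched as follows. The conflict-graph representation, link prices, and tie-breaking are illustrative assumptions, not the paper's exact network model.

```python
# Hedged sketch of greedy scheduling: repeatedly pick the highest-price
# link that does not conflict with any link already scheduled.
# The conflict graph and prices below are a made-up toy instance.

def greedy_schedule(prices, conflicts):
    """Pick links in decreasing price order, skipping any link that
    conflicts with one already chosen."""
    chosen = []
    for link in sorted(prices, key=prices.get, reverse=True):
        if all(link not in conflicts.get(c, set()) for c in chosen):
            chosen.append(link)
    return chosen

# Toy 4-link chain: adjacent links interfere with each other.
prices = {0: 5.0, 1: 4.0, 2: 3.0, 3: 1.0}
conflicts = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(greedy_schedule(prices, conflicts))  # [0, 2] -- link 1 conflicts with 0
```

The result is an independent set in the conflict graph, though not necessarily the one of maximum aggregate price, which is exactly the gap the paper's analytical bounds address.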
Abstract:
The Fuzzy Waste Load Allocation Model (FWLAM), developed in an earlier study, derives the optimal fractional levels for base flow conditions, considering the goals of the Pollution Control Agency (PCA) and the dischargers. The Modified Fuzzy Waste Load Allocation Model (MFWLAM), developed subsequently, is a stochastic model that considers the moments (mean, variance and skewness) of water quality indicators, incorporating uncertainty due to randomness of input variables along with uncertainty due to imprecision. The risk of low water quality is reduced significantly by this modified model, but the inclusion of new constraints leads to a low value of the acceptability level, A, interpreted as the maximized minimum satisfaction in the system. To improve this value, a new model combining FWLAM and MFWLAM is presented, allowing some violations of the MFWLAM constraints. This combined model is a multiobjective optimization model with two objectives: maximization of the acceptability level and minimization of constraint violations. Fuzzy multiobjective programming, goal programming and fuzzy goal programming are used to find the solutions, with Probabilistic Global Search Lausanne (PGSL) as the nonlinear optimization tool. The methodology is applied to a case study of the Tunga-Bhadra river system in south India. The model yields a compromise solution with a higher acceptability level than MFWLAM and a satisfactory value of risk; the goal of risk minimization is thus achieved with a comparatively better acceptability level.
Abstract:
In this paper we propose and implement a joint Medium Access Control (MAC)-cum-routing scheme for environmental data-gathering sensor networks. The design principle trades maximization of node battery lifetime against a network that can tolerate: (i) a known percentage of combined packet losses due to packet collisions, network synchronization mismatch and channel impairments, and (ii) significant end-to-end delay on the order of a few seconds. We achieve this with a loosely synchronized network of sensor nodes that implements a slotted-Aloha MAC state machine together with route information. The scheme gives encouraging results in terms of energy savings compared to other popular implementations. The overall packet loss is about 12%, and the battery lifetime increase over B-MAC varies from a minimum of 30% to about 90% depending on the duty cycle.
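The dependence of battery lifetime on duty cycle can be illustrated with a back-of-envelope average-current model. All figures here (battery capacity, active and sleep currents, duty cycles) are hypothetical, not measurements from the paper.

```python
# Illustrative only: hypothetical current draws and battery capacity.
# Lifetime = capacity / average current, with the average current
# weighted by the fraction of time the radio is active.

def lifetime_hours(capacity_mah, duty, i_active_ma, i_sleep_ma):
    avg_ma = duty * i_active_ma + (1 - duty) * i_sleep_ma
    return capacity_mah / avg_ma

base = lifetime_hours(2000, 0.10, 20.0, 0.05)  # a denser duty cycle
low  = lifetime_hours(2000, 0.01, 20.0, 0.05)  # a sparser duty cycle
print(round(low / base, 2))  # sparser duty cycle lasts roughly 8x longer here
```

This is why lifetime gains such as the 30% to 90% range quoted above vary so strongly with the duty cycle: the sleep current floor limits the benefit as the duty cycle shrinks.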
Abstract:
We address a new problem of improving automatic speech recognition performance given multiple utterances of patterns from the same class. We formulate the problem of jointly decoding K multiple patterns given a single Hidden Markov Model. It is shown that such a solution is possible by aligning the K patterns using the proposed Multi Pattern Dynamic Time Warping algorithm, followed by the Constrained Multi Pattern Viterbi Algorithm. The new formulation is tested in the context of speaker-independent isolated word recognition for both clean and noisy patterns. When 10 percent of the speech is affected by burst noise at -5 dB Signal to Noise Ratio (local), joint decoding using only two noisy patterns reduces the noisy speech recognition error rate by about 51 percent compared to single-pattern decoding using the Viterbi Algorithm. In contrast, simple maximization of individual pattern likelihoods provides only about a 7 percent reduction in error rate.
Abstract:
Sequence design problems are considered in this paper. The problem of sum power minimization in a spread spectrum system can be reduced to the problem of sum capacity maximization, and vice versa; a solution to one of the problems yields a solution to the other. Subsequently, conceptually simple sequence design algorithms known to hold for the white-noise case are extended to the colored-noise case. The algorithms yield an upper bound of 2N - L on the number of sequences, where N is the processing gain and L the number of non-interfering subsets of users. If some users (at most N - 1) are allowed to signal along a limited number of multiple dimensions, then N orthogonal sequences suffice.
Abstract:
The crystal structure of the peptide Boc-Ala-Leu-Ac(7)c-Ala-Leu-Ac(7)c-OMe (Ac(7)c: 1-aminocycloheptane-1-carboxylic acid) is described. The presence of two Ac(7)c residues was expected to stabilize a 3(10)-helical fold. Contrary to expectation, the structural analysis revealed an unfolded amino terminus, with Ala(1) adopting an extended beta-conformation (phi = -93 degrees, psi = 112 degrees). Residues 2-5 form a 3(10)-helix stabilized by three successive intramolecular hydrogen bonds. Notably, two NH groups, those of Ala(1) and Ac(7)c(3), do not form any hydrogen bonds in the crystal. Peptide assembly appears to be dominated by packing of the cycloheptane rings, which stack against one another within the molecule and also throughout the crystal in columns.
Abstract:
A theory and generalized synthesis procedure are advocated for the design of weir notches and orifice-notches having a base of any given shape, to a depth a, such that the discharge is proportional to any single monotonically increasing function of the depth of flow measured above a certain datum. The problem is reduced to finding an exact solution of a Volterra integral equation of Abel type. The maximization of the depth of the datum below the crest of the notch is investigated. Proof is given that for a weir notch made of one continuous curve, and for a flow proportional to the mth power of the head, it is impossible to bring the datum lower than (2m − 1)a below the crest of the notch. A new concept of an orifice-notch, having a discontinuity in the curve and a division of the flow into two distinct portions, is presented. The division of flow is shown to have a beneficial effect in lowering the datum below (2m − 1)a from the crest while still maintaining the proportionality of the flow. Experiments with one such orifice-notch show a constant coefficient of discharge of 0.625. The importance of this analysis in the design of grit chambers is emphasized.
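The proportionality the abstract refers to can be checked numerically under the ideal sharp-crested weir relation Q(h) = integral from 0 to h of w(y) * sqrt(2g(h - y)) dy: a classical consequence of the Abel-type equation is that a notch whose width varies as y**(m - 1.5) gives Q proportional to h**m. The choice m = 2 and the quadrature scheme below are illustrative, not taken from the paper.

```python
# Numerical check (illustrative): a notch of width w(y) = y**(m - 1.5)
# measured from the datum gives discharge proportional to h**m, so
# doubling the head multiplies the discharge by 2**m.
import math

def discharge(h, m, n=200000, g=9.81):
    """Midpoint-rule quadrature of the ideal weir discharge integral."""
    dy = h / n
    total = 0.0
    for i in range(n):
        y = (i + 0.5) * dy
        total += y ** (m - 1.5) * math.sqrt(2 * g * (h - y)) * dy
    return total

m = 2.0
ratio = discharge(2.0, m) / discharge(1.0, m)
print(round(ratio, 2))  # close to 2**m = 4.0
```

The analytic value of the ratio is exactly 2**m; the quadrature error only appears in distant decimal places.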
Abstract:
The violation of Svetlichny's inequality (SI) [Phys. Rev. D 35, 3066 (1987)] is sufficient but not necessary for genuine tripartite nonlocal correlations. Here we quantify the relationship between tripartite entanglement and the maximum expectation value of the Svetlichny operator (which is bounded from above by the inequality) for the two inequivalent subclasses of pure three-qubit states: the Greenberger-Horne-Zeilinger (GHZ) class and the W class. We show that the maximum for the GHZ-class states reduces to Mermin's inequality [Phys. Rev. Lett. 65, 1838 (1990)] modulo a constant factor; although it is a function of the three-tangle and the residual concurrence, a large number of states do not violate the inequality. We further show that, by design, the SI is more suitable as a measure of genuine tripartite nonlocality between the three qubits in the W-class states, where the maximum is a certain function of the bipartite entanglement (the concurrence) of the three reduced states; only when their sum attains a certain threshold value do they violate the inequality.
Abstract:
We derive the thermal correlators for twisted quantum fields on noncommutative spacetime. We show that the thermal expectation value of the number operator is the same as in commutative spacetime, but that higher correlators are sensitive to the noncommutativity parameters phi(mu nu).
Abstract:
Motivated by certain situations in manufacturing systems and communication networks, we look into the problem of maximizing the profit in a queueing system with a linear reward and cost structure and a choice of selecting the streams of Poisson arrivals according to an independent Markov chain. We view the system as an MMPP/GI/1 queue and seek to maximize the profit by optimally choosing the stationary probabilities of the modulating Markov chain. We consider two formulations of the optimization problem. The first (which we call the PUT problem) seeks to maximize the profit per unit time, whereas the second considers the maximization of the profit per accepted customer (the PAC problem). In each formulation we explore three separate problems: in the first, the constraints come from bounding the utilization of an infinite-capacity server; in the second, the constraints arise from bounding the mean queue length of the same queue; and in the third, the finite capacity of the buffer is reflected as a set of constraints. In the problems bounding the utilization factor of the queue, the solutions are given by essentially linear programs, while the problems with mean queue length constraints are linear programs if the service times are exponentially distributed. The problems modeling the finite-capacity queue are non-convex programs for which global maxima can be found. There is a rich relationship between the solutions of the PUT and PAC problems. In particular, the PUT solutions always make the server work at a utilization factor that is no less than that of the PAC solutions.
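The PUT-versus-PAC contrast can be illustrated on a much simpler model than the paper's MMPP/GI/1 queue: a plain M/M/1 queue where we grid-search the arrival rate under a utilization cap. The reward, holding cost, and cap values are hypothetical.

```python
# Toy M/M/1 illustration (not the paper's model): reward r per customer,
# holding cost h per unit time per customer in system, arrival rate
# chosen on a grid subject to the utilization cap rho_max.

mu, r, h, rho_max = 1.0, 5.0, 1.0, 0.9

def profit_per_unit_time(lam):
    mean_in_system = lam / (mu - lam)   # M/M/1 mean number in system
    return lam * r - h * mean_in_system

grid = [0.01 * k for k in range(1, int(rho_max * mu * 100) + 1)]
put = max(grid, key=profit_per_unit_time)                       # profit / time
pac = max(grid, key=lambda lam: profit_per_unit_time(lam) / lam)  # profit / customer
print(put >= pac)  # PUT serves at a utilization no less than PAC's
```

Here profit per accepted customer, r - h/(mu - lam), is decreasing in the arrival rate, so the PAC optimum sits at the lowest admissible rate while the PUT optimum is interior, matching the ordering stated in the abstract.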
Abstract:
Precipitation in small droplets involving emulsions, microemulsions or vesicles is important for producing multicomponent ceramics and nanoparticles. Because of the random nature of nucleation and the small number of particles in a droplet, the use of a deterministic population balance equation for predicting the number density of particles may lead to erroneous results, even for evaluating the mean behavior of such systems. A comparison between the predictions made through stochastic simulation and through a deterministic population balance involving small droplets has been made for two simple systems, one involving crystallization and the other single-component precipitation. The two approaches yield quite different results under a variety of conditions. Contrary to expectation, the smallness of the population alone does not cause these deviations. Thus, if the fluctuation in supersaturation is negligible, the population balance and simulation predictions concur. However, for large fluctuations in supersaturation, the predictions differ significantly, indicating the need to take the stochastic nature of the phenomenon into account. This paper describes the stochastic treatment of populations, which involves a sequence of so-called product density equations and forms an appropriate framework for handling small systems.
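The point that smallness alone does not cause the deviation can be illustrated with a minimal stochastic nucleation model: when the nucleation rate is constant (no supersaturation fluctuation), the mean of many stochastic droplet realizations agrees with the deterministic population-balance prediction. The rate, horizon, and trial count are illustrative.

```python
# Minimal sketch: constant-rate nucleation in a droplet. With no
# fluctuation in supersaturation, the stochastic mean particle count
# matches the deterministic prediction n(t) = rate * t.
import random

def simulate_mean(rate, t, trials, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        # Count nucleation events in (0, t] via exponential waiting times.
        elapsed, count = 0.0, 0
        while True:
            elapsed += rng.expovariate(rate)
            if elapsed > t:
                break
            count += 1
        total += count
    return total / trials

rate, t = 2.0, 5.0
mean = simulate_mean(rate, t, trials=20000)
print(abs(mean - rate * t) < 0.3)  # stochastic mean close to deterministic 10.0
```

The paper's interesting regime is the opposite one, where a fluctuating supersaturation couples to the nucleation rate and this agreement breaks down.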
Abstract:
This paper analyses the behaviour of a general class of learning automata algorithms for feedforward connectionist systems in an associative reinforcement learning environment. The type of connectionist system considered is also fairly general. The associative reinforcement learning task is first posed as a constrained maximization problem. The algorithm is approximated by an ordinary differential equation using weak convergence techniques. The equilibrium points of the ordinary differential equation are then compared with the solutions to the constrained maximization problem to show that the algorithm behaves as desired.
Abstract:
We study the problem of matching applicants to jobs under one-sided preferences; that is, each applicant ranks a non-empty subset of jobs under an order of preference, possibly involving ties. A matching M is said to be more popular than a matching T if the applicants that prefer M to T outnumber those that prefer T to M. A matching is said to be popular if there is no matching more popular than it. Equivalently, a matching M is popular if phi(M, T) >= phi(T, M) for all matchings T, where phi(X, Y) is the number of applicants that prefer X to Y. Previously studied solution concepts based on the popularity criterion are either not guaranteed to exist for every instance (e.g., popular matchings) or are NP-hard to compute (e.g., least unpopular matchings). This paper addresses this issue by considering mixed matchings. A mixed matching is simply a probability distribution over matchings in the input graph. The function phi that compares two matchings generalizes in a natural manner to mixed matchings by taking expectation. A mixed matching P is popular if phi(P, Q) >= phi(Q, P) for all mixed matchings Q. We show that popular mixed matchings always exist and we design polynomial time algorithms for finding them. Then we study their efficiency and give tight bounds on the price of anarchy and price of stability of the popular matching problem.
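The comparison function phi(X, Y) above has a direct implementation: count the applicants that strictly prefer their partner in X to their partner in Y, with being unmatched worse than any ranked job. The preference lists and the two matchings below are a made-up toy instance.

```python
# Sketch of the popularity comparison phi(X, Y) from the abstract.

def phi(x, y, prefs):
    """prefs[a] maps job -> rank (lower is better); an unmatched
    applicant is treated as worse off than any ranked job."""
    count = 0
    for a, ranking in prefs.items():
        rank_x = ranking.get(x.get(a), float("inf"))
        rank_y = ranking.get(y.get(a), float("inf"))
        if rank_x < rank_y:
            count += 1
    return count

prefs = {"a1": {"j1": 0, "j2": 1}, "a2": {"j1": 0}, "a3": {"j2": 0, "j1": 1}}
M = {"a1": "j1", "a3": "j2"}   # a2 left unmatched
T = {"a2": "j1", "a1": "j2"}   # a3 left unmatched
print(phi(M, T, prefs), phi(T, M, prefs))  # 2 1 -> M is more popular than T
```

Extending phi to mixed matchings, as the paper does, just replaces these counts by their expectation under the two probability distributions over matchings.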
Abstract:
We have performed density functional calculations on tetragonal SnO and PbO (litharge) in the space group P4/nmm with the specific intention of examining the role played by Sn 5s and Pb 6s lone pairs in stabilizing the structure, and in giving rise to semi-metallic behavior (of SnO at ambient pressure and of PbO in the gamma phase). Use of the electron localization function has permitted real-space visualization of the lone pair in these structures. We also discuss the electronic structure of orthorhombic PbO (massicot, space group Pbma), which again has localized lone pairs, contrary to some earlier expectation.
Abstract:
Combinatorial exchanges are double-sided marketplaces with multiple sellers and multiple buyers trading with the help of combinatorial bids. The allocation and other associated problems in such exchanges are known to be among the hardest to solve of all economic mechanisms. It has been shown that the problems of surplus maximization or volume maximization in combinatorial exchanges are inapproximable even with free disposal. In this paper, the surplus maximization problem is formulated as an integer linear programming problem and we propose a Lagrangian relaxation based heuristic to find a near-optimal solution. We develop computationally efficient tâtonnement mechanisms for clearing combinatorial exchanges, where the Lagrangian multipliers can be interpreted as the prices of the items set by the exchange in each iteration. Our mechanisms satisfy the individual rationality and budget nonnegativity properties. Computational experiments performed on representative data sets show that the proposed heuristic produces a feasible solution with a negligible optimality gap.
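The price interpretation of the Lagrangian multipliers can be sketched on a drastically simplified one-sided instance: relax the "each item sold at most once" constraints, so each bid is accepted exactly when its value beats the current item prices, and update the prices by a subgradient step. The bids, step sizes, and one-sided setting are illustrative assumptions; the paper's two-sided exchange with asks and its clearing rules are richer.

```python
# Toy subgradient sketch of Lagrangian relaxation for winner
# determination: multipliers act as per-item prices that rise where
# demand exceeds the single available copy.

bids = {"b1": (6.0, {"x", "y"}), "b2": (4.0, {"x"}), "b3": (3.0, {"y"})}
items = {"x", "y"}
prices = {i: 0.0 for i in items}

for step in range(200):
    # With prices fixed, the relaxed problem decomposes per bid:
    # accept a bid iff its value exceeds the price of its bundle.
    accepted = {b for b, (v, bundle) in bids.items()
                if v - sum(prices[i] for i in bundle) > 0}
    # Subgradient = demand minus supply (one copy of each item).
    for i in items:
        demand = sum(1 for b in accepted if i in bids[b][1])
        prices[i] = max(0.0, prices[i] + (1.0 / (step + 1)) * (demand - 1))

print({i: round(p, 1) for i, p in sorted(prices.items())})
```

With diminishing step sizes the prices settle near values at which over-demanded items price out all but one bidder, which is the tâtonnement behavior the abstract describes.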