Abstract:
We consider the problem of compression via homomorphic encoding of a source having a group alphabet. This is motivated by the problem of distributed function computation, where it is known that if one is only interested in computing a function of several sources, then one can at times improve upon the compression rate required by the Slepian-Wolf bound. The functions of interest are those which can be represented by the binary operation of the group. We first consider the case when the source alphabet is the cyclic Abelian group Z_{p^r}. In this scenario, we show that the set of achievable rates provided by Krithivasan and Pradhan [1] is indeed the best possible. In addition, we provide a simpler proof of their achievability result. In the case of a general Abelian group, we present an achievable rate region that improves upon the one obtained by Krithivasan and Pradhan. We then consider the case when the source alphabet is a non-Abelian group. We show that if all the source symbols have non-zero probability and the center of the group is trivial, then it is impossible to compress such a source if one employs a homomorphic encoder. Finally, we present certain non-homomorphic encoders which are also suitable in the context of function computation over non-Abelian group sources, and provide the rate regions achieved by these encoders.
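The homomorphism property at the heart of such encoders is easy to illustrate. Below is a minimal sketch, assuming numpy; the linear map A is an arbitrary illustration, not a code construction from the paper, and the typical-set decoding of the sum is omitted.

```python
# A linear map over Z_{p^r} commutes with the group operation, so the
# sum of the encodings equals the encoding of the sum: a fusion center
# can compute the encoding of x1 + x2 from the encodings alone.
import numpy as np

q = 4                                   # alphabet Z_{p^r} with p=2, r=2
rng = np.random.default_rng(0)

n, k = 8, 3                             # blocklength n, encoded length k
A = rng.integers(0, q, size=(k, n))     # linear (hence homomorphic) encoder

x1 = rng.integers(0, q, size=n)         # source 1 block
x2 = rng.integers(0, q, size=n)         # source 2 block

def enc(x):                             # encoding: x -> A x (mod q)
    return (A @ x) % q

# Homomorphism: enc(x1) + enc(x2) = enc(x1 + x2) in Z_4^k.
lhs = (enc(x1) + enc(x2)) % q
rhs = enc((x1 + x2) % q)
assert np.array_equal(lhs, rhs)
print("encoding of the mod-4 sum, computed from encodings alone:", lhs)
```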
Abstract:
We consider the problem of wireless channel allocation to multiple users. Each slot is given to the user with the highest metric (e.g., channel gain) in that slot. The scheduler may not know the channel states of all the users at the beginning of each slot. In this scenario, opportunistic splitting is an attractive solution. However, this algorithm requires that the metrics of the different users form independent, identically distributed (i.i.d.) sequences with the same distribution across users, and that this distribution and the number of users be known to the scheduler. This limits the usefulness of opportunistic splitting. In this paper, we develop a parametric version of the algorithm whose optimal parameters are learnt online through a stochastic approximation scheme. Our algorithm does not require the metrics of different users to have the same distribution. The statistics of these metrics and the number of users may be unknown and may also vary with time. Each metric sequence can be Markov. We prove the convergence of the algorithm and demonstrate its utility by scheduling the channel to maximize its throughput while satisfying fairness and/or quality-of-service constraints.
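To give a flavour of the online learning involved, here is a hedged sketch (not the paper's exact scheme, which handles Markov metrics and constrained scheduling): a Robbins-Monro iteration that tracks the (1 - 1/N)-quantile of a user's metric, so a contention threshold can be maintained without knowing the metric distribution. The Rayleigh fading model and the step sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10                        # number of users contending for the slot
theta = 0.0                   # threshold estimate, learnt online
tau = 1.0 - 1.0 / N           # target: P(metric <= theta) = 1 - 1/N

for t in range(1, 200_000):
    metric = rng.rayleigh(scale=1.0)       # this user's channel gain
    step = 1.0 / t ** 0.6                  # diminishing SA step size
    # Robbins-Monro quantile tracking: at equilibrium F(theta) = tau,
    # so on average one of the N users exceeds the threshold per slot.
    theta += step * (tau - (metric <= theta))

# True Rayleigh (1 - 1/N)-quantile for comparison: sqrt(2 ln N).
print(theta, np.sqrt(2 * np.log(N)))
```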
Abstract:
A method is proposed for the explicit determination of the polar decomposition (and the related problem of finding tensor square roots) when the underlying vector space dimension n is arbitrary but finite. The method uses the spectral resolution, and avoids the determination of eigenvectors when the tensor is invertible. For any given dimension n, an appropriately constructed Vandermonde matrix is shown to play a key role in the construction of each of the component matrices (and their inverses) in the polar decomposition.
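The eigenvector-free construction can be sketched as follows, assuming numpy and distinct eigenvalues: the stretch U = sqrt(F^T F) is written as a polynomial in C = F^T F whose coefficients solve a Vandermonde system built from the eigenvalues alone. The tensor F below is an arbitrary example, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
F = rng.standard_normal((n, n)) + 3 * np.eye(n)   # invertible "tensor"

C = F.T @ F                                       # right Cauchy-Green tensor
lam = np.linalg.eigvalsh(C)                       # eigenvalues only

# Vandermonde system: sum_k c_k lam_i^k = sqrt(lam_i), i = 1..n,
# i.e. the polynomial p with p(lam_i) = sqrt(lam_i) on the spectrum.
V = np.vander(lam, increasing=True)
c = np.linalg.solve(V, np.sqrt(lam))

# U = p(C) = sum_k c_k C^k, then the rotation R = F U^{-1}.
U = sum(ck * np.linalg.matrix_power(C, k) for k, ck in enumerate(c))
R = F @ np.linalg.inv(U)

assert np.allclose(R @ U, F)                      # F = R U
assert np.allclose(R.T @ R, np.eye(n), atol=1e-6) # R is orthogonal
```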
Abstract:
The problem of finding the horizontal pullout capacity of vertical anchors embedded in sands, with the inclusion of pseudostatic horizontal earthquake body forces, is tackled in this note. The analysis is carried out using upper-bound limit analysis, with the consideration of two different collapse mechanisms: bilinear and composite logarithmic spiral rupture surfaces. The results are presented in nondimensional form, giving the pullout resistance as a function of earthquake acceleration for different combinations of the embedment ratio of the anchor (lambda), the friction angle of the soil (phi), and the anchor-soil interface wall friction angle (delta). The pullout resistance decreases quite substantially with increases in the magnitude of the earthquake acceleration. For values of delta up to about 0.25phi-0.5phi, the bilinear and composite logarithmic spiral rupture surfaces give almost identical answers, whereas for higher values of delta, the logarithmic spiral provides significantly smaller pullout resistance. The results compare favorably with existing theoretical data.
Abstract:
In this paper, we outline a systematic procedure for scaling analysis of momentum and heat transfer in laser melted pools. With suitable choices of non-dimensionalising parameters, the governing equations coupled with appropriate boundary conditions are first scaled, and the relative significance of the various terms appearing in them is accordingly analysed. The analysis is then utilised to predict the orders of magnitude of some important quantities, such as the velocity scale at the top surface, the velocity boundary layer thickness, the maximum temperature rise in the pool, the fully developed pool depth, and the time required for initiation of melting. Using the scaling predictions, the influence of the various processing parameters on the system variables can be readily recognised, which enables a deeper insight into the physical problem of interest. Moreover, some of the quantities predicted from the scaling analysis can be utilised for an optimised selection of appropriate grid sizes and time steps for full numerical simulation of the process. The scaling predictions are finally assessed by comparison with experimental and numerical results quoted in the literature, and an excellent qualitative agreement is observed.
Abstract:
Angiogenin is a protein belonging to the RNase A superfamily. The RNase activity of this protein is essential for its angiogenic activity. Although all members of the RNase A family carry out RNase activity, they differ markedly in their strength and specificity. In this paper, we address the problem of the higher specificity of angiogenin for cytosine over uracil at the first base-binding position. We have carried out extensive nanosecond-scale molecular dynamics (MD) computer simulations of native bovine angiogenin and of the CMP and UMP complexes of this protein in aqueous medium with explicit molecular solvent. The structures thus generated were subjected to a rigorous free energy component analysis to arrive at a plausible molecular thermodynamic explanation for the substrate specificity of angiogenin.
Abstract:
In computational molecular biology, the aim of restriction mapping is to locate the restriction sites of a given enzyme on a DNA molecule. Double digest and partial digest are two well-studied techniques for restriction mapping. While double digest is NP-complete, there is no known polynomial-time algorithm for partial digest. Another disadvantage of these techniques is that there can be multiple solutions for reconstruction. In this paper, we study a simple technique called labeled partial digest for restriction mapping. We give a fast polynomial time (O(n^2 log n) worst-case) algorithm for finding all the n sites of a DNA molecule using this technique. An important advantage of the algorithm is the unique reconstruction of the DNA molecule from the digest. The technique is also robust in handling errors in fragment lengths, which arise in the laboratory. We give a robust O(n^4) worst-case algorithm that can provably tolerate an absolute error of O(Delta/n) (where Delta is the minimum inter-site distance), while giving a unique reconstruction. We test our theoretical results by simulating the performance of the algorithm on a real DNA molecule. Motivated by the similarity to the labeled partial digest problem, we address a related problem of interest, the de novo peptide sequencing problem (ACM-SIAM Symposium on Discrete Algorithms (SODA), 2000, pp. 389-398), which arises in the reconstruction of the peptide sequence of a protein molecule. We give a simple and efficient algorithm for the problem without using dynamic programming. The algorithm runs in time O(k log k), where k is the number of ions, and is an improvement over the algorithm in Chen et al.
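To make the setting concrete, here is a brute-force toy of labeled partial digest reconstruction in plain Python. It is NOT the paper's O(n^2 log n) algorithm, and the data model (end-labeled fragments distinguishable from internal ones) is our reading of the technique: end fragments give each site's distances to the two ends, and the orientation of each pair is resolved by checking inter-site distances. Reconstruction is unique up to reflection of the molecule.

```python
from itertools import combinations, product
from collections import Counter

L = 100
sites = [18, 40, 65, 77]                      # ground truth (hidden)

# Labeled digest data: each site contributes its distance to both ends;
# internal (unlabeled) fragments give inter-site distances.
end_frags = Counter(x for s in sites for x in (s, L - s))
internal = Counter(abs(a - b) for a, b in combinations(sites, 2))

# Pair each end fragment e with its complement L - e: one pair per site.
pairs, pool = [], Counter(end_frags)
for e in sorted(pool):
    while pool[e] and pool[L - e] - (e == L - e):
        pool[e] -= 1
        pool[L - e] -= 1
        pairs.append((e, L - e))

# Try both orientations for every site; accept a candidate site set if
# its inter-site distances match the observed internal fragments.
for choice in product(*pairs):
    cand = sorted(choice)
    if Counter(abs(a - b) for a, b in combinations(cand, 2)) == internal:
        print("reconstruction (up to reflection):", cand)
        break
```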
Abstract:
The production of rainfed crops in the semi-arid tropics exhibits large variation in response to the variation in seasonal rainfall. There are several farm-level decisions, such as the choice of cropping pattern, whether to invest in fertilizers and pesticides, the period for planting, and the plant population density, for which the appropriate choice (associated with maximum production or minimum risk) depends upon the nature of the rainfall variability or the prediction for a specific year. In this paper, we address the problem of identifying appropriate strategies for the cultivation of rainfed groundnut in the Anantapur region, in a semi-arid part of the Indian peninsula. The approach developed involves participatory research with active collaboration with farmers, so that problems with a perceived need are addressed with the modern tools and data sets available. Given the large spatial variation of climate and soil, the appropriate strategies are necessarily location specific. With the approach adopted, it is possible to tap the detailed location-specific knowledge of the complex rainfed ecosystem and gain an insight into the variety of options for land use and management practices available to each category of stakeholders. We believe such a participatory approach is essential for identifying strategies that have a favourable cost-benefit ratio over the region considered, and hence are associated with a high chance of acceptance by the stakeholders.
Abstract:
Model Reference Adaptive Control (MRAC) of a wide repertoire of stable Linear Time Invariant (LTI) systems is addressed here. Not even an upper bound on the order of the finite-dimensional system is available. Further, the unknown plant is permitted to have both minimum phase and nonminimum phase zeros. The goal is model following with reference to a completely specified reference model excited by a class of piecewise continuous bounded signals. The problem is approached through the time moments representation of an LTI system. The treatment here is confined to Single-Input Single-Output (SISO) systems. The adaptive controller is built upon an on-line scheme for estimating the time moments of a system given no more than its input and output. As a first step, a cascade compensator is devised. The primary contribution lies in developing a unified framework within which to eventually address, with more finesse, the adaptive control of a large family of plants that may be minimum or nonminimum phase. The scheme presented in this paper thus lays the basis for more refined compensators (cascade, feedback, or both), initially for SISO systems and progressively for Multi-Input Multi-Output (MIMO) systems. Simulations are presented.
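As a quick numerical sanity check of the time-moment representation invoked above (the paper's on-line input/output estimator is not reproduced here): for H(s) = 1/(s+a), the k-th time moment of the impulse response, m_k = int t^k h(t) dt, equals k!/a^(k+1). A minimal verification, assuming numpy:

```python
import numpy as np
from math import factorial

a = 2.0
dt = 1e-4
t = np.arange(0.0, 40.0, dt)
h = np.exp(-a * t)                   # impulse response of H(s) = 1/(s+a)

for k in range(4):
    m_k = np.sum(t ** k * h) * dt    # numerical k-th time moment
    print(k, m_k, factorial(k) / a ** (k + 1))   # matches closed form
```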
Abstract:
We address the problem of allocating a single divisible good to a number of agents. The agents have concave valuation functions parameterized by a scalar type, and report only the type. The goal is to find allocatively efficient, strategy proof, nearly budget balanced mechanisms within the Groves class. Near budget balance is attained by returning as much of the received payments as possible to the agents as rebates. Two performance criteria are of interest: the maximum ratio of budget surplus to efficient surplus, and the expected budget surplus, within the class of linear rebate functions; the goal is to minimize them. Assuming that the valuation functions are known, we show that both problems reduce to convex optimization problems, where the convex constraint sets are characterized by a continuum of half-plane constraints parameterized by the vector of reported types. We then propose a randomized relaxation of these problems by sampling constraints. The relaxed problem is a linear program (LP). We identify the number of samples needed for "near-feasibility" of the relaxed constraint set, and, under some conditions on the valuation function, we show that the value of the approximate LP is close to the optimal value. Simulation results show significant improvements of our proposed method over the Vickrey-Clarke-Groves (VCG) mechanism without rebates. In the special case of indivisible goods, the mechanisms in this paper fall back to those proposed by Moulin, by Guo and Conitzer, and by Gujar and Narahari, without any need for randomization. Extensions of the proposed mechanisms to situations where the valuation functions are not known to the central planner are also discussed.
Note to Practitioners: Our results will be useful in all resource allocation problems that involve gathering of information privately held by strategic users, where the utilities are any concave function of the allocations, and where the resource planner is not interested in maximizing revenue, but in efficient sharing of the resource. Such situations arise quite often in fair sharing of internet resources, fair sharing of funds across departments within the same parent organization, auctioning of public goods, etc. We study methods to achieve near budget balance by first collecting payments according to the celebrated VCG mechanism, and then returning as much of the collected money as possible as rebates. Our focus on linear rebate functions allows for easy implementation. The resulting convex optimization problem is solved via relaxation to a randomized linear programming problem, for which several efficient solvers exist. This relaxation is enabled by constraint sampling. Keeping practitioners in mind, we identify the number of samples that assures a desired level of "near-feasibility" with the desired confidence level. Our methodology will occasionally require a subsidy from outside the system; we demonstrate via simulation, however, that if the mechanism is repeated several times over independent instances, then past surplus can support the subsidy requirements. We also extend our results to situations where the strategic users' utility functions are not known to the allocating entity, a common situation in the context of internet users and other problems.
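The constraint-sampling step generalizes beyond this mechanism. Below is a generic sketch, assuming numpy and scipy: a semi-infinite LP "min c^T x subject to a(theta)^T x <= b(theta) for all theta" is relaxed to finitely many sampled constraints and handed to an off-the-shelf LP solver. The functions a(.) and b(.) are placeholders, not the paper's rebate constraints.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
d, n_samples = 5, 2000                 # decision dim, sampled constraints

def a(theta):                          # constraint normal (placeholder)
    return np.cos(np.outer(theta, np.arange(1, d + 1))).reshape(-1)

def b(theta):                          # constraint offset (placeholder)
    return 1.0 + theta.sum()

# Sample type vectors and stack the corresponding half-plane constraints.
thetas = rng.uniform(0, 1, size=(n_samples, 1))
A_ub = np.vstack([a(t) for t in thetas])
b_ub = np.array([b(t) for t in thetas])

c = -np.ones(d)                        # maximize sum(x) -> minimize -sum(x)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(-10, 10)] * d,
              method="highs")
print(res.x, res.fun)
```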
Abstract:
We consider the problem of compression of a non-Abelian source. This is motivated by the problem of distributed function computation, where it is known that if one is only interested in computing a function of several sources, then one can often improve upon the compression rate required by the Slepian-Wolf bound. Let G be a non-Abelian group with center Z(G). We show here that it is impossible to compress a source with symbols drawn from G when Z(G) is trivial if one employs a homomorphic encoder and a typical-set decoder. We provide achievable upper bounds on the minimum rate required to compress a non-Abelian group source with non-trivial center. Also, in a two-source setting, we provide achievable upper bounds for the compression of any non-Abelian group source, using a non-homomorphic encoder.
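The structural condition is easy to test computationally. A small sketch, assuming sympy: the symmetric group S_3 is non-Abelian with trivial center (so, by the result above, no homomorphic encoder can compress a full-support source over it), while the dihedral group D_4 is non-Abelian with a center of order 2, the case covered by the achievable upper bounds.

```python
from sympy.combinatorics.named_groups import DihedralGroup, SymmetricGroup

G = SymmetricGroup(3)                  # non-Abelian, trivial center
H = DihedralGroup(4)                   # non-Abelian, non-trivial center

print(G.order(), G.center().order())   # 6 1 -> homomorphic compression ruled out
print(H.order(), H.center().order())   # 8 2 -> achievable bounds apply
```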
Abstract:
One of the assumptions of the van der Waals and Platteeuw theory for gas hydrates is that the host water lattice is rigid and not distorted by the presence of guest molecules. In this work, we study the effect of this approximation on the triple-point lines of gas hydrates. We calculate the triple-point lines of methane and ethane hydrates via Monte Carlo molecular simulations and compare the simulation results with the predictions of the van der Waals and Platteeuw theory. Our study shows that even if the exact intermolecular potential between the guest molecules and water is known, the dissociation temperatures predicted by the theory are significantly higher. This has serious implications for the modeling of gas hydrate thermodynamics: in spite of the several impressive efforts made toward obtaining an accurate description of intermolecular interactions in gas hydrates, the theory will suffer from a lack of robustness if the movement of the water molecules is not adequately addressed.
Abstract:
In this paper, we address the problem of transmission of correlated sources over a fading multiple access channel (MAC). We provide sufficient conditions for transmission with given distortions. Next, these conditions are specialized to a Gaussian MAC (GMAC). Transmission schemes for discrete and Gaussian sources over a fading GMAC are considered, and various power allocation strategies are compared.
Keywords: fading MAC, power allocation, random TDMA, amplify and forward, correlated sources.
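A toy sketch of one of the schemes named in the keywords, amplify-and-forward for two correlated Gaussian sources over a non-fading GMAC, assuming numpy: each encoder scales its source to its power budget, the channel adds the two signals plus noise, and the decoder applies linear MMSE. The parameters are arbitrary illustrations, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n, rho, P, sigma2 = 100_000, 0.7, 1.0, 0.5

cov = np.array([[1.0, rho], [rho, 1.0]])
S = rng.multivariate_normal([0.0, 0.0], cov, size=n)  # correlated sources

X = np.sqrt(P) * S                                    # amplify and forward
Y = X[:, 0] + X[:, 1] + rng.normal(0, np.sqrt(sigma2), n)

# Linear MMSE estimate of S1 from Y: g = E[S1 Y] / E[Y^2].
g = (np.sqrt(P) * (1 + rho)) / (2 * P * (1 + rho) + sigma2)
D1 = np.mean((S[:, 0] - g * Y) ** 2)
print("distortion for source 1:", D1)                 # below prior variance 1
```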
Abstract:
We address the problem of local-polynomial modeling of smooth time-varying signals with unknown functional form, in the presence of additive noise. The problem is formulated in the time domain, and the polynomial coefficients are estimated in the pointwise minimum mean square error (PMMSE) sense. The choice of the window length for local modeling introduces a bias-variance tradeoff, which we solve optimally by using the intersection-of-confidence-intervals (ICI) technique. The combination of the local polynomial model and the ICI technique gives rise to an adaptive signal model equipped with a time-varying, PMMSE-optimal window length whose performance is superior to that obtained with a fixed window length. We also evaluate the sensitivity of the ICI technique with respect to the confidence interval width. Simulation results on electrocardiogram (ECG) signals show that at 0 dB signal-to-noise ratio (SNR), one can achieve about 12 dB improvement in SNR. Monte Carlo performance analysis shows that the performance is comparable to that of basic wavelet techniques: for 0 dB SNR, the adaptive window technique yields about 2-3 dB higher SNR than wavelet regression techniques, while for SNRs greater than 12 dB, the wavelet techniques yield about 2 dB higher SNR.
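A compact sketch of the ICI window-selection rule, assuming numpy, known noise standard deviation, and a degree-0 local model for brevity (the paper fits higher-degree local polynomials in the PMMSE sense); the test signal, Gamma, and window set are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 1000)
clean = np.sin(2 * np.pi * 3 * t) * np.exp(-2 * t)   # smooth test signal
sigma = 0.5
y = clean + sigma * rng.standard_normal(t.size)

windows = [5, 9, 17, 33, 65, 129]                    # increasing lengths
gamma = 2.0                                          # CI width parameter

def ici_estimate(y, i):
    lo, hi, best = -np.inf, np.inf, y[i]
    for h in windows:
        seg = y[max(0, i - h // 2): i + h // 2 + 1]
        mu = seg.mean()                              # local estimate
        ci = gamma * sigma / np.sqrt(seg.size)       # confidence half-width
        lo, hi = max(lo, mu - ci), min(hi, mu + ci)  # running intersection
        if lo > hi:                                  # intersection empty:
            return best                              # keep previous window
        best = mu
    return best

den = np.array([ici_estimate(y, i) for i in range(y.size)])
snr_in = 10 * np.log10(np.mean(clean**2) / sigma**2)
snr_out = 10 * np.log10(np.mean(clean**2) / np.mean((den - clean)**2))
print(f"SNR in {snr_in:.1f} dB -> out {snr_out:.1f} dB")
```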
Abstract:
In this article, we study the problem of joint congestion control, routing and MAC-layer scheduling in a multi-hop wireless mesh network, where the nodes in the network are subject to maximum energy expenditure rates. We model link contention in the wireless network using the contention graph, and we model the energy expenditure rate constraints of the nodes using the energy expenditure rate matrix. We formulate the problem as an aggregate utility maximization problem and apply duality theory to decompose it into two sub-problems: a network-layer routing and congestion control problem, and a MAC-layer scheduling problem. Each source adjusts its rate based on the cost of the least-cost path to its destination, where the cost of the path includes not only the prices of the links on it but also the prices associated with the nodes on the path. The MAC-layer scheduling of the links is carried out based on the prices of the links. We study the effects of the energy expenditure rate constraints of the nodes on the optimal throughput of the network.
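A minimal sketch of the price-based decomposition described above, assuming numpy and log utilities: prices are updated by projected subgradient ascent, and each source sets its rate from the total price of its path (for U(x) = log x, the best response is x = 1/price). The two-source topology, capacities, and step size are toy assumptions, and the MAC-layer scheduling sub-problem is not modeled.

```python
import numpy as np

# Two sources; source 0 uses links {0, 1}, source 1 uses link {1}.
routes = np.array([[1.0, 1.0],
                   [0.0, 1.0]])            # routes[s, l] = 1 if s uses l
cap = np.array([1.0, 1.5])                 # per-link rate budgets (e.g.
                                           # from contention/energy limits)
lam = np.ones(2)                           # link prices (dual variables)
alpha = 0.01                               # subgradient step size

for _ in range(20_000):
    price = routes @ lam                   # path price seen by each source
    x = 1.0 / price                        # U(x) = log x  ->  x* = 1/price
    # Price update: raise the price of overloaded links, lower otherwise.
    lam = np.maximum(lam + alpha * (routes.T @ x - cap), 1e-6)

print("rates:", x, "prices:", lam)         # converges near x = (0.75, 0.75)
```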