990 results for NP-dur


Relevance:

10.00%

Publisher:

Abstract:

We study the problem of matching applicants to jobs under one-sided preferences; that is, each applicant ranks a non-empty subset of jobs under an order of preference, possibly involving ties. A matching M is said to be more popular than T if the applicants that prefer M to T outnumber those that prefer T to M. A matching is said to be popular if there is no matching more popular than it. Equivalently, a matching M is popular if phi(M, T) >= phi(T, M) for all matchings T, where phi(X, Y) is the number of applicants that prefer X to Y. Previously studied solution concepts based on the popularity criterion are either not guaranteed to exist for every instance (e.g., popular matchings) or are NP-hard to compute (e.g., least unpopular matchings). This paper addresses this issue by considering mixed matchings. A mixed matching is simply a probability distribution over matchings in the input graph. The function phi that compares two matchings generalizes in a natural manner to mixed matchings by taking expectation. A mixed matching P is popular if phi(P, Q) >= phi(Q, P) for all mixed matchings Q. We show that popular mixed matchings always exist and we design polynomial time algorithms for finding them. Then we study their efficiency and give tight bounds on the price of anarchy and price of stability of the popular matching problem. (C) 2010 Elsevier B.V. All rights reserved.
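The popularity comparison phi is straightforward to evaluate for concrete (pure) matchings; a minimal Python sketch, with hypothetical applicant/job names and preferences given as ranked tiers (a set per tier, to allow ties):

```python
def phi(M, T, prefs):
    """Count applicants who strictly prefer matching M to matching T.

    prefs[a] is a list of sets of jobs: earlier sets are preferred,
    jobs within the same set are tied. M and T map applicant -> job
    (or are missing the applicant if unmatched, which ranks worst).
    """
    def rank(a, job):
        for r, tier in enumerate(prefs[a]):
            if job in tier:
                return r
        return len(prefs[a])  # unmatched / unranked is worst

    return sum(1 for a in prefs if rank(a, M.get(a)) < rank(a, T.get(a)))

prefs = {"a1": [{"j1"}, {"j2"}], "a2": [{"j1", "j2"}]}
M = {"a1": "j1", "a2": "j2"}
T = {"a1": "j2", "a2": "j1"}
# M is more popular than T iff phi(M, T, prefs) > phi(T, M, prefs)
print(phi(M, T, prefs), phi(T, M, prefs))  # -> 1 0
```

Here a1 strictly prefers M while a2 is indifferent (its two jobs are tied), so M is more popular than T.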

Abstract:

In this paper, we study the problem of wireless sensor network design by deploying a minimum number of additional relay nodes (to minimize network design cost) at a subset of given potential relay locations in order to convey the data from already existing sensor nodes (hereafter called source nodes) to a Base Station within a certain specified mean delay bound. We formulate this problem in two different ways, and show that the problem is NP-Hard. For a problem in which the number of existing sensor nodes and potential relay locations is n, we propose a polynomial-time O(n)-approximation algorithm. Results show that the algorithm performs efficiently in various randomly generated network scenarios: in over 90% of the tested scenarios, it gave solutions that were either optimal or exceeded the optimum by just one relay.

Abstract:

Pd/CeO2 (1 at. %) prepared by the solution-combustion method shows a higher catalytic activity for CO oxidation and NO reduction than Pd metal, PdO, and Pd dispersed over CeO2 by the conventional method. To understand the higher catalytic properties, the structure of the 1 at. % Pd/CeO2 catalyst material has been investigated by X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), and extended X-ray absorption fine structure (EXAFS) spectroscopy. Diffraction lines corresponding to Pd or PdO are not observed in the high-resolution XRD pattern of 1 at. % Pd/CeO2. The structure could be refined for the composition Ce0.99Pd0.01O1.90 in the fluorite structure with 5% oxide ion vacancy. The Pd(3d) peaks in the XPS of 1 at. % Pd/CeO2 are shifted by 3 eV, indicating that Pd is in a highly ionic +2 state. EXAFS studies show an average coordination number of 3 around the Pd2+ ion in the first shell of 1 at. % Pd/CeO2, at a distance of 2.02 Angstrom, instead of 4 as in PdO. The second shell at 2.72 Angstrom is due to Pd-Pd correlation, which is larger than the 2.69 Angstrom in PdO. The third shell at 3.31 Angstrom, with a coordination of 7, is absent in both Pd metal and PdO, and can be attributed to -Pd2+-Ce4+- correlation. Thus, 1 at. % Pd/CeO2 forms a Ce1-xPdxO2-delta type of solid solution having -Pd2+-O2--Ce4+- kinds of linkages.

Abstract:

The structure and chemical environment of Cu in Cu/CeO2 catalysts synthesized by the solution combustion method have been investigated by X-ray diffraction (XRD), transmission electron microscopy (TEM), electron paramagnetic resonance (EPR) spectroscopy, X-ray photoelectron spectroscopy (XPS), cyclic voltammetry (CV), and extended X-ray absorption fine structure (EXAFS) spectroscopy. High-resolution XRD studies of 3 and 5 atom % Cu/CeO2 do not show CuO lines in their respective patterns. The structure could be refined for the composition Ce1-xCuxO2-delta (x = 0.03 and 0.05; delta similar to 0.13 and 0.16) in the fluorite structure with 5-8% oxide ion vacancy. High-resolution TEM did not show CuO particles in 5 atom % Cu/CeO2. EPR as well as XPS studies confirm the presence of Cu2+ species in the CeO2 matrix. Redox potentials of the Cu species in the CeO2 matrix are lower than those in CuO. EXAFS investigations of these catalysts show an average coordination number of 3 around the Cu2+ ion in the first shell, at a distance of 1.96 Angstrom, indicating an O2- ion vacancy around the Cu2+ ion. The Cu-O bond length also decreases compared to that in CuO. The second and third shells around the Cu2+ ion in the catalysts are attributed to -Cu2+-O2--Cu2+- at 2.92 Angstrom and -Cu2+-O2--Ce4+- at 3.15 Angstrom, respectively. The present results provide direct evidence for the formation of a Ce1-xCuxO2-delta type of solid solution phase having -□-Cu2+-O2--Ce4+- kinds of linkages, where □ denotes an oxide ion vacancy.

Abstract:

In computational molecular biology, the aim of restriction mapping is to locate the restriction sites of a given enzyme on a DNA molecule. Double digest and partial digest are two well-studied techniques for restriction mapping. While double digest is NP-complete, there is no known polynomial-time algorithm for partial digest. Another disadvantage of these techniques is that there can be multiple solutions for reconstruction. In this paper, we study a simple technique called labeled partial digest for restriction mapping. We give a fast polynomial-time (O(n^2 log n) worst-case) algorithm for finding all the n sites of a DNA molecule using this technique. An important advantage of the algorithm is the unique reconstruction of the DNA molecule from the digest. The technique is also robust in handling the errors in fragment lengths which arise in the laboratory. We give a robust O(n^4) worst-case algorithm that can provably tolerate an absolute error of O(Delta/n) (where Delta is the minimum inter-site distance), while giving a unique reconstruction. We test our theoretical results by simulating the performance of the algorithm on a real DNA molecule. Motivated by the similarity to the labeled partial digest problem, we address a related problem of interest: the de novo peptide sequencing problem (ACM-SIAM Symposium on Discrete Algorithms (SODA), 2000, pp. 389-398), which arises in the reconstruction of the peptide sequence of a protein molecule. We give a simple and efficient algorithm for the problem without using dynamic programming. The algorithm runs in time O(k log k), where k is the number of ions, and is an improvement over the algorithm of Chen et al. (C) 2002 Elsevier Science (USA). All rights reserved.
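The intuition behind unique reconstruction can be shown with a toy sketch (an illustration of the end-labeling idea only, not the paper's algorithm): if each measured fragment is known to contain the labeled left end of the molecule, its length is exactly the distance from the origin to one restriction site, so sorting the lengths recovers all sites with no mirror-image ambiguity:

```python
def reconstruct_sites(labeled_lengths, molecule_length):
    """Toy reconstruction: labeled_lengths are the lengths of fragments
    that contain the labeled (left) end, i.e. distances from position 0
    to each restriction site. Sorting recovers the site positions
    uniquely -- unlike plain partial digest, no reflected solution exists."""
    sites = sorted(labeled_lengths)
    assert all(0 < s < molecule_length for s in sites), "site outside molecule"
    return sites

print(reconstruct_sites([70, 20, 50], 100))  # -> [20, 50, 70]
```

In plain partial digest, the same multiset of pairwise distances can admit several mutually reflected maps; fixing one end with a label is what removes that ambiguity.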

Abstract:

The evolution of deformation texture in commercially pure titanium with submicron grain size (SMG) was studied using X-ray diffraction (XRD) and electron backscatter diffraction (EBSD) methods. The material was deformed by rolling at room temperature. The deformation mechanism was found to be slip dominated, with a pyramidal slip system facilitating plastic deformation. No evidence of tensile or compressive twinning was detected, in contrast to what is generally seen in titanium with a conventional microcrystalline grain size. The absence of twinning and the propensity of the pyramidal slip system in the SMG Ti are attributed to the lack of coordinated motion of the zonal partial dislocations that leads to twinning.

Abstract:

We report the low-temperature synthesis of La1.95Na0.05NiO4 from NaOH flux, and of the La0.97K0.03NiO3 and La0.95K0.05Ni0.85Cu0.15O3 phases from KOH flux, at 400 °C. Alkali-doped LaNiO3 can be prepared in KOH but not in NaOH flux, and La2NiO4 can be prepared in NaOH but not in KOH flux. The flux-grown oxides were characterized by powder X-ray Rietveld profile analysis and electron microscopy. Sodium-doped La2NiO4 crystallizes in an orthorhombic structure and the potassium-doped LaNiO3 phases crystallize in a rhombohedral structure. La1.95Na0.05NiO4 is weakly paramagnetic and semiconducting, while La0.97K0.03NiO3 and La0.95K0.05Ni0.85Cu0.15O3 show Pauli paramagnetic and metallic behavior. (C) 2002 Editions scientifiques et medicales Elsevier SAS. All rights reserved.

Abstract:

We investigate the following problem: given a set of jobs and a set of people with preferences over the jobs, what is the optimal way of matching people to jobs? Here we consider the notion of popularity. A matching M is popular if there is no matching M' such that more people prefer M' to M than the other way around. Determining whether a given instance admits a popular matching and, if so, finding one, was studied by Abraham et al. (SIAM J. Comput. 37(4):1030-1045, 2007). If there is no popular matching, a reasonable substitute is a matching whose unpopularity is bounded. We consider two measures of unpopularity: the unpopularity factor, denoted by u(M), and the unpopularity margin, denoted by g(M). McCutchen recently showed that computing a matching M with the minimum value of u(M) or g(M) is NP-hard, and that if G does not admit a popular matching, then we have u(M) >= 2 for all matchings M in G. Here we show that a matching M that achieves u(M) = 2 can be computed in O(m√n) time (where m is the number of edges in G and n is the number of nodes) provided a certain graph H admits a matching that matches all people. We also describe a sequence of graphs H = H(2), H(3), ..., H(k) such that if H(k) admits a matching that matches all people, then we can compute in O(km√n) time a matching M such that u(M) <= k - 1 and g(M) <= n(1 - 2/k). Simulation results suggest that our algorithm finds a matching with low unpopularity in random instances.
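For tiny instances, the unpopularity margin g(M) can be evaluated by brute force over all matchings (exponential, for illustration only; the names and strict preference lists are hypothetical, and ties are omitted for simplicity):

```python
from itertools import permutations

def all_matchings(people, jobs):
    """Enumerate every assignment of distinct jobs (or None) to people."""
    pool = list(jobs) + [None] * len(people)
    for perm in permutations(pool, len(people)):
        yield dict(zip(people, perm))

def prefers(a, x, y, prefs):
    """True if person a strictly prefers item x to item y (None is worst)."""
    rank = lambda j: prefs[a].index(j) if j in prefs[a] else len(prefs[a])
    return rank(x) < rank(y)

def unpopularity_margin(M, people, jobs, prefs):
    """g(M): max over matchings M' of (#prefer M' to M) - (#prefer M to M')."""
    best = 0
    for M2 in all_matchings(people, jobs):
        plus = sum(prefers(a, M2[a], M[a], prefs) for a in people)
        minus = sum(prefers(a, M[a], M2[a], prefs) for a in people)
        best = max(best, plus - minus)
    return best

prefs = {"a1": ["j1", "j2"], "a2": ["j1", "j2"]}
M = {"a1": "j1", "a2": "j2"}
print(unpopularity_margin(M, ["a1", "a2"], ["j1", "j2"], prefs))  # -> 0
```

A matching is popular exactly when its margin is 0, as for M above; the factor u(M) could be probed the same way by taking a ratio of the two counts instead of a difference.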

Abstract:

In this paper, we consider the problem of selecting, for any given positive integer k, the top-k nodes in a social network, based on a certain measure appropriate for the social network. This problem is relevant in many settings, such as the analysis of co-authorship networks, diffusion of information, and viral marketing. However, in most situations, this problem turns out to be NP-hard. The existing approaches for solving this problem are based on approximation algorithms and assume that the objective function is sub-modular. In this paper, we propose a novel and intuitive algorithm based on the Shapley value for efficiently computing an approximate solution to this problem. Our proposed algorithm does not use the sub-modularity of the underlying objective function and hence is a general approach. We demonstrate the efficacy of the algorithm using a co-authorship data set from the e-print arXiv (www.arxiv.org) with 8361 authors.
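The Shapley value of a node can be estimated by averaging its marginal contributions over random orderings of the nodes. The sketch below uses a simple hypothetical coverage objective (a node "covers" itself and its neighbours) as a stand-in for whatever measure fits the network; all names and parameters are illustrative:

```python
import random

def shapley_top_k(graph, k, samples=2000, seed=0):
    """Monte Carlo Shapley estimate for the coverage objective
    v(S) = |S together with its neighbours|; returns the k nodes
    with the highest estimated value. graph: dict node -> set of neighbours."""
    rng = random.Random(seed)
    nodes = list(graph)
    value = {u: 0.0 for u in nodes}
    for _ in range(samples):
        rng.shuffle(nodes)            # one random ordering of the nodes
        covered = set()
        for u in nodes:
            gain = len(({u} | graph[u]) - covered)  # marginal contribution
            value[u] += gain
            covered |= {u} | graph[u]
    return sorted(nodes, key=lambda u: value[u], reverse=True)[:k]

g = {1: {2, 3}, 2: {1}, 3: {1}, 4: {5}, 5: {4}}
print(shapley_top_k(g, 2))
```

On this toy graph, node 1 (the hub of the larger component) ranks first; ranking by Shapley value rather than raw degree naturally discounts nodes whose neighbourhoods overlap.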

Abstract:

Earlier studies have exploited statistical multiplexing of flows in the core of the Internet to reduce the buffer requirement in routers. Reducing the memory requirement of routers is important as it enables an improvement in performance and, at the same time, a decrease in cost. In this paper, we observe that the links in the core of the Internet are typically over-provisioned, and this can be exploited to reduce the buffering requirement in routers. The small on-chip memory of a network processor (NP) can be effectively used to buffer packets during most regimes of traffic. We propose a dynamic buffering strategy which buffers packets in the receive and transmit buffers of an NP when the memory requirement is low. When the buffer requirement increases due to bursts in the traffic, memory is allocated to packets in the off-chip DRAM. This scheme effectively mitigates the DRAM access bottleneck, as only a part of the traffic is stored in the DRAM. We build a Petri net model and evaluate the proposed scheme with core-Internet-like traffic. At 77% link utilization, the dynamic buffering scheme has a drop rate of just 0.65%, whereas traditional DRAM buffering has a 4.64% packet drop rate. Even with a high link utilization of 90%, which rarely happens in the core, our dynamic buffering results in a packet drop rate of only 2.17%, while supporting a throughput of 7.39 Gbps. We study the proposed scheme under different conditions to understand the provisioning of processing threads and to determine the queue length at which packets must be buffered in the DRAM. We show that the proposed dynamic buffering strategy drastically reduces the buffering requirement while still maintaining low packet drop rates.
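The strategy can be caricatured by a two-tier queue with an on-chip capacity threshold; a minimal sketch under assumed semantics (the class and parameter names are hypothetical, not the paper's):

```python
from collections import deque

class DynamicBuffer:
    """Keep packets in fast on-chip memory while the queue is short;
    spill to slow off-chip DRAM only when a burst overflows the on-chip space."""

    def __init__(self, onchip_capacity=64):
        self.onchip_capacity = onchip_capacity
        self.onchip = deque()
        self.dram = deque()

    def enqueue(self, pkt):
        # Once packets sit in DRAM, new arrivals must also go to DRAM
        # so that FIFO order is preserved across the two tiers.
        if not self.dram and len(self.onchip) < self.onchip_capacity:
            self.onchip.append(pkt)   # common case: no DRAM access at all
        else:
            self.dram.append(pkt)     # burst: fall back to DRAM

    def dequeue(self):
        if not self.onchip and not self.dram:
            return None
        pkt = self.onchip.popleft() if self.onchip else self.dram.popleft()
        # Drain DRAM back into on-chip memory as space frees up.
        while self.dram and len(self.onchip) < self.onchip_capacity:
            self.onchip.append(self.dram.popleft())
        return pkt
```

New arrivals spill to DRAM only while a burst is in flight, and dequeues drain DRAM back into on-chip memory, so in light traffic the DRAM is never touched.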

Abstract:

In this paper we are concerned with finding the maximum throughput that a mobile ad hoc network can support. Even when nodes are stationary, the problem of determining the capacity region has long been known to be NP-hard. Mobility introduces an additional dimension of complexity because nodes now also have to decide when they should initiate route discovery. Since route discovery involves communication and computation overhead, it should not be invoked very often. On the other hand, mobility implies that routes are bound to become stale resulting in sub-optimal performance if routes are not updated. We attempt to gain some understanding of these effects by considering a simple one-dimensional network model. The simplicity of our model allows us to use stochastic dynamic programming (SDP) to find the maximum possible network throughput with ideal routing and medium access control (MAC) scheduling. Using the optimal value as a benchmark, we also propose and evaluate the performance of a simple threshold-based heuristic. Unlike the optimal policy which requires considerable state information, the heuristic is very simple to implement and is not overly sensitive to the threshold value used. We find empirical conditions for our heuristic to be near-optimal as well as network scenarios when our simple heuristic does not perform very well. We provide extensive numerical and simulation results for different parameter settings of our model.

Abstract:

The effect of strain path change during rolling has been investigated for copper and nickel using X-ray diffraction and electron backscatter diffraction as well as crystal plasticity simulations. Four different strain paths, namely (i) unidirectional rolling, (ii) reverse rolling, (iii) two-step cross rolling, and (iv) multi-step cross rolling, were employed to decipher the effect of strain path change on the evolution of deformation texture and microstructure. The cross rolled samples showed a weaker texture, with prominent Bs {1 1 0}< 1 1 2 > and P(B(ND)) {1 1 0}< 1 1 1 > components, in contrast to the unidirectional and reverse rolled samples, where strong S {1 2 3}< 6 3 4 > and Cu {1 1 2}< 1 1 1 > components were formed. This was more pronounced for the copper samples than for nickel. The cross rolled samples were characterized by a lower anisotropy and Taylor factor, as well as less variation in the Lankford parameter. Viscoplastic self-consistent simulations indicated that slip activity on a higher number of octahedral slip systems can explain the weaker texture as well as the reduced anisotropy in the cross rolled samples. (C) 2011 Elsevier B.V. All rights reserved.

Abstract:

We consider a dense ad hoc wireless network comprising n nodes confined to a given two-dimensional region of fixed area. For the Gupta-Kumar random traffic model and a realistic interference and path loss model (i.e., the channel power gains are bounded above, and are bounded below by a strictly positive number), we study the scaling of the aggregate end-to-end throughput with respect to the network average power constraint, P̄, and the number of nodes, n. The network power constraint P̄ is related to the per-node power constraint, p, as P̄ = np. For large P̄, we show that the throughput saturates as Theta(log P̄), irrespective of the number of nodes in the network. For moderate P̄, which can accommodate spatial reuse to improve end-to-end throughput, we observe that the amount of spatial reuse feasible in the network is limited by the diameter of the network. In fact, we observe that the end-to-end path loss in the network and the amount of spatial reuse feasible in the network are inversely proportional. This puts a restriction on the gains achievable using the cooperative communication techniques studied in the literature, as these rely on direct long-distance communication over the network.

Abstract:

A "plan diagram" is a pictorial enumeration of the execution plan choices of a database query optimizer over the relational selectivity space. We have shown recently that, for industrial-strength database engines, these diagrams are often remarkably complex and dense, with a large number of plans covering the space. However, they can often be reduced to much simpler pictures, featuring significantly fewer plans, without materially affecting the query processing quality. Plan reduction has useful implications for the design and usage of query optimizers, including quantifying redundancy in the plan search space, enhancing the usability of parametric query optimization, identifying error-resistant and least-expected-cost plans, and minimizing the overheads of multi-plan approaches. We investigate here the plan reduction issue from theoretical, statistical and empirical perspectives. Our analysis shows that optimal plan reduction, w.r.t. minimizing the number of plans, is an NP-hard problem in general, and remains so even for a storage-constrained variant. We then present a greedy reduction algorithm with tight and optimal performance guarantees, whose complexity scales linearly with the number of plans in the diagram for a given resolution. Next, we devise fast estimators for locating the best tradeoff between the reduction in plan cardinality and the impact on query processing quality. Finally, extensive experimentation with a suite of multi-dimensional TPC-H-based query templates on industrial-strength optimizers demonstrates that complex plan diagrams easily reduce to "anorexic" (small absolute number of plans) levels, incurring only marginal increases in the estimated query processing costs.
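The flavour of greedy plan-diagram reduction can be sketched on a discretized diagram: a plan is removed if some other plan's estimated cost is within a cost-increase threshold everywhere on its region (a toy model with hypothetical names; the actual algorithm and its guarantees are in the paper):

```python
def greedy_reduce(diagram, cost, threshold=1.1):
    """diagram: dict point -> plan id; cost: dict (plan, point) -> estimated cost.
    Greedily replace plan p by plan q if, at every point p owns, q's
    estimated cost is within `threshold` times p's cost. Mutates diagram."""
    plans = sorted({p for p in diagram.values()})
    for p in plans:
        region = [pt for pt, owner in diagram.items() if owner == p]
        if not region:
            continue  # p was already swallowed earlier
        for q in plans:
            if q == p:
                continue
            if all(cost[(q, pt)] <= threshold * cost[(p, pt)] for pt in region):
                for pt in region:
                    diagram[pt] = q   # q swallows p's entire region
                break
    return diagram

diagram = {(0,): "P1", (1,): "P2"}
cost = {("P1", (0,)): 10.0, ("P2", (0,)): 10.5,
        ("P1", (1,)): 20.0, ("P2", (1,)): 12.0}
print(greedy_reduce(diagram, cost))  # P2 swallows P1's region
```

Here a 10% cost-increase budget (threshold=1.1) lets P2 absorb P1's sole point, shrinking a two-plan diagram to one plan without materially raising estimated costs.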