19 results for Best match

at Indian Institute of Science - Bangalore - India


Relevance:

60.00%

Publisher:

Abstract:

We consider the problem of matching people to jobs, where each person ranks a subset of jobs in an order of preference, possibly involving ties. There are several notions of optimality regarding how to best match each person to a job; in particular, popularity is a natural and appealing notion of optimality. However, popular matchings do not always provide an answer to the problem of determining an optimal matching, since there are simple instances that do not admit popular matchings. This motivates the following extension of the popular matchings problem: Given a graph G = (A ∪ J, E), where A is the set of people and J is the set of jobs, and a list ⟨c_1, ..., c_|J|⟩ denoting upper bounds on the capacities of each job, does there exist ⟨x_1, ..., x_|J|⟩ such that setting the capacity of the i-th job to x_i, where 1 ≤ x_i ≤ c_i for each i, enables the resulting graph to admit a popular matching? In this paper we show that the above problem is NP-hard; it remains NP-hard even when each c_i is 1 or 2.
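
A matching is popular when no other matching is preferred by more people than prefer it (the definition is spelled out in the next result below), so small instances can be checked exhaustively. The following Python sketch, a toy and not the paper's method, brute-forces the capacity question on a made-up instance; the instance, names, and exhaustive enumeration are illustrative and only feasible for tiny inputs.

```python
from itertools import product

# Toy instance: people -> ranked job preferences (ties omitted for simplicity).
prefs = {"p1": ["j1", "j2"], "p2": ["j1"], "p3": ["j1", "j2"]}
caps = {"j1": 2, "j2": 1}   # upper bounds c_i on each job's capacity

def matchings(people, cap):
    """Enumerate every assignment of people to an acceptable job or to None,
    respecting the current job capacities."""
    if not people:
        yield {}
        return
    p, rest = people[0], people[1:]
    for j in prefs[p] + [None]:
        if j is not None and cap[j] == 0:
            continue
        if j is not None:
            cap[j] -= 1
        for m in matchings(rest, cap):
            m[p] = j
            yield m
        if j is not None:
            cap[j] += 1

def votes(m1, m2):
    """Number of people who strictly prefer their assignment in m1 to that
    in m2 (being unmatched ranks below every acceptable job)."""
    def rank(p, j):
        return prefs[p].index(j) if j is not None else len(prefs[p])
    return sum(rank(p, m1[p]) < rank(p, m2[p]) for p in prefs)

def admits_popular_matching(cap):
    ms = list(matchings(list(prefs), dict(cap)))
    return any(all(votes(n, m) <= votes(m, n) for n in ms) for m in ms)

# The NP-hard question: does some capacity vector 1 <= x_i <= c_i work?
jobs = sorted(caps)
for xs in product(*(range(1, caps[j] + 1) for j in jobs)):
    if admits_popular_matching(dict(zip(jobs, xs))):
        print("capacities", dict(zip(jobs, xs)), "admit a popular matching")
```

The exhaustive search is exponential, consistent with the NP-hardness shown in the paper; the sketch is merely an executable statement of the problem.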

Relevance:

60.00%

Publisher:

Abstract:

We consider the problem of matching people to items, where each person ranks a subset of items in an order of preference, possibly involving ties. There are several notions of optimality regarding how to best match a person to an item; in particular, popularity is a natural and appealing notion of optimality. A matching M* is popular if there is no matching M such that the number of people who prefer M to M* exceeds the number who prefer M* to M. However, popular matchings do not always provide an answer to the problem of determining an optimal matching, since there are simple instances that do not admit popular matchings. This motivates the following extension of the popular matchings problem: Given a graph G = (A ∪ B, E), where A is the set of people and B is the set of items, and a list ⟨c_1, ..., c_|B|⟩ denoting upper bounds on the number of copies of each item, does there exist ⟨x_1, ..., x_|B|⟩ such that, for each i, having x_i copies of the i-th item, where 1 ≤ x_i ≤ c_i, enables the resulting graph to admit a popular matching? In this paper we show that the above problem is NP-hard, even when each c_i is 1 or 2. We also show a polynomial-time algorithm for a variant of the above problem in which the total increase in copies is bounded by an integer k. (C) 2011 Elsevier B.V. All rights reserved.

Relevance:

60.00%

Publisher:

Abstract:

This paper presents a general methodology for the synthesis of the external boundary of the workspace of a planar manipulator with arbitrary topology. Both the desired workspace and the manipulator workspaces are identified by their boundaries and are treated as simple closed polygons. The paper introduces the concept of a best match configuration and shows that the corresponding transformation can be obtained by using the concept of shape normalization available in the image processing literature. Introducing the concept of shape into workspace synthesis allows highly accurate synthesis with fewer design variables. The paper uses a new global-property-based vector representation for the shape of the workspaces, which is computationally efficient because six of the seven elements of this vector are obtained as a by-product of the shape normalization procedure. The synthesis of workspaces is formulated as an optimization problem in which the distance between the shape vector of the desired workspace and that of the workspace of the manipulator at hand is minimized by changing the dimensional parameters of the manipulator. In view of the irregular nature of the error manifold, the statistical optimization procedure of simulated annealing has been used. A number of worked-out examples illustrate the generality and efficiency of the present method. (C) 1998 Elsevier Science Ltd. All rights reserved.
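
As a rough illustration of the optimization loop described above, the following Python sketch anneals the two link lengths of a planar 2R manipulator, whose workspace boundary consists of circles of radii l1 + l2 and |l1 - l2|, toward a desired annular boundary. The two-element descriptor and the cooling schedule are invented stand-ins for the paper's seven-element shape vector and its annealing parameters.

```python
import math
import random

def workspace_descriptor(l1, l2):
    """Toy 2-element descriptor of a 2R arm's annular workspace:
    (outer boundary radius, inner boundary radius)."""
    return (l1 + l2, abs(l1 - l2))

desired = (1.0, 0.2)      # target annulus radii
state = [0.8, 0.8]        # initial link lengths l1, l2
temp, cooling = 1.0, 0.995

random.seed(0)
best = list(state)
best_err = math.dist(workspace_descriptor(*state), desired)
for _ in range(5000):
    # Perturb one randomly chosen design variable.
    cand = list(state)
    cand[random.randrange(2)] += random.gauss(0.0, 0.05)
    cand = [max(c, 1e-3) for c in cand]   # keep link lengths positive
    err = math.dist(workspace_descriptor(*cand), desired)
    cur = math.dist(workspace_descriptor(*state), desired)
    # Metropolis rule: accept improvements; sometimes accept worse moves.
    if err < cur or random.random() < math.exp((cur - err) / temp):
        state = cand
        if err < best_err:
            best, best_err = list(state), err
    temp *= cooling

print("best link lengths:", [round(v, 3) for v in best],
      "descriptor error:", round(best_err, 5))
```

On this toy target the annealer settles near l1 = 0.6, l2 = 0.4 (or the symmetric solution), which reproduces the desired outer and inner radii.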

Relevance:

30.00%

Publisher:

Abstract:

In this work, we evaluate the performance of a real-world image processing application that uses a cross-correlation algorithm to compare a given image with a reference one. The algorithm processes individual images, represented as 2-dimensional matrices of single-precision floating-point values, using O(n^4) operations involving dot products and additions. We implement this algorithm on an NVIDIA GTX 285 GPU using CUDA, and also parallelize it for the Intel Xeon (Nehalem) and IBM POWER7 processors, using both manual and automatic techniques. Pthreads and OpenMP with SSE and VSX vector intrinsics are used for the manually parallelized version, while a state-of-the-art optimization framework based on the polyhedral model is used for automatic compiler parallelization and optimization. The performance of this algorithm on the NVIDIA GPU suffers from: (1) a smaller shared memory, (2) unaligned device memory access patterns, (3) expensive atomic operations, and (4) weaker single-thread performance. On commodity multi-core processors, the application dataset is small enough to fit in caches, and when parallelized using a combination of task and short-vector data parallelism (via SSE/VSX) or through fully automatic optimization from the compiler, the application matches or beats the performance of the GPU version. The primary reasons for better multi-core performance include larger and faster caches, higher clock frequency, higher on-chip memory bandwidth, and better compiler optimization and support for parallelization. The best-performing versions on the POWER7, Nehalem, and GTX 285 run in 1.02 s, 1.82 s, and 1.75 s, respectively. These results conclusively demonstrate that, under certain conditions, it is possible for a FLOP-intensive structured application running on a multi-core processor to match or even beat the performance of an equivalent GPU version.
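
For reference, the following NumPy sketch shows the kind of computation being benchmarked: 2-D cross-correlation as O(n^4) dot products and additions, written both as the naive quadruple loop and in an equivalent vectorized form. The image sizes and data are made up; this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64)).astype(np.float32)   # image under test
ref = rng.standard_normal((16, 16)).astype(np.float32)   # reference image

def xcorr_naive(img, ref):
    """O(n^4) cross-correlation: one dot product per output position."""
    H = img.shape[0] - ref.shape[0] + 1
    W = img.shape[1] - ref.shape[1] + 1
    out = np.zeros((H, W), dtype=np.float32)
    for i in range(H):
        for j in range(W):
            patch = img[i:i + ref.shape[0], j:j + ref.shape[1]]
            out[i, j] = np.dot(patch.ravel(), ref.ravel())
    return out

def xcorr_vectorized(img, ref):
    """Same computation via a strided sliding window and one contraction."""
    win = np.lib.stride_tricks.sliding_window_view(img, ref.shape)
    return np.einsum("ijkl,kl->ij", win, ref)

assert np.allclose(xcorr_naive(img, ref), xcorr_vectorized(img, ref),
                   atol=1e-2)
```

The naive loop mirrors the structured dot-product-and-add pattern discussed in the abstract; the vectorized form is what a polyhedral or SIMD-aware compiler would effectively produce.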

Relevance:

20.00%

Publisher:

Abstract:

ASICs offer the best realization of DSP algorithms in terms of performance, but their cost is prohibitive, especially when the volumes involved are low. However, if the architecture synthesis trajectory for such algorithms is such that the target architecture can be identified as an interconnection of elementary parameterized computational structures, then it is possible to attain a close match, in terms of both performance and power, with respect to an ASIC for any algorithmic parameters of the given algorithm. Such an architecture is weakly programmable (configurable) and can be viewed as an application-specific instruction-set processor (ASIP). In this work, we present a methodology to synthesize ASIPs for DSP algorithms. (C) 1999 Elsevier Science B.V. All rights reserved.

Relevance:

20.00%

Publisher:

Abstract:

In many wireless applications, it is highly desirable to have a fast mechanism to resolve or select the packet from the user with the highest priority. Furthermore, individual priorities are often known only locally at the users. In this paper we introduce an extremely fast, local-information-based multiple access algorithm that selects the best node in 1.8 to 2.1 slots, which is much lower than the 2.43-slot average achieved by the best algorithm known to date. The algorithm, which we call Variable Power Multiple Access Selection (VP-MAS), uses the local channel state information from the accessing nodes to the receiver, and maps the priorities into the receive power. It is inherently distributed and scales well with the number of users. We show that mapping onto a discrete set of receive power levels is optimal, and provide a complete characterization for it. The power levels are chosen to exploit the packet capture that inherently occurs in a wireless physical layer. The VP-MAS algorithm adjusts the expected number of users that contend in each step and their respective transmission powers, depending on whether previous transmission attempts resulted in capture, an idle channel, or a collision.
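
The core mechanism, mapping local metrics onto a discrete set of receive power levels so that capture at the receiver resolves contention in a single slot, can be illustrated with a toy Monte Carlo simulation. The power levels, capture rule, and metric distribution below are assumptions made for illustration, not the optimized design from the paper.

```python
import random

random.seed(1)
LEVELS = [1.0, 16.0, 256.0, 4096.0]   # assumed geometrically spaced powers
CAPTURE_RATIO = 2.0                   # capture if strongest > 2x rest combined

def power_of(metric):
    """Map a metric in [0, 1) to a power level: higher priority, more power."""
    return LEVELS[int(metric * len(LEVELS))]

def slot_outcome(metrics):
    """One contention slot: every node transmits; the receiver captures the
    strongest packet only if it sufficiently dominates the interference."""
    powers = [power_of(m) for m in metrics]
    strongest = max(powers)
    if strongest > CAPTURE_RATIO * (sum(powers) - strongest):
        return powers.index(strongest)
    return None   # collision, no capture

trials, hits = 10_000, 0
for _ in range(trials):
    metrics = [random.random() for _ in range(8)]   # 8 contending nodes
    winner = slot_outcome(metrics)
    if winner is not None and metrics[winner] == max(metrics):
        hits += 1
print(f"best node captured in a single slot: {hits / trials:.2%}")
```

With these geometrically spaced levels and eight nodes, the sole occupant of the highest occupied level always captures the channel under this rule, so the single-slot success probability equals the probability that the best node's level is uniquely occupied; VP-MAS goes further by adapting the contending set and powers across successive slots.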

Relevance:

20.00%

Publisher:

Abstract:

We use the HI scale height data along with the HI rotation curve as constraints to probe the shape and density profile of the dark matter halos of M31 (Andromeda) and the superthin, low surface brightness (LSB) galaxy UGC 07321. We model each galaxy as a two-component system of gravitationally coupled stars and gas subjected to the force field of a dark matter halo. For M31, we obtain a flattened halo, which is required to match the outer galactic HI scale height data, with our best-fit axis ratio (0.4) lying at the most oblate end of the distributions obtained from cosmological simulations. For UGC 07321, our best-fit halo core radius is only slightly larger than the stellar disc scale length, indicating that the halo is important even at small radii in this LSB galaxy. The high value of the gas velocity dispersion required to match the scale height data can explain the low star-formation rate of this galaxy.

Relevance:

20.00%

Publisher:

Abstract:

Advertisements (ads) are the main revenue earner for television (TV) broadcasters. As TV reaches a large audience, it acts as the best medium for advertising products and services. With the emergence of digital TV, it is important for broadcasters to provide an intelligent service along various dimensions such as program features, ad features, viewers' interest, and sponsors' preference. We present an automatic ad recommendation algorithm that selects a set of ads by considering these dimensions and semantically matches them with programs. Features of the ad video are captured in terms of annotations, and these are grouped into a number of predefined semantic categories using a categorization technique. A fuzzy categorical-data clustering technique is then applied to the categorized data to select better-suited ads for a particular program. Since the same ad can be recommended for more than one program depending on multiple parameters, fuzzy clustering is well suited to ad recommendation. The relative fuzzy score, called the "degree of membership", calculated for each ad indicates the membership of that ad in different program clusters. Subjective evaluation of the algorithm was carried out by 10 different people, who rated it with a high success score.
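
The clustering step can be sketched with standard fuzzy c-means, which yields exactly the kind of "degree of membership" of each ad in each program cluster described above. The feature vectors, cluster count, and fuzzifier below are invented, and the paper's categorical-data variant would replace the Euclidean distance used here.

```python
import numpy as np

rng = np.random.default_rng(0)
# Each row: an ad's scores over predefined semantic categories (invented data).
ads = rng.random((12, 5))
K, m, iters = 3, 2.0, 50          # program clusters, fuzzifier, iterations

centers = ads[rng.choice(len(ads), K, replace=False)]
for _ in range(iters):
    # Membership update: u_ik proportional to inverse distance^(2/(m-1)).
    d = np.linalg.norm(ads[:, None, :] - centers[None, :, :], axis=2) + 1e-9
    u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    # Center update: fuzzy-weighted mean of the ads.
    w = u ** m
    centers = (w.T @ ads) / w.sum(axis=0)[:, None]

# Degree of membership of each ad in each program cluster; rows sum to 1.
print(np.round(u, 2))
```

Because each row of the membership matrix sums to 1 across clusters, the same ad can carry substantial membership in several program clusters at once, which is precisely the property the abstract exploits.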

Relevance:

20.00%

Publisher:

Abstract:

For an n_t transmit, n_r receive antenna system (an n_t × n_r system), a full-rate space-time block code (STBC) transmits at least n_min = min(n_t, n_r) complex symbols per channel use. The well-known Golden code is an example of a full-rate, full-diversity STBC for two transmit antennas. Its ML-decoding complexity is of the order of M^2.5 for square M-QAM. The Silver code for two transmit antennas has all the desirable properties of the Golden code except its coding gain, but offers a lower ML-decoding complexity of the order of M^2. Importantly, the slight loss in coding gain is negligible compared to the advantage it offers in terms of lowering the ML-decoding complexity. For larger numbers of transmit antennas, the best known codes are the Perfect codes, which are full-rate, full-diversity, information-lossless codes (for n_r ≥ n_t) but have a high ML-decoding complexity of the order of M^(n_t n_min) (for n_r < n_t, the punctured Perfect codes are considered). In this paper, a scheme to obtain full-rate STBCs for 2^a transmit antennas and any n_r, with reduced ML-decoding complexity of the order of M^(n_t(n_min - 3/4) - 0.5), is presented. The codes constructed are also information-lossless for n_r ≥ n_t, like the Perfect codes, and allow higher mutual information than the comparable punctured Perfect codes for n_r < n_t. These codes are referred to as the generalized Silver codes, since they enjoy the same desirable properties as the comparable Perfect codes (except possibly the coding gain) with lower ML-decoding complexity, analogous to the Silver code and the Golden code for two transmit antennas. Simulation results of the symbol error rates for four and eight transmit antennas show that the generalized Silver codes match the punctured Perfect codes in error performance while offering lower ML-decoding complexity.
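
To make the M^k decoding-complexity figures concrete, the following Python sketch implements generic exhaustive ML decoding for a 2x2 MIMO code: it evaluates all M^4 candidate symbol tuples, the baseline joint-decoding cost that structured codes such as the Golden and Silver codes reduce to order M^2.5 and M^2 by decoupling groups of symbols. The code matrix used is plain spatial multiplexing, an illustrative stand-in rather than any of the STBCs discussed above.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
qam = np.array([complex(i, q) for i in (-1, 1) for q in (-1, 1)])  # 4-QAM, M = 4

def codeword(s):
    """Toy full-rate 2x2 code: plain spatial multiplexing of 4 symbols over
    2 channel uses (rows = antennas, columns = time)."""
    return np.array([[s[0], s[2]], [s[1], s[3]]])

H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
tx = rng.choice(qam, 4)                                  # transmitted symbols
noise = 0.1 * (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2)))
Y = H @ codeword(tx) + noise                             # received block

# Exhaustive ML decoding: M^4 = 256 metric evaluations for 4 joint symbols.
best = min(itertools.product(qam, repeat=4),
           key=lambda s: np.linalg.norm(Y - H @ codeword(s)) ** 2)
print("decoded correctly:", np.allclose(best, tx))
```

The search cost grows as M raised to the number of jointly decoded symbols, which is why reducing that exponent, as the generalized Silver codes do, matters for large constellations.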

Relevance:

20.00%

Publisher:

Abstract:

ASICs offer the best realization of DSP algorithms in terms of performance, but their cost is prohibitive, especially when the volumes involved are low. However, if the architecture synthesis trajectory for such algorithms is such that the target architecture can be identified as an interconnection of elementary parameterized computational structures, then it is possible to attain a close match, in terms of both performance and power, with respect to an ASIC for any algorithmic parameters of the given algorithm. Such an architecture is weakly programmable (configurable) and can be viewed as an application-specific instruction-set processor (ASIP). In this work, we present a methodology to synthesize ASIPs for DSP algorithms.

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we present a physics-based closed-form analytical model of the flexural-phonon-dependent diffusive thermal conductivity (kappa) of a suspended rectangular single-layer graphene sheet. A quadratic dependence of the out-of-plane phonon frequency (these modes are generally called flexural phonons) on the phonon wave vector has been taken into account to analyze the behavior of kappa at lower temperatures. Such a dependence has further been used for the determination of second-order three-phonon Umklapp and isotopic scattering. We find that these behaviors in our model are best explained through the upper limit of the Debye cut-off frequency in the second-order three-phonon Umklapp scattering of the long phonon waves, which removes the thermal conductivity singularity by contributing a constant scattering rate at low frequencies, and we note that the out-of-plane Grüneisen parameter for these modes need not be too high. Using this, we clearly demonstrate that kappa follows a T^1.5 law at lower temperatures and a T^2 law at higher temperatures in the absence of isotopes. In their presence, however, the behavior of kappa sharply deviates from the T^2 law at higher temperatures. The present geometry-dependent model of kappa shows an excellent match with various experimental data over a wide range of temperatures and can be put forward for efficient electro-thermal analyses of encased/supported graphene.

Relevance:

20.00%

Publisher:

Abstract:

The way in which basal tractions associated with mantle convection couple with the lithosphere is a fundamental problem in geodynamics. A successful lithosphere-mantle coupling model for the Earth will satisfy observations of plate motions, intraplate stresses, and plate boundary zone deformation. We solve the depth-integrated three-dimensional force balance equations in a global finite element model that takes into account the effects of both topography and shallow lithosphere structure as well as tractions originating from deeper mantle convection. The contribution from topography and lithosphere structure is estimated by calculating gravitational potential energy differences. The basal tractions are derived from a fully dynamic flow model with both radial and lateral viscosity variations. We simultaneously fit stresses and plate motions in order to delineate a best-fit lithosphere-mantle coupling model, using both the World Stress Map and the Global Strain Rate Model to constrain the models. We find that a strongly coupled model with a stiff lithosphere and 3-4 orders of magnitude of lateral viscosity variation in the lithosphere is best able to match the observational constraints. Our predicted deviatoric stresses, which are dominated by the contribution from mantle tractions, range between 20 and 70 MPa. The best-fitting coupled models predict strain rates that are consistent with observations; that is, the intraplate areas are nearly rigid, whereas plate boundaries and some other continental deformation zones display high strain rates. Comparison of mantle tractions and surface velocities indicates that in most areas the tractions are driving, although in a few regions, including western North America, they are resistive. Citation: Ghosh, A., W. E. Holt, and L. M. Wen (2013), Predicting the lithospheric stress field and plate motions by joint modeling of lithosphere and mantle dynamics.

Relevance:

20.00%

Publisher:

Abstract:

The problem of finding a satisfying assignment that minimizes the number of variables set to 1 is NP-complete even for a satisfiable 2-SAT formula. We call this problem MIN ONES 2-SAT. It generalizes the well-studied problem of finding the smallest vertex cover of a graph, which can be modeled using a 2-SAT formula with no negative literals. The natural parameterized version of the problem asks for a satisfying assignment of weight at most k. In this paper, we present a polynomial-time reduction from MIN ONES 2-SAT to VERTEX COVER that does not increase the parameter and ensures that the number of vertices in the reduced instance equals the number of variables of the input formula. Consequently, we conclude that this problem also has a simple 2-approximation algorithm and a (2k - c log k)-variable kernel, subsuming (or, in the case of kernels, improving) the results known earlier. Further, the problem admits algorithms for the parameterized and optimization versions whose runtimes will always match the runtimes of the best-known algorithms for the corresponding versions of VERTEX COVER. Finally, we show that the optimum value of the LP relaxation of MIN ONES 2-SAT and that of the corresponding VERTEX COVER are the same. This implies that the (recent) results for VERTEX COVER parameterized above the optimum value of its LP relaxation carry over to MIN ONES 2-SAT parameterized above the optimum of its LP relaxation. (C) 2013 Elsevier B.V. All rights reserved.
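
The easy direction of the connection above, modeling vertex cover as a MIN ONES 2-SAT instance with no negative literals, fits in a few lines of Python. The graph and the brute-force solver are illustrative; the paper's contribution is the nontrivial reverse reduction.

```python
from itertools import product

# Vertex cover as MIN ONES 2-SAT: one positive clause (u OR v) per edge.
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("b", "d")]
variables = sorted({v for e in edges for v in e})
clauses = [((u, True), (v, True)) for u, v in edges]   # literal = (var, sign)

def satisfies(assignment, clauses):
    """A clause is satisfied when at least one literal matches its sign."""
    return all(assignment[x] == sx or assignment[y] == sy
               for (x, sx), (y, sy) in clauses)

# Brute-force MIN ONES: a minimum-weight satisfying assignment is exactly
# a minimum vertex cover of the graph above.
assignments = (dict(zip(variables, bits))
               for bits in product([False, True], repeat=len(variables)))
best = min((a for a in assignments if satisfies(a, clauses)),
           key=lambda a: sum(a.values()))
print("minimum vertex cover:", sorted(v for v, on in best.items() if on))
```

Setting a variable to 1 corresponds to putting that vertex in the cover, so minimizing the number of 1s minimizes the cover size; with negated literals allowed, the same minimization becomes the harder general problem studied in the paper.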

Relevance:

20.00%

Publisher:

Abstract:

The distributed, low-feedback timer scheme is used in several wireless systems to select the best node from the available nodes. In it, each node sets a timer as a function of a local preference number called a metric, and transmits a packet when its timer expires. The scheme ensures that the timer of the best node, which has the highest metric, expires first. However, it fails to select the best node if another node transmits a packet within Δ s of the transmission by the best node. We derive the optimal metric-to-timer mappings for the practical scenario where the number of nodes is unknown. We consider two cases, in which the probability distribution of the number of nodes is either known a priori or unknown. In the first case, the optimal mapping maximizes the success probability averaged over the probability distribution. In the second case, a robust mapping maximizes the worst-case average success probability over all possible probability distributions on the number of nodes. Results reveal that the proposed mappings deliver significant gains compared to the mappings considered in the literature.
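
A toy simulation makes the scheme and its failure mode concrete. The metric-to-timer mapping used here is a simple staircase that quantizes metrics into Δ-spaced steps; it is an illustrative choice, not the optimal mapping derived in the paper.

```python
import random

random.seed(3)
DELTA = 0.05          # vulnerability window (s): packets this close collide
T_MAX = 1.0           # maximum timer value (s)
N_STEPS = int(T_MAX / DELTA)

def step(metric):
    """Staircase map: higher metric -> earlier, Δ-separated timer step."""
    return min(int((1.0 - metric) * N_STEPS), N_STEPS - 1)

def one_round(n_nodes):
    metrics = [random.random() for _ in range(n_nodes)]
    order = sorted((step(m), m) for m in metrics)
    # Failure: another node's timer expires within Δ of the earliest one,
    # i.e. both land in the same Δ-wide step.
    if len(order) > 1 and order[1][0] == order[0][0]:
        return False
    return order[0][1] == max(metrics)   # earliest expiry is the best node

for n in (2, 5, 10, 20):
    wins = sum(one_round(n) for _ in range(20_000))
    print(f"{n:>2} nodes: best node selected {wins / 20_000:.2%} of rounds")
```

Under this mapping a round succeeds exactly when the best node's step is uniquely occupied, which is why the success probability degrades as the number of contending nodes grows; the optimal and robust mappings in the paper are designed to cope with exactly this uncertainty about the node count.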