941 results for Google, String matching


Relevance: 20.00%

Publisher:

Abstract:

In this paper, we first describe a framework to model the sponsored search auction on the web as a mechanism design problem. Using this framework, we describe two well-known mechanisms for sponsored search auctions: Generalized Second Price (GSP) and Vickrey-Clarke-Groves (VCG). We then derive a new mechanism for sponsored search auctions, which we call the optimal (OPT) mechanism. The OPT mechanism maximizes the search engine's expected revenue while achieving Bayesian incentive compatibility and individual rationality of the advertisers. We then undertake a detailed comparative study of the mechanisms GSP, VCG, and OPT. We compute and compare the expected revenue earned by the search engine under the three mechanisms when the advertisers are symmetric and some special conditions are satisfied. We also compare the three mechanisms in terms of incentive compatibility, individual rationality, and computational complexity. Note to Practitioners: The advertiser-supported web site is one of the successful business models in the emerging web landscape. When an Internet user enters a keyword (i.e., a search phrase) into a search engine, the user gets back a page with results containing the links most relevant to the query as well as sponsored links (also called paid advertisement links). When a sponsored link is clicked, the user is directed to the corresponding advertiser's web page. The advertiser pays the search engine in some appropriate manner for sending the user to its web page. Against every search performed by any user on any keyword, the search engine faces the problem of matching a set of advertisers to the sponsored slots. In addition, the search engine also needs to decide on a price to be charged to each advertiser. Due to increasing demand for Internet advertising space, most search engines currently use auction mechanisms for this purpose. These are called sponsored search auctions. A significant percentage of the revenue of Internet giants such as Google, Yahoo!, and MSN comes from sponsored search auctions. In this paper, we study two auction mechanisms, GSP and VCG, which are quite popular in the sponsored search auction context, and pursue the objective of designing a mechanism that is superior to these two. In particular, we propose a new mechanism which we call the OPT mechanism. This mechanism maximizes the search engine's expected revenue subject to achieving Bayesian incentive compatibility and individual rationality. Bayesian incentive compatibility guarantees that it is optimal for each advertiser to bid his/her true value provided that all other agents also bid their respective true values. Individual rationality ensures that the agents participate voluntarily in the auction, since they are assured of gaining a non-negative payoff by doing so.
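To make the difference between GSP and VCG concrete, the following sketch computes per-advertiser expected payments in the standard slot model, in which advertisers are ranked by bid and slot k has a known click-through rate. This is a minimal illustration under these textbook assumptions; the function names, the pay-per-click accounting, and the example numbers are not taken from the paper.

```python
def gsp_payments(bids, ctrs):
    """Expected GSP payment per advertiser: the winner of slot k pays the
    (k+1)-th highest bid per click, times the slot's click-through rate."""
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    pay = {}
    for slot, i in enumerate(order[:len(ctrs)]):
        next_bid = bids[order[slot + 1]] if slot + 1 < len(order) else 0.0
        pay[i] = next_bid * ctrs[slot]
    return pay


def vcg_payments(bids, ctrs):
    """Expected VCG payment per advertiser: each winner pays the total value
    lost by the advertisers ranked below it because it occupies its slot."""
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    c = list(ctrs) + [0.0]  # a "slot" below the last one receives no clicks
    pay = {}
    for slot, i in enumerate(order[:len(ctrs)]):
        pay[i] = sum(bids[order[j]] * (c[j - 1] - c[j])
                     for j in range(slot + 1, min(len(order), len(ctrs) + 1)))
    return pay


# Two slots with click-through rates 0.3 and 0.2, four truthful bids per click:
bids, ctrs = [10.0, 8.0, 5.0, 2.0], [0.3, 0.2]
print({a: round(p, 2) for a, p in gsp_payments(bids, ctrs).items()})  # {0: 2.4, 1: 1.0} -> GSP revenue 3.4
print({a: round(p, 2) for a, p in vcg_payments(bids, ctrs).items()})  # {0: 1.8, 1: 1.0} -> VCG revenue 2.8
```

With identical truthful bids, GSP payments weakly exceed VCG payments slot by slot, which is why the revenue comparison only becomes interesting once strategic (equilibrium) bidding and the OPT mechanism are taken into account.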

Relevance: 20.00%

Publisher:

Abstract:

For many, particularly in the Anglophone world and Western Europe, it may be obvious that Google has a monopoly over online search and advertising, and that this is an undesirable state of affairs due to Google's ability to mediate information flows online. The baffling question may be why governments and regulators are doing little to nothing about this situation, given the increasingly pivotal importance of the internet and free-flowing communications in our lives. However, the law concerning monopolies, namely antitrust or competition law, works in what may be seen as a less intuitive way by the general public. Monopolies themselves are not illegal. Conduct that is unlawful, i.e. abuse of that market power, is defined by a complex set of rules and revolves principally around economic harm suffered due to anticompetitive behavior. However, the effect of information monopolies over search, such as Google's, is more than just economic, yet competition law does not address this. Furthermore, Google's collection and analysis of user data and its portfolio of related services make it difficult for others to compete. Such a situation may also explain why Google's established search rivals, Bing and Yahoo, have not managed to provide services that are as effective or popular as Google's own (on this issue see also the texts by Dirk Lewandowski and Astrid Mager in this reader). Users, however, are not entirely powerless. Google's business model rests, at least partially, on them, and especially on the data collected about them. If they stop using Google, then Google is nothing.

Relevance: 20.00%

Publisher:

Abstract:

We study the properties of walls of marginal stability for BPS decays in a class of N = 2 theories. These theories arise in N = 2 string compactifications obtained as freely acting orbifolds of N = 4 theories; examples include the STU model and the FHSV model. The cross sections of these walls for a generic decay in the axion-dilaton plane reduce to lines or circles. From the continuity properties of walls of marginal stability, we show that the central charges of BPS states do not vanish in the interior of the moduli space. Given a charge vector of a BPS state corresponding to a large black hole in these theories, we show that all walls of marginal stability intersect at the same point in the lower half of the axion-dilaton plane. We isolate a class of decays whose walls of marginal stability always lie in a region bounded by the walls formed by decays to small black holes. This enables us to isolate a region in moduli space in which no decays within this class occur. We then study entropy enigma decays for such models and show that for generic values of the moduli, that is, when the moduli are of order one compared to the charges, entropy enigma decays do not occur in these models.
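For reference, the wall of marginal stability for a two-centre decay \(\Gamma \to \Gamma_1 + \Gamma_2\) is the locus in moduli space where the central charges of the decay products align in phase, so that the BPS mass can split without an energy cost. This is the standard definition rather than a formula quoted from the paper:
\[
W(\Gamma_1,\Gamma_2) \;=\; \{\, t \in \mathcal{M} \;:\; \arg Z(\Gamma_1;t) = \arg Z(\Gamma_2;t) \,\},
\qquad
|Z(\Gamma;t)| = |Z(\Gamma_1;t)| + |Z(\Gamma_2;t)| \ \text{on}\ W .
\]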

Relevance: 20.00%

Publisher:

Abstract:

State-of-the-art image-set matching techniques typically implicitly model each image-set with a Gaussian distribution. Here, we propose to go beyond these representations and model image-sets as probability distribution functions (PDFs) using kernel density estimators. To compare and match image-sets, we exploit Csiszár f-divergences, which bear strong connections to the geodesic distance defined on the space of PDFs, i.e., the statistical manifold. Furthermore, we introduce valid positive definite kernels on the statistical manifold, which let us make use of more powerful classification schemes to match image-sets. Finally, we introduce a supervised dimensionality reduction technique that learns a latent space where f-divergences reflect the class labels of the data. Our experiments on diverse problems, such as video-based face recognition and dynamic texture classification, evidence the benefits of our approach over the state-of-the-art image-set matching methods.
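To make the PDF-based view concrete, the sketch below fits Gaussian kernel density estimators to two image sets (rows are feature vectors) and computes a Monte Carlo estimate of the Kullback-Leibler divergence, one member of the Csiszár f-divergence family. The kernel choice, the particular divergence, and the function names are assumptions made for illustration, not the paper's exact pipeline.

```python
import numpy as np
from scipy.stats import gaussian_kde


def kde_kl_divergence(x, y, n_samples=2000):
    """Monte Carlo estimate of KL(p || q), where p and q are Gaussian KDEs
    fitted to image sets x and y (arrays of shape (n_images, n_features))."""
    p = gaussian_kde(x.T)  # gaussian_kde expects shape (n_features, n_points)
    q = gaussian_kde(y.T)
    s = p.resample(n_samples)                      # samples drawn from p
    return float(np.mean(p.logpdf(s) - q.logpdf(s)))


def image_set_dissimilarity(x, y):
    """Symmetrised divergence used as an image-set-to-image-set distance."""
    return 0.5 * (kde_kl_divergence(x, y) + kde_kl_divergence(y, x))
```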

Relevance: 20.00%

Publisher:

Abstract:

We present a new method for establishing correlation between deuterium and its attached carbon in a deuterated liquid crystal. The method is based on transfer of polarization using the DAPT pulse sequence, proposed originally for two spin-1/2 nuclei and now extended to a spin-1 and a spin-1/2 nucleus. DAPT utilizes the evolution of magnetization of the spin pair under two blocks of phase-shifted BLEW-12 pulses applied to one of the spins, separated by a 90-degree pulse on the other spin. The method is easy to implement and, unlike Hartmann-Hahn cross-polarization, does not need to satisfy a matching condition. The experimental results presented demonstrate the efficacy of the method.
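For context, the matching condition required by Hartmann-Hahn cross-polarization (and avoided by the method above) is the standard requirement that the rotating-frame nutation frequencies of the two spin species coincide; this is quoted here as the textbook condition, not as a detail from the paper:
\[
\gamma_I B_{1I} \;=\; \gamma_S B_{1S}
\quad\Longleftrightarrow\quad
\omega_{1I} = \omega_{1S},
\]
where \(\gamma\) denotes the gyromagnetic ratios and \(B_1\) the applied radio-frequency field amplitudes.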

Relevance: 20.00%

Publisher:

Abstract:

In this paper, we present a new feature-based approach for mosaicing of camera-captured document images. A novel block-based scheme is employed to ensure that corners can be reliably detected over a wide range of images. A 2-D discrete cosine transform is computed for image blocks defined around each of the detected corners, and a small subset of the coefficients is used as a feature vector. A two-pass feature matching is performed to establish point correspondences from which the homography relating the input images can be computed. The algorithm is tested on a number of complex document images casually taken with a hand-held camera, yielding convincing results.
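The sketch below illustrates a pipeline of this kind: detect corners, describe each with the low-frequency 2-D DCT coefficients of a surrounding block, match descriptors, and estimate the homography with RANSAC. The block size, corner detector, number of coefficients, and the ratio-test matching (standing in for the paper's two-pass scheme) are all illustrative assumptions.

```python
import numpy as np
import cv2
from scipy.fft import dctn


def corner_dct_descriptors(gray, block=16, max_corners=200):
    """Detect corners in a grayscale image and describe each with the
    low-frequency 2-D DCT coefficients of the surrounding block."""
    corners = cv2.goodFeaturesToTrack(gray, max_corners, 0.01, 10)
    h, w = gray.shape
    r = block // 2
    pts, descs = [], []
    for x, y in corners.reshape(-1, 2):
        x, y = int(x), int(y)
        if r <= x < w - r and r <= y < h - r:
            patch = gray[y - r:y + r, x - r:x + r].astype(np.float64)
            descs.append(dctn(patch, norm="ortho")[:4, :4].ravel())
            pts.append((x, y))
    return np.float32(pts), np.float32(descs)


def estimate_homography(img1, img2):
    """Match DCT descriptors between two grayscale document images and
    estimate the relating homography with RANSAC."""
    p1, d1 = corner_dct_descriptors(img1)
    p2, d2 = corner_dct_descriptors(img2)
    pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    good = [m for m, n in pairs if m.distance < 0.75 * n.distance]
    src = p1[[m.queryIdx for m in good]].reshape(-1, 1, 2)
    dst = p2[[m.trainIdx for m in good]].reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H
```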

Relevance: 20.00%

Publisher:

Abstract:

The legality of the operation of Google's search engine, and its liability as an Internet intermediary, has been tested in various jurisdictions on various grounds. In Australia, there was an ultimately unsuccessful case against Google under the Australian Consumer Law relating to how it presents results from its search engine. Despite this failed claim, several complex issues were not adequately addressed in the case, including whether Google sufficiently distinguishes between the different parts of its search results page so as not to mislead or deceive consumers. This article seeks to address this question of consumer confusion by drawing on empirical survey evidence of Australian consumers' understanding of Google's search results layout. This evidence, the first of its kind in Australia, indicates some level of consumer confusion. The implications for future legal proceedings against Google in Australia and in other jurisdictions are discussed.

Relevance: 20.00%

Publisher:

Abstract:

A new rotating beam finite element is developed in which the basis functions are obtained by exactly solving the governing static homogeneous differential equation of a stiff string, which results from an approximation in the rotating beam equation. These shape functions depend on the rotation speed and the element position along the beam, and they account for the centrifugal stiffening effect. Using this new element and the Hermite cubic finite element, a convergence study of natural frequencies is performed, and it is found that the new element converges much more rapidly than the conventional Hermite cubic element for the first two modes at higher rotation speeds. The new element is also applied to uniform and tapered rotating beams to determine their natural frequencies, and the results compare very well with published results in the literature.
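For context, the stiff-string approximation treats the centrifugal tension as constant within an element, so the governing static homogeneous equation and its general solution, which supplies the four element shape functions, take the standard form below. The notation follows the usual stiff-string convention and is not copied from the paper:
\[
EI\,\frac{d^{4}w}{dx^{4}} - T\,\frac{d^{2}w}{dx^{2}} = 0
\quad\Longrightarrow\quad
w(x) = c_1 + c_2\,x + c_3\cosh(\alpha x) + c_4\sinh(\alpha x),
\qquad
\alpha = \sqrt{T/(EI)},
\]
where \(EI\) is the bending stiffness and \(T\) the element-wise constant centrifugal tension, which depends on the rotation speed and on the element's position along the beam; this is how the shape functions acquire that dependence.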

Relevance: 20.00%

Publisher:

Abstract:

In this thesis we examine multi-field inflationary models of the early universe. Since non-Gaussianities may allow for the possibility to discriminate between models of inflation, we compute deviations from a Gaussian spectrum of primordial perturbations by extending the delta-N formalism. We use N-flation as a concrete model; our findings show that these models are generically indistinguishable as long as the slow-roll approximation is still valid. Besides computing non-Gaussianities, we also investigate preheating after multi-field inflation. Within the framework of N-flation, we find that preheating via parametric resonance is suppressed, an indication that it is the old theory of preheating that is applicable. In addition to studying non-Gaussianities and preheating in multi-field inflationary models, we study magnetogenesis in the early universe. To this aim, we propose a mechanism to generate primordial magnetic fields via rotating cosmic string loops. Magnetic fields in the micro-Gauss range have been observed in galaxies and clusters, but their origin has remained elusive. We consider a network of strings and find that rotating cosmic string loops, which are continuously produced in such networks, are viable candidates for magnetogenesis with relevant strength and length scales, provided we use a high string tension and an efficient dynamo.
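For reference, in the delta-N formalism the curvature perturbation and the resulting local non-Gaussianity parameter take the standard form below, where indices label the fields and commas denote derivatives of the e-folding number N with respect to the initial field values; this is the usual textbook expression, not a result quoted from the thesis:
\[
\zeta \simeq N_{,a}\,\delta\phi^{a} + \tfrac{1}{2}\,N_{,ab}\,\delta\phi^{a}\delta\phi^{b},
\qquad
\frac{6}{5}\,f_{\mathrm{NL}} = \frac{N_{,a}\,N_{,b}\,N^{,ab}}{\left(N_{,c}\,N^{,c}\right)^{2}} .
\]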

Relevance: 20.00%

Publisher:

Abstract:

When augmented with the longest common prefix (LCP) array and some other structures, the suffix array can solve many string processing problems in optimal time and space. A compressed representation of the LCP array is also one of the main building blocks in many compressed suffix tree proposals. In this paper, we describe a new compressed LCP representation: the sampled LCP array. We show that, when used with a compressed suffix array (CSA), the sampled LCP array often offers better time/space trade-offs than the existing alternatives. We also show how to construct the compressed representations of the LCP array directly from a CSA.
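For readers less familiar with the LCP array itself, the sketch below computes the plain (uncompressed) LCP array from a text and its suffix array using Kasai's linear-time algorithm; the sampled and compressed representations proposed in the paper are not reproduced here.

```python
def lcp_from_suffix_array(text, sa):
    """Kasai's algorithm: O(n) LCP array from a text and its suffix array.
    lcp[i] is the longest common prefix length of suffixes sa[i-1] and sa[i]."""
    n = len(text)
    rank = [0] * n
    for i, s in enumerate(sa):
        rank[s] = i
    lcp = [0] * n
    h = 0
    for i in range(n):
        if rank[i] > 0:
            j = sa[rank[i] - 1]
            while i + h < n and j + h < n and text[i + h] == text[j + h]:
                h += 1
            lcp[rank[i]] = h
            h = max(h - 1, 0)
        else:
            h = 0
    return lcp


# The suffix array of "banana" is [5, 3, 1, 0, 4, 2]; its LCP array is [0, 1, 3, 0, 0, 2].
print(lcp_from_suffix_array("banana", [5, 3, 1, 0, 4, 2]))
```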

Relevance: 20.00%

Publisher:

Abstract:

Equations proposed in previous work on the non-linear motion of a string show a basic disagreement, which is here traced to an assumption about the longitudinal displacement u. It is shown that it is neither necessary nor justifiable to assume that u is zero; and also that the velocity of propagation of u disturbances in a string is different from that in an infinite medium, although this difference is usually negligible. After formulating the exact equations of motion for the string, a systematic procedure is described for obtaining approximations to these equations to any order, making only the assumption that the strain in the material of the string is small. The lowest order equations in this scheme are non-linear, and are used to describe the response of a string near resonance. Finally, it is shown that in the absence of damping, planar motion of a string is always unstable at sufficiently high amplitudes, the critical amplitude falling to zero at the natural frequency and its subharmonics. The effect of slight damping on this instability is also discussed.

Relevance: 20.00%

Publisher:

Abstract:

The core aim of machine learning is to make a computer program learn from experience. Learning from data is usually defined as the task of learning regularities or patterns in data in order to extract useful information, or to learn the underlying concept. An important sub-field of machine learning is called multi-view learning, where the task is to learn from multiple data sets or views describing the same underlying concept. A typical example of such a scenario would be to study a biological concept using several biological measurements such as gene expression, protein expression and metabolic profiles, or to classify web pages based on their content and the contents of their hyperlinks. In this thesis, novel problem formulations and methods for multi-view learning are presented. The contributions include a linear data fusion approach during exploratory data analysis, a new measure to evaluate different kinds of representations for textual data, and an extension of multi-view learning to novel scenarios where the correspondence of samples in the different views or data sets is not known in advance. In order to infer the one-to-one correspondence of samples between two views, a novel concept of multi-view matching is proposed. The matching algorithm is completely data-driven and is demonstrated in several applications, such as matching of metabolites between humans and mice, and matching of sentences between documents in two languages.
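As a toy illustration of matching without known sample correspondence, the sketch below alternates between fitting CCA on the current pairing of two equally sized views and re-solving the assignment problem in the shared projection space. The CCA-plus-Hungarian combination, the fixed number of iterations, and the function name are illustrative assumptions, not the thesis' exact algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cross_decomposition import CCA


def match_views(x, y, n_components=2, n_iter=10):
    """Infer a one-to-one correspondence between the rows of two views
    x and y (NumPy arrays with the same number of rows). Alternates
    (i) fitting CCA on the current pairing and (ii) re-solving the
    assignment problem on distances in the shared CCA space."""
    perm = np.arange(len(y))                   # start from an arbitrary pairing
    for _ in range(n_iter):
        cca = CCA(n_components=n_components).fit(x, y[perm])
        xc, yc = cca.transform(x, y)           # project both views (original order)
        cost = np.linalg.norm(xc[:, None, :] - yc[None, :, :], axis=2)
        _, perm = linear_sum_assignment(cost)  # Hungarian-style assignment
    return perm                                # perm[i]: row of y matched to x[i]
```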

Relevance: 20.00%

Publisher:

Abstract:

This thesis analyzes how matching takes place in the Finnish labor market from three different angles. The Finnish labor market has undergone severe structural changes following the economic crisis of the early 1990s. The labor market has had problems adjusting to these changes, and high and persistent unemployment has followed. In this thesis I analyze whether matching problems, and in particular changes in matching, can explain some of this persistence. The thesis consists of three essays. In the first essay, Finnish Evidence of Changes in the Labor Market Matching Process, the matching process in the Finnish labor market is analyzed. The key finding is that the matching process changed thoroughly between the booming 1980s and the post-crisis period. The importance of the number of unemployed, and in particular of the long-term unemployed, for the matching process has vanished. More unemployed do not increase matching as theory predicts, but rather the opposite. In the second essay, The Aggregate Matching Function and Directed Search: Finnish Evidence, stock-flow matching is studied as a potential microfoundation of the aggregate matching function. In the essay I show that the newly unemployed match mainly with the stock of vacancies, while the longer-term unemployed match with the inflow of vacancies. When aggregating, I still find evidence of the traditional aggregate matching function. This could explain the huge support the aggregate matching function has received despite its odd randomness assumption. The third essay, How do Registered Job Seekers really match? Finnish occupational level Evidence, studies matching for nine occupational groups and finds that very different matching problems exist for different occupations. This essay also deals with misspecification stemming from non-corresponding variables by introducing a completely new set of variables. The new outflow measure is vacancies filled by registered job seekers, and it is matched on the supply side by the corresponding measure, registered job seekers.
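For readers outside labor economics, the aggregate matching function referred to throughout is conventionally written in Cobb-Douglas form, with new hires produced from the stocks of unemployed job seekers and open vacancies; the notation below is the textbook convention, not the thesis' estimated specification:
\[
M_t = A\,U_t^{\alpha}\,V_t^{\beta},
\]
where \(M_t\) is the number of matches (hires) in period \(t\), \(U_t\) the number of unemployed job seekers, \(V_t\) the number of vacancies, and \(A,\alpha,\beta\) are parameters to be estimated. The "randomness" assumption mentioned above is that, within this aggregate, any job seeker is essentially equally likely to meet any vacancy, an assumption that stock-flow matching relaxes.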