957 results for search engines


Relevance: 60.00%

Abstract:

With the growing size and variety of social media files on the web, it is becoming critical to organize them efficiently into clusters for further processing. This paper presents a novel scalable constrained document clustering method that harnesses the power of search engines capable of dealing with large text data. Instead of calculating the distance between a document and all of the clusters' centroids, a neighborhood of best cluster candidates is chosen using a document ranking scheme. To make the method faster and less memory-dependent, in-memory and in-database processing are combined in a semi-incremental manner. The method has been extensively tested in the social event detection application. Empirical analysis shows that the proposed method is efficient in both computation and memory usage while producing notable accuracy.
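To make the candidate-shortlisting idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual ranking scheme or its in-database component): cluster centroids are indexed by term, an incoming document is scored only against clusters that share terms with it, and the full similarity is computed for that small shortlist rather than for every centroid.

```python
# Minimal sketch: shortlist candidate clusters with an inverted index
# instead of computing similarity to every centroid. Illustrative only;
# not the paper's exact ranking scheme or in-database implementation.
from collections import defaultdict
import math


def build_index(centroids):
    """centroids: {cluster_id: {term: weight}} -> inverted index {term: [(cluster_id, weight)]}."""
    index = defaultdict(list)
    for cid, terms in centroids.items():
        for term, w in terms.items():
            index[term].append((cid, w))
    return index


def shortlist_clusters(doc_terms, index, k=5):
    """Score only clusters that share at least one term with the document."""
    scores = defaultdict(float)
    for term, w in doc_terms.items():
        for cid, cw in index.get(term, []):
            scores[cid] += w * cw          # accumulate a dot-product-style score
    return sorted(scores, key=scores.get, reverse=True)[:k]


def assign(doc_terms, centroids, index, k=5):
    """Compute full cosine similarity only for the k shortlisted clusters."""
    def cosine(a, b):
        dot = sum(a[t] * b.get(t, 0.0) for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    candidates = shortlist_clusters(doc_terms, index, k) or list(centroids)
    return max(candidates, key=lambda cid: cosine(doc_terms, centroids[cid]))
```

Because the shortlist is produced by a ranking step rather than exhaustive comparison, the per-document cost grows with the number of candidate clusters rather than with the total number of clusters.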

Relevance: 60.00%

Abstract:

Objective: To explore, in depth, the literature for evidence supporting asthma interventions delivered within primary schools and to identify any "gaps" in this research area. Methods: A literature search using electronic search engines (i.e. Medline, PubMed, Education Resources Information Center (ERIC), International Pharmaceutical Abstracts (IPA), Cumulative Index to Nursing and Allied Health Literature (CINAHL), Embase and Informit) and the search terms "asthma", "asthma intervention" and "school-based asthma education program" (and derivatives of these keywords) was conducted. Results: Twenty-three articles met the inclusion criteria; of these, eight were randomised controlled trials. There was much variety in the type, content, delivery and outcome measures of these 23 studies. The most common intervention type was asthma education delivery. Most studies demonstrated improvement in clinical and humanistic markers, for example asthma symptoms, medication use (a decrease in reliever use or in the need for rescue oral steroids), inhaler technique and spacer use competency, lung function and quality of life. Relatively few studies explored the effect of the intervention on academic outcomes. Most studies did not report on the sustainability or cost-effectiveness of the intervention tested. Another drawback in the literature was the lack of detail about the interventions and the inconsistency of the instruments selected for measuring outcomes. Conclusion: School-based asthma interventions, regardless of their heterogeneity, have positive clinical, humanistic, health-economic and academic outcomes.

Relevance: 60.00%

Abstract:

Katharine Hepburn's entertaining portrayal of reference librarian Bunny Watson in Desk Set (1957) moves her character from apprehension about new technology to an understanding that it is simply another tool. This article outlines the impact of technology on academic legal research. It examines the nature of legal research and the doctrinal method, the importance of law libraries (and librarians) in legal research, and the role and implications of the Internet and web search engines for legal research methods and education.

Relevance: 60.00%

Abstract:

Although the external influence of scholars has usually been approximated by publication and citation counts, the array of scholarly activities is far more extensive. Today, new technologies, in particular Internet search engines, allow more accurate measurement of scholars' influence on societal discourse. Hence, in this article, we analyse the relation between the internal and external influence of 723 top economists, using the number of pages indexed by Google and Bing as a measure of external influence. We identify not only a small association between these scholars' internal and external influence but also a correlation between internal influence, as captured by receipt of such major academic awards as the Nobel Prize and the John Bates Clark Medal, and the external prominence of the top 100 researchers (JEL codes: A11, A13, Z18).

Relevance: 60.00%

Abstract:

The only effective and scalable way to regulate the actions of people on the internet is through online intermediaries. These are the institutions that facilitate communication: internet service providers, search engines, content hosts, and social networks. Governments, private firms, and civil society organisations are increasingly seeking to influence these intermediaries to take greater responsibility for preventing or responding to IP infringements. Around the world, intermediaries are increasingly subject to a variety of obligations to help enforce IP rights, ranging from informal social and governmental pressure, to industry codes and private negotiated agreements, to formal legislative schemes. This paper provides an overview of this emerging shift in regulatory approaches, away from legal liability and towards increased responsibilities for intermediaries. This shift straddles two different potential futures: an optimistic set of more effective, more efficient mechanisms for regulating user behaviour, and a dystopian vision of rule by algorithm and private power, without the legitimising influence of the rule of law.

Relevance: 60.00%

Abstract:

Australian Media Law details and explains the complex case law, legislation and regulations governing media practice in areas as diverse as journalism, advertising, multimedia and broadcasting. It examines, in a clear and accessible format, the issues affecting traditional forms of media such as television, radio, film and newspapers, as well as more recent forms such as the internet, online forums and digital technology. New additions to the fifth edition include:
- the implications of new anti-terrorism legislation for journalists;
- developments in privacy law, including Law Reform recommendations for a statutory cause of action to protect personal privacy in Australia and the expanding privacy jurisprudence in the United Kingdom and New Zealand;
- the liability of internet search engines and service providers for defamation;
- the High Court decision in Roadshow v iiNet and the position of internet service providers in relation to copyright infringement via their services;
- new suppression order regimes;
- statutory reforms providing journalists with a rebuttable presumption of non-disclosure when called upon to reveal their sources in a court of law;
- recent developments regarding whether journalists can use electronic devices to collect and disseminate information about court proceedings;
- contempt committed by jurors via social media; and
- an examination of recent decisions on defamation, confidentiality, vilification, copyright and contempt.

Relevance: 60.00%

Abstract:

This paper investigates the effect that text pre-processing approaches have on the estimation of the readability of web pages. Readability has been highlighted in previous work as an important aspect of web search result personalisation. The most widely used text readability measures rely on surface-level characteristics of the text, such as the length of words and sentences. We demonstrate that different tools for extracting text from web pages lead to very different estimations of readability. This has an important implication for search engines, because search result personalisation strategies that take users' reading ability into account may fail if incorrect readability estimates are computed.
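As an illustration of how surface-level measures interact with text extraction, here is a minimal sketch (an assumed example, not a tool used in the paper) that computes the standard Flesch Reading Ease score from whatever text an extractor returns; the syllable counter is a crude vowel-group heuristic, and the two input strings are invented to show how leftover navigation boilerplate shifts the estimate.

```python
# Minimal sketch: a surface-level readability score (Flesch Reading Ease)
# computed from extracted page text. The syllable count is a crude
# vowel-group heuristic; real tools differ, which is exactly why the
# choice of extraction and pre-processing shifts the estimate.
import re


def count_syllables(word):
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))


def flesch_reading_ease(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * len(words) / len(sentences)
            - 84.6 * syllables / len(words))


# Two hypothetical extractions of the same page: one keeps navigation
# boilerplate, one keeps only the article body. The scores differ.
with_boilerplate = "Home. Login. Search. The cat sat on the mat. It was warm."
body_only = "The cat sat on the mat. It was warm."
print(flesch_reading_ease(with_boilerplate), flesch_reading_ease(body_only))
```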

Relevance: 60.00%

Abstract:

Background: The Internet has recently made possible the free global availability of scientific journal articles. Open Access (OA) can occur either via OA scientific journals or via authors posting manuscripts of articles published in subscription journals in open web repositories. So far there have been few systematic studies showing how extensive OA is, in particular studies covering all fields of science. Methodology/Principal Findings: The proportion of peer-reviewed scholarly journal articles that are openly available in full text on the web was studied using a random sample of 1837 titles and a web search engine. Of articles published in 2008, 8.5% were freely available at the publishers' sites. For an additional 11.9%, free manuscript versions could be found using search engines, making the overall OA percentage 20.4%. Chemistry (13%) had the lowest overall share of OA, Earth Sciences (33%) the highest. In medicine, biochemistry and chemistry, publishing in OA journals was more common; in all other fields, author-posted manuscript copies dominated the picture. Conclusions/Significance: The results show that OA already has a significant positive impact on the availability of the scientific journal literature and that there are large differences between scientific disciplines in its uptake. Given the lack of awareness of OA publishing among scientists in most fields outside physics, the results should be of general interest to all scholars. The results should also interest academic publishers, who need to take OA into account in their business strategies and copyright policies, as well as research funders who, like the NIH, are starting to require OA availability of the results of research they fund. The method and search tools developed also offer a good basis for more in-depth as well as longitudinal studies.

Relevance: 60.00%

Abstract:

In pay-per-click sponsored search auctions, which are currently used extensively by search engines, the auction for a keyword involves a certain number of advertisers (say k) competing for the available slots (say m) in which to display their ads. This auction is typically conducted over a number of rounds (say T). There are click probabilities mu_ij associated with agent-slot pairs. The search engine's goal is to maximize social welfare, for example the sum of the advertisers' values. The search engine knows neither the true value an advertiser places on a click to her ad nor the click probabilities mu_ij. A key problem for the search engine is therefore to learn these during the T rounds of the auction while also ensuring that the auction mechanism is truthful. Mechanisms addressing such learning and incentive issues have recently been introduced and are referred to as multi-armed bandit (MAB) mechanisms. When m = 1, characterizations of truthful MAB mechanisms are available in the literature, and it has been shown that the regret of such mechanisms is O(T^{2/3}). In this paper, we seek to derive a characterization in the realistic but nontrivial general case when m > 1 and obtain several interesting results.
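For intuition on the single-slot case, the sketch below is a toy, assumption-laden illustration (not one of the mechanisms characterized in the paper, and with the payment rule needed for truthfulness omitted): click probabilities are estimated in a dedicated exploration phase and the slot is then committed to the advertiser with the highest estimated bid-times-CTR, the exploration-separated structure that is associated with the O(T^{2/3}) regret bound for m = 1.

```python
# Toy sketch of an exploration-separated scheme for a single slot (m = 1):
# estimate click probabilities in dedicated exploration rounds, then commit
# to the advertiser with the highest estimated (bid x CTR). Illustrative only;
# the payment rule required for truthfulness is omitted.
import random


def run_auction(bids, true_ctr, T):
    k = len(bids)
    clicks = [0] * k
    shows = [0] * k
    explore_rounds = int(T ** (2 / 3))          # exploration budget on the order of T^(2/3)

    for t in range(explore_rounds):             # round-robin exploration
        i = t % k
        shows[i] += 1
        clicks[i] += random.random() < true_ctr[i]

    ctr_hat = [clicks[i] / shows[i] if shows[i] else 0.0 for i in range(k)]
    winner = max(range(k), key=lambda i: bids[i] * ctr_hat[i])

    welfare = 0.0
    for _ in range(T - explore_rounds):         # commit to the estimated-best advertiser
        welfare += bids[winner] * (random.random() < true_ctr[winner])
    return winner, ctr_hat, welfare


print(run_auction(bids=[2.0, 1.5, 1.0], true_ctr=[0.05, 0.10, 0.02], T=10_000))
```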

Relevance: 60.00%

Abstract:

Users can rarely reveal their information need to a search engine in full detail within one or two words, so search engines need to "hedge their bets" and present diverse results within the precious ten response slots. Diversity in ranking has attracted much recent interest. Most existing solutions estimate the marginal utility of an item given a set of items already in the response, and then use variants of greedy set cover. Others design graphs with the items as nodes and choose diverse items based on visit rates (PageRank). Here we introduce a radically new and natural formulation of diversity as finding centers in resistive graphs. Unlike in PageRank, we do not specify the edge resistances (equivalently, conductances) and ask for node visit rates. Instead, we look for a sparse set of center nodes such that the effective conductance from the centers to the rest of the graph has maximum entropy. We give a cogent semantic justification for turning PageRank on its head in this way. In marked deviation from prior work, our edge resistances are learnt from training data. Inference and learning are NP-hard, but we give practical solutions. In extensive experiments with subtopic retrieval, social network search, and document summarization, our approach convincingly surpasses recently published diversity algorithms such as subtopic cover, max-marginal relevance (MMR), Grasshopper, DivRank, and SVMdiv.
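For contrast with the resistive-graph formulation, here is a minimal sketch of the greedy marginal-utility style of diversification represented by MMR, which the abstract cites as a baseline; the function and its `sim` argument are illustrative assumptions, not the paper's method.

```python
# Minimal sketch of maximal marginal relevance (MMR), one of the greedy
# marginal-utility baselines mentioned above -- not the resistive-graph
# method proposed in the paper. `sim` is any similarity function in [0, 1].
def mmr(query, docs, sim, k=10, lam=0.7):
    selected = []
    candidates = list(docs)
    while candidates and len(selected) < k:
        def score(d):
            # relevance to the query, penalised by redundancy with items already chosen
            redundancy = max((sim(d, s) for s in selected), default=0.0)
            return lam * sim(query, d) - (1.0 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected
```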

Relevance: 60.00%

Abstract:

The adaptation of traditional newspapers to new digital media and its interface, far from being a mere technical transformation, has contributed to a gradual change in the media themselves and their audiences. Using a sample comprising the top general-information paid newspaper in each of the 28 countries of the European Union, this research carried out an analysis based on 17 indicators divided into 4 categories. The aim is to identify the transformations that the implementation of digital media has brought to the top European newspapers. In general terms, the results show that most dailies have managed to keep their leadership in the online environment as well. Moreover, an emerging group of global media outlets is growing, built on preexisting national media. Digital and mobile media have also contributed to new consumption habits, in which users read more superficially and sporadically. The audience uses several formats at a time, and digital devices already bring the largest share of users to many media outlets. The new information windows created by the Internet (search engines, social networks, etc.) are also contributing to changes in professional work routines.

Relevance: 60.00%

Abstract:

Over a century of fishery and oceanographic research conducted along the Atlantic coast of the United States has resulted in many publications using unofficial, and therefore unclear, geographic names for certain study areas. Such improper usage, besides being unscholarly, has led, and can lead, to identification problems for readers unfamiliar with the area. Even worse, electronic databases and search engines can return incomplete or confusing references when improper wording is used. The two terms most often used improperly are “Middle Atlantic Bight” and “South Atlantic Bight.” In general, the term “Middle Atlantic Bight” usually refers to an imprecise coastal area off the middle Atlantic states of New York, New Jersey, Delaware, Maryland, and Virginia, and the term “South Atlantic Bight” refers to the area off the southeastern states of North Carolina, South Carolina, Georgia, and Florida’s east coast.

Relevance: 60.00%

Abstract:

Ideally, one would like to perform image search using an intuitive and friendly approach. Many existing image search engines, however, present users with sets of images arranged only in some default order on the screen, typically relevance to the query. While this certainly has its advantages, arguably a more flexible and intuitive way would be to sort images into arbitrary structures such as grids, hierarchies, or spheres, so that images that are visually or semantically alike are placed together. This paper focuses on designing such a navigation system for image browsers. This is a challenging task because an arbitrary layout structure makes it difficult, if not impossible, to compute cross-similarities between images and structure coordinates, the main ingredient of traditional layout approaches. For this reason, we resort to a recently developed machine learning technique: kernelized sorting. It is a general technique for matching pairs of objects from different domains without requiring cross-domain similarity measures and hence elegantly allows sorting images into arbitrary structures. Moreover, we extend it so that some images can be preselected, for instance to form the top of a hierarchy, allowing users to subsequently navigate through the search results in the lower levels in an intuitive way. Copyright 2010 ACM.
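As a rough, hypothetical sketch of how kernelized sorting can be approximated in practice (an iterative linear-assignment scheme, not the authors' implementation or their preselection extension), the code below matches a kernel over image features to a kernel over layout coordinates so that visually similar images are assigned to nearby positions; the kernel matrices K and L are assumed to be given.

```python
# Rough sketch of kernelized sorting via iterative linear assignment:
# match an image-feature kernel K to a layout-coordinate kernel L so that
# similar images land on nearby grid cells. Illustrative approximation only.
import numpy as np
from scipy.optimize import linear_sum_assignment


def center(K):
    """Center a kernel matrix (as in HSIC)."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H


def kernelized_sorting(K, L, iters=30, seed=0):
    """K: n x n kernel over images, L: n x n kernel over layout positions.
    Returns perm with perm[i] = grid position assigned to image i."""
    n = K.shape[0]
    K, L = center(K), center(L)
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n)
    for _ in range(iters):
        P = np.zeros((n, n))
        P[np.arange(n), perm] = 1.0            # P[i, j] = 1 iff image i sits at position j
        profit = K @ P @ L                     # linearization of tr(K P L P^T)
        _, perm = linear_sum_assignment(-profit)   # maximize total profit
    return perm
```

A layout kernel L could, for example, be an RBF kernel over the 2-D coordinates of grid cells, which is what encodes the notion that nearby positions should hold similar images.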

Relevance: 60.00%

Abstract:

The discipline of Artificial Intelligence (AI) was born in the summer of 1956 at Dartmouth College in Hanover, New Hampshire. Half a century has passed, and AI has turned into an important field whose influence on our daily lives can hardly be overestimated. The original view of intelligence as a computer program - a set of algorithms to process symbols - has led to many useful applications now found in internet search engines, voice recognition software, cars, home appliances, and consumer electronics, but it has not yet contributed significantly to our understanding of natural forms of intelligence. Since the 1980s, AI has expanded into a broader study of the interaction between the body, brain, and environment, and how intelligence emerges from such interaction. This advent of embodiment has provided an entirely new way of thinking that goes well beyond artificial intelligence proper, to include the study of intelligent action in agents other than organisms or robots. For example, it supplies powerful metaphors for viewing corporations, groups of agents, and networked embedded devices as intelligent and adaptive systems acting in highly uncertain and unpredictable environments. In addition to giving us a novel outlook on information technology in general, this broader view of AI also offers unexpected perspectives on how to think about ourselves and the world around us. In this chapter, we briefly review the turbulent history of AI research, point to some of its current trends, and outline challenges that the AI of the 21st century will have to face. © Springer-Verlag Berlin Heidelberg 2007.

Relevance: 60.00%

Abstract:

Urquhart, C., Thomas, R., Spink, S., Fenton, R., Yeoman, A., Lonsdale, R., Armstrong, C., Banwell, L., Ray, K., Coulson, G. & Rowley, J. (2005). Student use of electronic information services in further education. International Journal of Information Management, 25(4), 347-362. Sponsorship: JISC.