37 results for google book search
at Queensland University of Technology - ePrints Archive
Abstract:
In 2005, the Association of American Publishers (AAP) and the Authors Guild (AG) sued Google for ‘massive copyright infringement’ for the mass digitization of books for the Google Book Search Project. In 2008, the parties reached a settlement, pending court approval. If approved, the settlement could have far-reaching consequences for authors, libraries, educational institutions and the reading public. In this article, I provide an overview of the Google Book Search Settlement. Firstly, I explain the Google Book Search Project, the legal questions raised by the Project and the lawsuit brought against Google. Secondly, I examine the terms of the Settlement Agreement, including what rights were granted between the parties and what rights were granted to the general public. Finally, I consider the implications of the settlement for Australia. The Settlement Agreement, and consequently the broader scope of the Google Book Search Project, is currently limited to the United States. In this article I consider whether the Project could be extended to Australia at a later date, how Google might go about doing this, and the implications of such an extension under the Copyright Act 1968 (Cth). I argue that without prior agreements with rightholders, our limited exceptions to copyright infringement mean that Google is unlikely to be able to extend the full scope of the Project to Australia without infringing copyright.
Abstract:
The legality of the operation of Google’s search engine, and its liability as an Internet intermediary, has been tested in various jurisdictions on various grounds. In Australia, there was an ultimately unsuccessful case against Google under the Australian Consumer Law relating to how it presents results from its search engine. Despite this failed claim, several complex issues were not adequately addressed in the case, including whether Google sufficiently distinguishes between the different parts of its search results page, so as not to mislead or deceive consumers. This article seeks to address this question of consumer confusion by drawing on empirical survey evidence of Australian consumers’ understanding of Google’s search results layout. This evidence, the first of its kind in Australia, indicates some level of consumer confusion. The implications for future legal proceedings against Google in Australia and in other jurisdictions are discussed.
Abstract:
This book documents and evaluates the growing consumer revolution against digital copyright law, and makes a unique theoretical contribution to the debate surrounding this issue. With a focus on recent US copyright law, the book charts the consumer rebellion against the Sonny Bono Copyright Term Extension Act 1998 (US) and the Digital Millennium Copyright Act 1998 (US). The author explores the significance of key judicial rulings and considers legal controversies over new technologies, such as the iPod, TiVo, Sony Playstation II, Google Book Search, and peer-to-peer networks. The book also highlights cultural developments, such as the emergence of digital sampling and mash-ups, the construction of the BBC Creative Archive, and the evolution of the Creative Commons. Digital Copyright and the Consumer Revolution will be of prime interest to academics, law students and lawyers interested in the ramifications of copyright law, as well as policymakers given its focus upon recent legislative developments and reform proposals. The book will also appeal to librarians, information managers, creative artists, consumers, technology developers, and other users of copyright material.
Abstract:
INEX investigates focused retrieval from structured documents by providing large test collections of structured documents, uniform evaluation measures, and a forum for organizations to compare their results. This paper reports on the INEX 2011 evaluation campaign, which consisted of five active tracks: Books and Social Search, Data Centric, Question Answering, Relevance Feedback, and Snippet Retrieval. INEX 2011 saw a range of new tasks and tracks, such as Social Book Search, Faceted Search, Snippet Retrieval, and Tweet Contextualization.
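As a purely illustrative aside, the sketch below scores a hypothetical retrieval run against a qrels-style set of relevance judgements using precision at k, the kind of uniform evaluation such campaigns rely on. The data structures, topic identifiers and document ids are assumptions for illustration, not the INEX test collections or evaluation software.

```python
# Purely illustrative: scoring a hypothetical retrieval run against
# qrels-style relevance judgements with precision@k. The structures below
# are assumptions, not the INEX formats or evaluation tooling.

def precision_at_k(run, qrels, k=10):
    """Mean precision@k over all topics in the run."""
    scores = []
    for topic_id, ranked_docs in run.items():
        relevant = qrels.get(topic_id, set())
        hits = sum(1 for doc_id in ranked_docs[:k] if doc_id in relevant)
        scores.append(hits / k)
    return sum(scores) / len(scores) if scores else 0.0

# Toy run: two topics with ranked document ids and judged-relevant sets.
run = {"2011-01": ["b12", "b7", "b33"], "2011-02": ["b5", "b9", "b2"]}
qrels = {"2011-01": {"b7", "b33"}, "2011-02": {"b9"}}
print(precision_at_k(run, qrels, k=3))  # 0.5 for this toy data
```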
Abstract:
INEX investigates focused retrieval from structured documents by providing large test collections of structured documents, uniform evaluation measures, and a forum for organizations to compare their results. This paper reports on the INEX 2013 evaluation campaign, which consisted of four activities addressing three themes: searching professional and user generated data (Social Book Search track); searching structured or semantic data (Linked Data track); and focused retrieval (Snippet Retrieval and Tweet Contextualization tracks). INEX 2013 was an exciting year for INEX in which we consolidated the collaboration with (other activities in) CLEF and for the second time ran our workshop as part of the CLEF labs in order to facilitate knowledge transfer between the evaluation forums. This paper gives an overview of all the INEX 2013 tracks, their aims and tasks, and the test collections that were built, and presents an initial analysis of the results.
Abstract:
INEX investigates focused retrieval from structured documents by providing large test collections of structured documents, uniform evaluation measures, and a forum for organizations to compare their results. This paper reports on the INEX 2014 evaluation campaign, which consisted of three tracks: The Interactive Social Book Search Track investigated user information seeking behavior when interacting with various sources of information, for realistic task scenarios, and how the user interface impacts search and the search experience. The Social Book Search Track investigated the relative value of authoritative metadata and user-generated content for search and recommendation using a test collection with data from Amazon and LibraryThing, including user profiles and personal catalogues. The Tweet Contextualization Track investigated tweet contextualization, helping a user to understand a tweet by providing a short background summary generated from relevant Wikipedia passages aggregated into a coherent summary. INEX 2014 was an exciting year for INEX in which we for the third time ran our workshop as part of the CLEF labs. This paper gives an overview of all the INEX 2014 tracks, their aims and tasks, the test collections that were built, and the participants, and presents an initial analysis of the results.
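By way of illustration only, the sketch below combines an authoritative-metadata score with a user-generated-content score when ranking books, in the spirit of the Social Book Search task described above; the field names, the linear weighting and the toy data are assumptions, not the structure of the Amazon/LibraryThing collection.

```python
# Hypothetical sketch: combining authoritative metadata with user-generated
# content when scoring books. Field names and the linear weighting are
# illustrative assumptions, not the Social Book Search collection schema.

from dataclasses import dataclass, field

@dataclass
class Book:
    title: str
    publisher_blurb: str                               # authoritative metadata
    user_tags: list = field(default_factory=list)      # user-generated content
    user_reviews: list = field(default_factory=list)

def term_overlap(query, text):
    """Fraction of query terms that appear in the text."""
    q = set(query.lower().split())
    t = set(text.lower().split())
    return len(q & t) / len(q) if q else 0.0

def score(book, query, w_meta=0.5, w_ugc=0.5):
    meta = term_overlap(query, book.title + " " + book.publisher_blurb)
    ugc = term_overlap(query, " ".join(book.user_tags + book.user_reviews))
    return w_meta * meta + w_ugc * ugc

b = Book("The Night Circus", "A duel between two magicians",
         user_tags=["fantasy", "circus"], user_reviews=["atmospheric fantasy"])
print(score(b, "atmospheric fantasy circus"))  # metadata and tags both contribute
```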
Abstract:
INEX investigates focused retrieval from structured documents by providing large test collections of structured documents, uniform evaluation measures, and a forum for organizations to compare their results. This paper reports on the INEX'12 evaluation campaign, which consisted of five tracks: Linked Data, Relevance Feedback, Snippet Retrieval, Social Book Search, and Tweet Contextualization. INEX'12 was an exciting year for INEX in which we joined forces with CLEF and for the first time ran our workshop as part of the CLEF labs in order to facilitate knowledge transfer between the evaluation forums.
Abstract:
AIMS: The Framework Convention on Tobacco Control (FCTC) requires nations that have ratified the convention to ban all tobacco advertising and promotion. In the face of these restrictions, tobacco packaging has become the key promotional vehicle for the tobacco industry to interest smokers and potential smokers in tobacco products. This paper reviews available research into the probable impact of mandatory plain packaging and internal tobacco industry statements about the importance of packs as promotional vehicles. It critiques legal objections raised by the industry about plain packaging violating laws and international trade agreements. METHODS: Searches for available evidence were conducted within the internal tobacco industry documents through the online document archives; tobacco industry trade publications; research literature through the Medline and Business Source Premier databases; and grey literature including government documents, research reports and non-governmental organization papers via the Google internet search engine. RESULTS: Plain packaging of all tobacco products would remove a key remaining means for the industry to promote its products to billions of the world's smokers and future smokers. Governments have required large surface areas of tobacco packs to be used exclusively for health warnings without legal impediment or need to compensate tobacco companies. CONCLUSIONS: Requiring plain packaging is consistent with the intention to ban all tobacco promotions. There is no impediment in the FCTC to interpreting tobacco advertising and promotion to include tobacco packs.
Abstract:
For many, particularly in the Anglophone world and Western Europe, it may be obvious that Google has a monopoly over online search and advertising and that this is an undesirable state of affairs, due to Google's ability to mediate information flows online. The baffling question may be why governments and regulators are doing little to nothing about this situation, given the increasingly pivotal importance of the internet and free flowing communications in our lives. However, the law concerning monopolies, namely antitrust or competition law, works in a way that the general public may find less intuitive. Monopolies themselves are not illegal. Conduct that is unlawful, i.e. abuse of that market power, is defined by a complex set of rules and revolves principally around economic harm suffered due to anticompetitive behavior. However, the effect of information monopolies over search, such as Google’s, is more than just economic, and competition law does not address this. Furthermore, Google’s collection and analysis of user data and its portfolio of related services make it difficult for others to compete. Such a situation may also explain why Google’s established search rivals, Bing and Yahoo, have not managed to provide services that are as effective or popular as Google’s own (on this issue see also the texts by Dirk Lewandowski and Astrid Mager in this reader). Users, however, are not entirely powerless. Google's business model rests, at least partially, on them – especially the data collected about them. If they stop using Google, then Google is nothing.
Abstract:
In 2008, a collaborative partnership between Google and academia launched the Google Online Marketing Challenge (hereinafter Google Challenge), perhaps the world’s largest in-class competition for higher education students. In just two years, almost 20,000 students from 58 countries participated in the Google Challenge. The Challenge gives undergraduate and graduate students hands-on experience with the world’s fastest growing advertising mechanism, search engine advertising. Funded by Google, students develop an advertising campaign for a small to medium sized enterprise and manage the campaign over three consecutive weeks using the Google AdWords platform. This article explores the Challenge as an innovative pedagogical tool for marketing educators. Based on the experiences of three instructors in Australia, Canada and the United States, this case study discusses the opportunities and challenges of integrating this dynamic problem-based learning approach into the classroom.
Abstract:
Search engines have forever changed the way people access and discover knowledge, allowing information about almost any subject to be quickly and easily retrieved within seconds. As increasingly more material becomes available electronically, the influence of search engines on our lives will continue to grow. This presents the problem of how to find what information is contained in each search engine, what bias a search engine may have, and how to select the best search engine for a particular information need. This research introduces a new method, search engine content analysis, in order to solve the above problem. Search engine content analysis is a new development of the traditional information retrieval task of collection selection, which deals with general information repositories. Current research in collection selection relies on full access to the collection or estimations of the size of the collections. In addition, collection descriptions are often represented as term occurrence statistics. An automatic ontology learning method is developed for search engine content analysis, which trains an ontology with world knowledge of hundreds of different subjects in a multilevel taxonomy. This ontology is then mined to find important classification rules, and these rules are used to perform an extensive analysis of the content of the largest general-purpose Internet search engines in use today. Instead of representing collections as a set of terms, which commonly occurs in collection selection, they are represented as a set of subjects, leading to a more robust representation of information and a reduced impact of synonymy. The ontology-based method was compared with ReDDE (Relevant Document Distribution Estimation method for resource selection) using the standard R-value metric, with encouraging results. ReDDE is the current state-of-the-art collection selection method, which relies on collection size estimation. The method was also used to analyse the content of the most popular search engines in use today, including Google and Yahoo. In addition, several specialist search engines, such as Pubmed and the U.S. Department of Agriculture, were analysed. In conclusion, this research shows that the ontology-based method mitigates the need for collection size estimation.
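A minimal sketch of the general idea of subject-based collection selection follows, assuming a hypothetical query-to-subject mapping and a simple overlap score; it is not the ontology-learning method developed in this research, nor the ReDDE baseline.

```python
# Hypothetical sketch of subject-based collection selection: each search
# engine (collection) is described by a set of subjects rather than raw term
# statistics, and collections are ranked by overlap with the query's subjects.
# The subject labels and the query-to-subject rules are illustrative
# assumptions, not the learned ontology described in the thesis.

collections = {
    "general_engine_a": {"news", "sport", "health", "technology"},
    "general_engine_b": {"news", "shopping", "travel", "technology"},
    "medical_engine":   {"health", "medicine", "genetics"},
    "agriculture_db":   {"agriculture", "crops", "soil"},
}

def classify_query(query):
    """Map query terms to subjects (a stand-in for the ontology's rules)."""
    subject_rules = {"cancer": "health", "gene": "genetics", "wheat": "crops"}
    return {subject_rules[t] for t in query.lower().split() if t in subject_rules}

def rank_collections(query):
    subjects = classify_query(query)
    scored = [(name, len(subjects & desc)) for name, desc in collections.items()]
    return sorted(scored, key=lambda x: x[1], reverse=True)

print(rank_collections("cancer gene therapy"))  # medical_engine ranks first
```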
Abstract:
We investigate whether the two zero-cost portfolios, SMB and HML, have the ability to predict economic growth for the markets investigated in this paper. Our findings show that there are only a limited number of cases in which the coefficients are positive, and significance is achieved in an even more limited number of cases. Our results are in stark contrast to Liew and Vassalou (2000), who find the coefficients to be generally positive and of a similar magnitude. We go a step further and also employ the methodology of Lakonishok, Shleifer and Vishny (1994), and once again fail to support the risk-based hypothesis of Liew and Vassalou (2000). In sum, we argue that the search for a robust economic explanation for the firm size and book-to-market equity effects needs sustained effort, as these two zero-cost portfolios do not represent economically relevant risk.
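For context, predictive regressions in this line of work typically take the form below (following Liew and Vassalou, 2000), with future economic growth regressed on lagged factor returns; the exact specification, horizon and controls used in this paper may differ.

```latex
% Illustrative Liew-and-Vassalou-style predictive regression; the paper's
% exact specification, horizon and controls may differ.
\begin{equation}
  \mathit{GDPgr}_{t,\,t+1} = a + b\,\mathit{MKT}_{t-1,\,t}
    + c\,\mathit{SMB}_{t-1,\,t} + d\,\mathit{HML}_{t-1,\,t} + e_{t,\,t+1}
\end{equation}
```

Positive and significant coefficients $c$ and $d$ would support the risk-based interpretation of SMB and HML, which is the interpretation the abstract reports failing to confirm.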