44 results for Search engines
in CentAUR: Central Archive University of Reading - UK
Abstract:
Search has become a hot topic in Internet computing, with rival search engines battling to become the de facto Web portal, harnessing search algorithms to wade through information on a scale undreamed of by early information retrieval (IR) pioneers. This article examines how search has matured from its roots in specialized IR systems to become a key foundation of the Web. The authors describe new challenges posed by the Web's scale, and show how search is changing the nature of the Web as much as the Web has changed the nature of search.
Abstract:
This study examines the evolution of prices in markets with Internet price-comparison search engines. The empirical study analyzes laboratory data on prices available to informed consumers, for two industry sizes and two conditions on the sample (complete and incomplete). Distributions are typically bimodal. One of the two modes of the distribution, corresponding to monopoly pricing, attracts such pricing strategies increasingly over time. The second one, corresponding to interior pricing, follows a decreasing trend. Monopoly pricing can serve as a means of insurance against more competitive (but riskier) behavior. In fact, experimental subjects who initially earn low profits due to interior pricing are more likely to switch to monopoly pricing than subjects who experience good returns from the start.
Abstract:
This article is concerned with the liability of search engines for algorithmically produced search suggestions, such as through Google’s ‘autocomplete’ function. Liability in this context may arise when automatically generated associations have an offensive or defamatory meaning, or may even induce infringement of intellectual property rights. The increasing number of cases that have been brought before courts all over the world puts forward questions on the conflict between the fundamental freedoms of speech and access to information on the one hand, and the personality rights of individuals (under a broader right of informational self-determination) on the other. In the light of the recent judgment of the Court of Justice of the European Union (EU) in Google Spain v AEPD, this article concludes that many requests for removal of suggestions including private individuals’ information will be successful on the basis of EU data protection law, even absent prejudice to the person concerned.
Abstract:
Search engines exploit the Web's hyperlink structure to help infer information content. The new phenomenon of personal Web logs, or 'blogs', encourages more extensive annotation of Web content. If their resulting link structures bias the Web-crawling applications that search engines depend upon, there are implications for another form of annotation rapidly on the rise, the Semantic Web. We conducted a Web crawl of 160,000 pages in which the link structure of the Web was compared with that of several thousand blogs. Results show that the two link structures are significantly different. We analyse the differences and infer the likely effect upon the performance of existing and future Web agents. The Semantic Web offers new opportunities to navigate the Web, but Web agents should be designed to take advantage of the emerging link structures, or their effectiveness will diminish.
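As an illustration of the kind of link-structure comparison described above, a minimal Python sketch follows. The crawl data, URLs and the degree-distribution comparison are hypothetical stand-ins for the authors' 160,000-page crawl and analysis, not their actual method or data.

    from collections import Counter

    def degree_distribution(link_graph):
        """link_graph maps each page URL to the set of URLs it links to."""
        degrees = Counter(len(targets) for targets in link_graph.values())
        total = sum(degrees.values())
        # Fraction of pages having each out-link count, sorted by degree.
        return {deg: count / total for deg, count in sorted(degrees.items())}

    # Hypothetical toy crawls standing in for the real data sets.
    web_crawl = {"a.example/p1": {"b.example", "c.example"},
                 "b.example/p2": {"a.example"}}
    blog_crawl = {"blog1.example/post": {"blog2.example", "a.example", "news.example"},
                  "blog2.example/post": {"blog1.example/post"}}

    print("web  :", degree_distribution(web_crawl))
    print("blogs:", degree_distribution(blog_crawl))

Comparing the two distributions (for example with a goodness-of-fit test on real crawl data) is one simple way to quantify whether blog link structures differ from the Web at large.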
Abstract:
This article is concerned with the risks associated with the monopolisation of information that is available from a single source only. Although there is a longstanding consensus that sole-source databases should not receive protection under the EU Database Directive, and there are legislative provisions to ensure that lawful users have access to a database’s contents, Ryanair v PR Aviation challenges this assumption by affirming that the use of non-protected databases can be restricted by contract. Owners of non-protected databases can contractually exclude lawful users from taking the benefit of statutorily permitted uses, because such databases are not covered by the legislation that declares this kind of contract null and void. We argue that this judgment is not consistent with the legislative history and can have a profound impact on the functioning of the digital single market, where new information services, such as meta-search engines or price-comparison websites, base their operation on the systematic extraction and re-utilisation of materials available from online sources. This is an issue that the Commission should address in a forthcoming evaluation of the Database Directive.
Abstract:
Competency management is a very important part of a well-functioning organisation. Unfortunately, competency descriptions are not uniformly specified or defined across national, sectoral or organisational borders, leading to an opaque competency description market with a multitude of competency frameworks and competency benchmarks. An ontology is a formalised description of a domain, which enables automated reasoning engines to be built that, by utilising the interrelations between entities, can make “intelligent” choices in different situations within the domain. By introducing formalised competency ontologies, automated tools, such as skill gap analysis, training suggestion generation, job search and recruitment, can be developed that compare and contrast different competency descriptions on the semantic level. The major problem with defining a common formalised ontology for competencies is that there are so many viewpoints of competencies and competency frameworks. Work within the TRACE project has focused on finding common trends within different competency frameworks in order to allow an intermediate competency description to be made, which other frameworks can reference. This research has shown that competencies can be divided up into “knowledge”, “skills” and what we call “others”. An ontology has been created based on this, with a simple structure of different “kinds” of “knowledges” and “skills” using semantic interrelations to define the basic semantic structure of the ontology. A prototype tool for performing a skill gap analysis has been developed. Personal profiles can be produced using the tool, and a skill gap analysis is performed against a desired competency profile by using an ontologically based inference engine, which is able to list the closest fit and possible proficiency gaps.
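As an illustration of the kind of skill gap analysis described above, here is a minimal Python sketch. The data model, the BROADER relation and the example profiles are hypothetical assumptions for illustration; they are not the TRACE project's ontology or prototype tool.

    # Hypothetical "is-a" relations: a broader competency can partially satisfy
    # a narrower requirement, at a reduced proficiency.
    BROADER = {"python": "programming", "java": "programming"}

    def skill_gap(personal, desired):
        """Return gaps between a personal profile and a desired profile.

        Both profiles map competency name -> proficiency level (1-5).
        """
        gaps = {}
        for skill, required in desired.items():
            have = personal.get(skill)
            if have is None and BROADER.get(skill) in personal:
                have = personal[BROADER[skill]] - 1  # broader skill counts, at a discount
            if have is None or have < required:
                gaps[skill] = required - (have or 0)
        return gaps

    profile = {"programming": 4, "project management": 2}
    vacancy = {"python": 3, "project management": 3, "ontologies": 2}
    print(skill_gap(profile, vacancy))  # {'project management': 1, 'ontologies': 2}

A full ontology-backed tool would replace the flat BROADER dictionary with reasoning over the semantic interrelations between competencies, but the gap computation follows the same pattern.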
Abstract:
Holocene tidal palaeochannels, Severn Estuary Levels, UK: a search for granulometric and foraminiferal criteria. Proceedings of the Geologists' Association, 117, 329-344. Grain-size characteristics (by laser granulometry) and foraminiferal assemblages have been established for silts accumulated in five dissimilar tidal palaeochannels of mid or late Holocene age in the Severn Estuary Levels, representative of muddy tidal systems. For purposes of general comparison, similar data were obtained from a representative active tidal inlet in the area, but all of these channels have been subject to human interference and are not relied upon as a model for environmental interpretation. Although the palaeochannel deposits differ substantially in their bedding characteristics and stratigraphical relationships from the level-bedded salt-marsh platform and mudflat deposits with which they are associated, and although the channel environment is distinctive morphologically and hydraulically, no critical textural differences could be found between the channel deposits and the associated facies. Similarly, no foraminiferal assemblages distinctive of a tidal channel were encountered. Instead, the assemblages compare with those from mudflats and salt-marsh platforms. It is concluded that the sides of the subfossil channels carried some vegetation, as was observed to be the case in the modern inlet. An alternative approach is necessary if concealed palaeochannel deposits are to be recognized in muddy systems from limited numbers of subsurface samples. Although the palaeochannels afforded no characteristic textural signature, they yield transverse grain-size patterns pointing to coastal movements during their evolution. Concave-up trends suggest outward coastal building, whereas convex-up ones point to marsh-edge retreat.
Abstract:
Enhanced phytoextraction proposes the use of soil amendments to increase the heavy-metal content of above-ground harvestable plant tissues. This study compares the effect of synthetic aminopolycarboxylic acids [ethylenediamine tetraacetic acid (EDTA), nitrilotriacetic acid (NTA), and diethylenetriamine pentaacetic acid (DTPA)] with a number of biodegradable, low-molecular-weight organic acids (citric acid, ascorbic acid, oxalic acid, salicylic acid, and NH4 acetate) as potential soil amendments for enhancing phytoextraction of heavy metals (Cu, Zn, Cd, Pb, and Ni) by Zea mays. The treatments in this study were applied at a dose of 2 mmol kg⁻¹ 1 d before sowing. To compare possible effects between presow and postgermination treatments, a second, smaller experiment was conducted in which EDTA, citric acid, and NH4 acetate were added 10 d after germination as opposed to 1 d before sowing. The soil used in this screening was a moderately contaminated topsoil derived from a dredged sediment disposal site. This site had been in an oxidized state for more than 8 years before being used in this research. The high carbonate, high organic matter, and high clay content characteristic of this type of sediment are thought to suppress heavy-metal phytoavailability. Both EDTA and DTPA resulted in increased levels of heavy metals in the above-ground biomass. However, the observed increases in uptake were not as large as reported in the literature. Neither the NTA nor the organic acid treatments had any significant effect on uptake when applied prior to sowing. This was attributed to the rapid mineralization of these substances and the relatively low doses applied. The generally low extraction observed in this experiment restricts the use of phytoextraction as an effective remediation alternative under the current conditions, with regard to the amendments used, applied dose (2 mmol kg⁻¹ soil), application time (presow), plant species (Zea mays), and sediment (calcareous clayey soil) under study.
Abstract:
In this paper, we present a distributed computing framework for problems characterized by a highly irregular search tree, whereby no reliable workload prediction is available. The framework is based on a peer-to-peer computing environment and dynamic load balancing. The system allows for dynamic resource aggregation, does not depend on any specific meta-computing middleware and is suitable for large-scale, multi-domain, heterogeneous environments, such as computational Grids. Dynamic load balancing policies based on global statistics are known to provide optimal load balancing performance, while randomized techniques provide high scalability. The proposed method combines both advantages and adopts distributed job-pools and a randomized polling technique. The framework has been successfully adopted in a parallel search algorithm for subgraph mining and evaluated on a molecular compounds dataset. The parallel application has shown good scalability and close-to-linear speedup in a distributed network of workstations.
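A minimal Python sketch of the randomized-polling idea described above follows. It is a toy single-process simulation, not the paper's framework; the worker count, job pool contents and "take half" policy are illustrative assumptions.

    import random

    random.seed(1)
    pools = {w: [] for w in range(4)}   # 4 workers, each with a local job pool
    pools[0] = list(range(40))          # an irregular search tree: all work starts on one worker

    def poll_random_peer(idle_worker):
        """Idle worker polls a randomly chosen peer and takes half of its jobs."""
        peers = [w for w in pools if w != idle_worker and pools[w]]
        if not peers:
            return False
        victim = random.choice(peers)   # randomized polling: no global statistics needed
        half = len(pools[victim]) // 2
        pools[idle_worker], pools[victim] = pools[victim][:half], pools[victim][half:]
        return True

    for step in range(6):               # a few balancing rounds
        for w in pools:
            if not pools[w]:
                poll_random_peer(w)
    print({w: len(jobs) for w, jobs in pools.items()})

In a real peer-to-peer deployment the polling would be a network message between workers rather than a local dictionary update, but the load-spreading behaviour is the same: work diffuses from loaded pools to idle ones without any central coordinator.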
Abstract:
For fifty years, computer chess has pursued an original goal of Artificial Intelligence, to produce a chess-engine to compete at the highest level. The goal has arguably been achieved, but that success has made it harder to answer questions about the relative playing strengths of man and machine. The proposal here is to approach such questions in a counter-intuitive way, handicapping or stopping-down chess engines so that they play less well. The intrinsic lack of man-machine games may be side-stepped by analysing existing games to place computer engines as accurately as possible on the FIDE ELO scale of human play. Move-sequences may also be assessed for likelihood if computer-assisted cheating is suspected.
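A minimal Python sketch of the kind of move-matching assessment alluded to above follows. The engine interface is a hypothetical stand-in (no real engine is called), and this is not the authors' method for Elo placement or cheating detection.

    def engine_best_move(position):
        """Stand-in for a real engine query (e.g. via a UCI wrapper); assumed here."""
        raise NotImplementedError

    def match_rate(positions, played_moves, best_move=engine_best_move):
        """Fraction of played moves that coincide with the engine's first choice."""
        matches = sum(1 for pos, move in zip(positions, played_moves)
                      if best_move(pos) == move)
        return matches / len(played_moves)

    # Usage with a toy 'engine' that always prefers the same move:
    toy = lambda pos: "e2e4"
    print(match_rate(["startpos"], ["e2e4"], best_move=toy))  # 1.0

A high agreement rate over many positions is only suggestive on its own; the article's point is that such likelihood assessments need engines calibrated against the human rating scale before they can support conclusions.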
Abstract:
Our ability to identify, acquire, store, enquire on and analyse data is increasing as never before, especially in the GIS field. Technologies are becoming available to manage a wider variety of data and to make intelligent inferences on that data. The mainstream arrival of large-scale database engines is not far away. The experience of using the first such products tells us that they will radically change data management in the GIS field.