923 results for E-Commerce, Web Search Engines
Abstract:
Models are becoming increasingly important in the software development process. As a consequence, the number of models being used is increasing, and so is the need for efficient mechanisms to search them. Various existing search engines could be used for this purpose, but they lack features to properly search models, mainly because they are strongly focused on text-based search. This paper presents Moogle, a model search engine that uses metamodeling information to create richer search indexes and to allow more complex queries to be performed. The paper also presents the results of an evaluation of Moogle, which showed that the metamodel information improves the accuracy of the search.
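As an illustration of the kind of metamodel-aware indexing the abstract describes, the sketch below indexes model elements under both their name tokens and their metamodel type, so queries can constrain matches by type. The class, identifiers and data are hypothetical, not Moogle's actual code.

```python
# Minimal sketch (not Moogle's implementation) of a metamodel-aware index:
# each model element is indexed under its name tokens together with its
# metamodel type, so queries can filter matches by type.
from collections import defaultdict

class MetamodelIndex:
    def __init__(self):
        self.postings = defaultdict(set)  # (metatype, token) -> element ids

    def add(self, element_id, metatype, name):
        for token in name.lower().split():
            self.postings[(metatype, token)].add(element_id)

    def search(self, metatype, token):
        # Return elements of the given metamodel type whose name contains the token.
        return self.postings.get((metatype, token.lower()), set())

index = MetamodelIndex()
index.add("m1#Customer", "Class", "Customer")
index.add("m1#name", "Attribute", "customer name")
print(index.search("Class", "customer"))      # {'m1#Customer'}
print(index.search("Attribute", "customer"))  # {'m1#name'}
```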
Abstract:
Internet Service Providers' liability for copyright infringement is a debated issue in France and Belgium, particularly with respect to intermediaries such as providers of hyperlinks and location tool services, for which the e-commerce directive does not explicitly set any exemption from liability. The question therefore arises, among other things, whether the safe harbour provisions laid down for caching and hosting could also apply to search engines. French and Belgian courts have recently had to decide on this issue in several cases concerning Google's complementary tools such as Google Videos, Google Images, Google Suggest and Google News. This article seeks to summarise and assess this recent case law.
Quality evaluation of the available Internet information regarding pain during orthodontic treatment
Abstract:
OBJECTIVE To investigate the quality of the information disseminated via the Internet regarding pain experienced by orthodontic patients. MATERIALS AND METHODS A systematic online search was performed for 'orthodontic pain' and 'braces pain' separately using five search engines. The first 25 results from each search term-engine combination were pooled for analysis. After excluding advertising sites, discussion groups, video feeds, and links to scientific articles, 25 Web pages were evaluated in terms of accuracy, readability, accessibility, usability, and reliability using recommended research methodology; reference textbook material; the Flesch Reading Ease Score; and the LIDA instrument. Author and information details were also recorded. RESULTS Overall, the results indicated variable quality of the available informational material. Although the readability of the Web sites was generally acceptable, the individual LIDA categories were rated of medium or low quality, with average scores ranging from 16.9% to 86.2%. The orthodontic relevance of the Web sites was not accompanied by the highest assessment results, and vice versa. CONCLUSIONS The quality of the orthodontic pain information provided by Web sources appears to be highly variable. Further structural development of health information technology, along with referral of the public to reliable sources by specialists, is recommended.
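The Flesch Reading Ease Score used in this methodology follows the standard formula FRE = 206.835 - 1.015 (words/sentences) - 84.6 (syllables/words). The sketch below computes it with a rough vowel-group syllable heuristic; the heuristic and the sample sentence are assumptions for illustration, since real readability tools use dictionaries or phonetic rules.

```python
import re

def count_syllables(word):
    # Rough heuristic: count groups of vowels; real readability tools
    # use pronunciation dictionaries or better phonetic rules.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n = max(1, len(words))
    # Standard Flesch Reading Ease formula.
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

print(round(flesch_reading_ease("Braces can make your teeth sore for a few days."), 1))
```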
Abstract:
Specialized search engines such as PubMed, MedScape or Cochrane have dramatically increased the visibility of biomedical scientific results. These web-based tools allow physicians to access scientific papers instantly. However, this decisive improvement has not had a proportional impact on clinical practice, owing to the lack of advanced search methods. Even queries highly specific to a concrete pathology frequently retrieve too much information, with publications relevant to the patients treated by the physician falling beyond the scope of the results examined. In this work we present a new method to improve scientific article search using patient information. Two pathologies were used within the project to retrieve literature relevant to patient data and to integrate it with other sources. Promising results suggest the suitability of the approach, highlighting publications that deal with patient features and facilitating literature search for physicians.
Abstract:
Evaluating and measuring the pedagogical quality of Learning Objects is essential for achieving successful web-based education. On the one hand, teachers need some assurance of the quality of teaching resources before making them part of the curriculum. On the other hand, Learning Object Repositories need to include quality information in the ranking metrics used by their search engines in order to save users time when searching. For these reasons, several models such as LORI (Learning Object Review Instrument) have been proposed to evaluate Learning Object quality from a pedagogical perspective. However, not much effort has been put into defining and evaluating quality metrics based on those models. This paper proposes and evaluates a set of pedagogical quality metrics based on LORI. The work presented in this paper shows that these metrics can be used effectively and reliably to provide quality-based sorting of search results. Moreover, it provides strong evidence that evaluating Learning Objects from a pedagogical perspective can notably enhance Learning Object search if suitable evaluation models and quality metrics are used. An evaluation of the LORI model is also described. Finally, all the presented metrics are compared and a discussion of their weaknesses and strengths is provided.
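As a rough illustration of quality-based sorting of search results, the sketch below combines a text-relevance score with an aggregate of LORI-style dimension ratings. The dimension names, weights and combination rule are illustrative assumptions, not the metrics proposed in the paper.

```python
# Hedged sketch: sort search results by combining text relevance with an
# aggregate pedagogical-quality score derived from LORI-style ratings
# (the weighting scheme below is illustrative, not the paper's metric).
def lori_quality(ratings, weights=None):
    # ratings: dict of LORI dimension -> score on a 1-5 scale
    if weights is None:
        weights = {dim: 1.0 for dim in ratings}
    total = sum(weights[d] * ratings[d] for d in ratings)
    return total / sum(weights[d] for d in ratings)

def rank(results):
    # results: list of (learning_object_id, relevance, lori_ratings)
    return sorted(results,
                  key=lambda r: 0.7 * r[1] + 0.3 * lori_quality(r[2]) / 5.0,
                  reverse=True)

results = [
    ("lo-1", 0.82, {"content_quality": 3, "presentation_design": 2}),
    ("lo-2", 0.78, {"content_quality": 5, "presentation_design": 4}),
]
print([r[0] for r in rank(results)])
```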
Abstract:
The Protein Information Resource, in collaboration with the Munich Information Center for Protein Sequences (MIPS) and the Japan International Protein Information Database (JIPID), produces the most comprehensive and expertly annotated protein sequence database in the public domain, the PIR-International Protein Sequence Database. To provide timely and high quality annotation and promote database interoperability, the PIR-International employs rule-based and classification-driven procedures based on controlled vocabulary and standard nomenclature and includes status tags to distinguish experimentally determined from predicted protein features. The database contains about 200 000 non-redundant protein sequences, which are classified into families and superfamilies, with their domains and motifs identified. Entries are extensively cross-referenced to other sequence, classification, genome, structure and activity databases. The PIR web site features search engines that use sequence similarity and database annotation to facilitate the analysis and functional identification of proteins. The PIR-International databases and search tools are accessible on the PIR web site at http://pir.georgetown.edu/ and at the MIPS web site at http://www.mips.biochem.mpg.de. The PIR-International Protein Sequence Database and other files are also available by FTP.
Abstract:
Geographic annotation of documents consists of adopting metadata to identify place names and the positions of their occurrences in the text. This information is useful, for example, to search engines. From the toponyms mentioned in a text it is possible to identify the spatial context in which its subject is embedded, which makes it possible to group documents that refer to the same context, assigning the document a geographic scope. This Master's dissertation presents a new method, named Geofier, for determining the geographic scope of documents. The novelty introduced by Geofier is the ability to identify a document's geographic scope by means of machine-learning classifiers trained without the use of a gazetteer and without assumptions about the language of the analysed texts. Wikipedia was used as the source of a set of geographically annotated documents for training a hierarchy of Naive Bayes and Support Vector Machine (SVM) classifiers. A performance comparison between Geofier and a reimplementation of the Web-a-Where system was carried out for determining the geographic scope of Wikipedia texts. The Geofier hierarchy was trained and evaluated in two ways: using toponyms from the same gazetteer as Web-a-Where and using n-grams extracted from the training documents. As a result, Geofier outperformed the Web-a-Where reimplementation.
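A minimal sketch of the general idea (gazetteer-free, language-agnostic prediction of a document's geographic scope from n-gram features) is shown below using scikit-learn. The toy documents, labels and single flat classifier are placeholders; the actual system trains a hierarchy of Naive Bayes and SVM classifiers on geographically annotated Wikipedia articles.

```python
# Illustrative sketch of the general idea (not Geofier's actual code):
# predict a document's geographic scope with a Naive Bayes classifier over
# character n-gram features, with no gazetteer lookup.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

docs = [
    "carnaval praia samba futebol",       # toy Brazil-scoped text
    "parliament thames underground tea",  # toy UK-scoped text
]
labels = ["Brazil", "United Kingdom"]

model = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    MultinomialNB(),
)
model.fit(docs, labels)
print(model.predict(["double decker bus near the thames"]))
```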
Abstract:
This paper provides an overview of case study research that investigated the use of Digital Library (DL) resources in two undergraduate classes and explored faculty and students' perceptions of educational digital libraries. This study found that students and faculty use academic DLs primarily for textual resources, but turn to the open Web for visual and multimedia resources. The study participants did not perceive academic libraries as a useful source of digital images and used search engines when searching for visual resources. The limited use of digital library resources for teaching and learning is associated with perceptions of usefulness and ease of use, especially if considered in a broader information landscape, in conjunction with other library information systems, and in the context of Web resources. The limited use of digital libraries is related to the following perceptions: 1) library systems are not viewed as user-friendly, which in turn discourages potential users from trying the DLs provided by academic libraries; and 2) academic libraries are perceived as places of primarily textual resources. In addition, perceptions of usefulness, especially with regard to relevance of content, coverage, and currency, seem to have a negative effect on users' intention to use DLs, especially when searching for visual materials.
Abstract:
When they look at Internet policy, EU policymakers seem mesmerised, if not bewitched, by the word ‘neutrality’. Originally confined to the infrastructure layer, today the neutrality rhetoric is being expanded to multi-sided platforms such as search engines and more generally online intermediaries. Policies for search neutrality and platform neutrality are invoked to pursue a variety of policy objectives, encompassing competition, consumer protection, privacy and media pluralism. This paper analyses this emerging debate and comes to a number of conclusions. First, mandating net neutrality at the infrastructure layer might have some merit, but it certainly would not make the Internet neutral. Second, since most of the objectives initially associated with network neutrality cannot be realistically achieved by such a rule, the case for network neutrality legislation would have to stand on different grounds. Third, the fact that the Internet is not neutral is mostly a good thing for end users, who benefit from intermediaries that provide them with a selection of the over-abundant information available on the Web. Fourth, search neutrality and platform neutrality are fundamentally flawed principles that contradict the economics of the Internet. Fifth, neutrality is a very poor and ineffective recipe for media pluralism, and as such should not be invoked as the basis of future media policy. All these conclusions have important consequences for the debate on the future EU policy for the Digital Single Market.
Abstract:
The role of gender differences in the consumption of goods and services is well established in many areas of consumer behaviour and computer use, and yet there has been only limited research into such gender-based differences in the information search behaviour of Internet users. This paper reports the gender-based results of an exploratory study of consumer external information search on the web. The study investigated consumer characteristics, web search behaviour, and the post web search outcomes of purchase decision status and consumer judgements of search usefulness and satisfaction. Gender-based differences are reported in all three areas. Consideration of the results suggests issues that could inhibit the adoption of online purchasing by female web users. The implications of these results are discussed and a future research agenda is proposed.
Abstract:
Ontology search and reuse is becoming increasingly important as the quest for methods to reduce the cost of constructing such knowledge structures continues. A number of ontology libraries and search engines are coming into existence to facilitate locating and retrieving potentially relevant ontologies. The number of ontologies available for reuse is steadily growing, and so is the need for methods to evaluate and rank existing ontologies in terms of their relevance to the needs of the knowledge engineer. This paper presents AKTiveRank, a prototype system for ranking ontologies based on a number of structural metrics.
Abstract:
Representing knowledge using domain ontologies has been shown to be a useful mechanism and format for managing and exchanging information. Due to the difficulty and cost of building ontologies, a number of ontology libraries and search engines are coming into existence to facilitate reusing such knowledge structures. The need for ontology ranking techniques is becoming crucial as the number of ontologies available for reuse continues to grow. In this paper we present AKTiveRank, a prototype system for ranking ontologies based on an analysis of their structures. We describe the metrics used in the ranking system and present an experiment on ranking the ontologies returned by a popular search engine for an example query.
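One structural metric in the spirit of AKTiveRank's class-match idea can be sketched as follows: score an ontology by how well its class labels cover the query terms, with exact matches weighted more than partial matches. The weights and the toy ontologies are illustrative assumptions, not the metrics reported in the paper.

```python
# Hedged sketch of a class-match style structural metric for ranking ontologies.
def class_match_score(query_terms, class_labels, w_exact=1.0, w_partial=0.4):
    score = 0.0
    labels = [c.lower() for c in class_labels]
    for term in (t.lower() for t in query_terms):
        if term in labels:
            score += w_exact                            # exact label match
        elif any(term in label for label in labels):
            score += w_partial                          # partial label match
    return score

ontologies = {
    "onto-A": ["Student", "University", "Course"],
    "onto-B": ["Person", "GraduateStudent", "Organization"],
}
query = ["student", "university"]
ranking = sorted(ontologies,
                 key=lambda o: class_match_score(query, ontologies[o]),
                 reverse=True)
print(ranking)
```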
Abstract:
Finding relevant results for a query submitted to multiple search engines is an important task. This paper formulates the problem of aggregating and ranking the results of multiple search engines as a minimax linear programming model. Besides the novel application, this study detects the most relevant information among the returned set of ranked lists of documents retrieved by the distinct search engines. Furthermore, two numerical examples are used to illustrate the usefulness of the proposed approach.
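A minimax aggregation of this kind can be sketched as a linear program. The model below, which minimizes the maximum deviation of the aggregate scores from each engine's normalized scores and is solved with scipy.optimize.linprog, is an illustrative assumption rather than the paper's exact formulation; the score matrix is toy data.

```python
# Illustrative minimax LP: choose aggregate scores x_d that minimize the
# maximum deviation t from the normalized scores each engine assigns.
import numpy as np
from scipy.optimize import linprog

# scores[k][d]: normalized score engine k gives document d (toy data).
scores = np.array([
    [0.9, 0.4, 0.1],   # engine 1
    [0.7, 0.6, 0.2],   # engine 2
    [0.8, 0.3, 0.4],   # engine 3
])
n_engines, n_docs = scores.shape

# Variables: x_0 .. x_{n_docs-1}, t. Objective: minimize t.
c = np.zeros(n_docs + 1)
c[-1] = 1.0

A_ub, b_ub = [], []
for k in range(n_engines):
    for d in range(n_docs):
        row = np.zeros(n_docs + 1)
        row[d], row[-1] = 1.0, -1.0      #  x_d - t <= s_kd
        A_ub.append(row); b_ub.append(scores[k, d])
        row = np.zeros(n_docs + 1)
        row[d], row[-1] = -1.0, -1.0     # -x_d - t <= -s_kd
        A_ub.append(row); b_ub.append(-scores[k, d])

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, 1)] * n_docs + [(0, None)])
aggregate = res.x[:n_docs]
print("aggregate scores:", np.round(aggregate, 3))
print("ranking (best first):", np.argsort(-aggregate))
```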
Abstract:
This paper summarizes the scientific work presented at the 32nd European Conference on Information Retrieval. It demonstrates that information retrieval (IR) as a research area continues to thrive, with progress being made in three complementary sub-fields: IR theory and formal methods, together with indexing and query representation issues; Web IR as a primary application area; and research into evaluation methods and metrics. It is the combination of these areas that gives IR its solid scientific foundations. The paper also illustrates that significant progress has been made in other areas of IR. The keynote speakers addressed three such subject fields: social search engines using personalization and recommendation technologies, the renewed interest in applying natural language processing to IR, and multimedia IR as another fast-growing area.
Abstract:
This work contributes to the development of search engines that self-adapt their size in response to fluctuations in workload. Deploying a search engine in an Infrastructure as a Service (IaaS) cloud facilitates allocating or deallocating computational resources to or from the engine. In this paper, we focus on the problem of regrouping the metric-space search index when the number of virtual machines used to run the search engine is modified to reflect changes in workload. We propose an algorithm for incrementally adjusting the index to fit the varying number of virtual machines. We tested its performance using a custom-built prototype search engine deployed in the Amazon EC2 cloud, while calibrating the results to compensate for the performance fluctuations of the platform. Our experiments show that, when compared with computing the index from scratch, the incremental algorithm speeds up the index computation 2–10 times while maintaining similar search performance.
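The incremental idea can be illustrated with a simple partition-reassignment sketch: instead of rebuilding the index, existing index partitions are redistributed over the new number of machines while moving as few of them as possible. This is an illustrative assumption about the general approach, not the paper's algorithm.

```python
# Hedged sketch: reassign existing index partitions when the machine count
# changes, moving only the partitions that must move instead of rebuilding.
def rebalance(assignment, new_machine_count):
    # assignment: dict partition_id -> machine index (may exceed the new count)
    partitions = sorted(assignment)
    target = {m: [] for m in range(new_machine_count)}
    overflow = []
    for p in partitions:
        m = assignment[p]
        if m < new_machine_count:
            target[m].append(p)
        else:
            overflow.append(p)           # its old machine no longer exists
    quota = -(-len(partitions) // new_machine_count)  # ceiling division
    # Shed excess partitions from over-full machines, then refill under-full ones.
    for m in target:
        while len(target[m]) > quota:
            overflow.append(target[m].pop())
    for m in target:
        while len(target[m]) < quota and overflow:
            target[m].append(overflow.pop())
    return {p: m for m, ps in target.items() for p in ps}

old = {p: p % 4 for p in range(12)}      # 12 partitions spread over 4 machines
print(rebalance(old, 3))                 # shrink the cluster to 3 machines
```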