962 results for Web, Search Engine, Overlap
Abstract:
Introduction: To analyse the quality of the websites of school catering services and their food education content, and to gain a first experience with the EDALCAT evaluation tool. Material and methods: Cross-sectional descriptive study. The study population consists of the websites of catering companies that manage school canteens. The sample was obtained using the Google search engine and a ranking of the main catering companies by revenue, selecting those that had a website. For the pilot test, ten websites were selected according to their geographical proximity to the city of Alicante and their revenue level. To evaluate the websites, a questionnaire (EDALCAT) was designed, composed of a first block of quality predictors with 19 variables on reliability, design and navigation, and a second block of specific food education content with 19 variables on content and educational activities. Results: Positive results were obtained for 31 of the 38 questionnaire variables, the exceptions being the items "Search engine", "Language" (40%) and "Help" (10%) in the quality-predictors block, and the items "Workshops", "Recipes", "Food and nutrition website" (40%) and "Examples" (30%) in the block of specific food education content. All the websites evaluated exceed 50% compliance with the quality criteria and the minimum food education content, and only one of them fails to meet the established minimum activity level. Conclusions: The quality predictors and the specific food education content gave good results for all the websites evaluated. Most of them obtained a high score overall and in the block-by-block analysis. After the pilot study the questionnaire was modified, yielding the final version of EDALCAT. In general terms, EDALCAT appears to be suitable for evaluating the quality of catering service websites and their food education content; however, the present study cannot be considered a validation of the tool.
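A minimal sketch of the scoring logic this abstract describes: two blocks of 19 binary items each, scored as the percentage of criteria met against a 50% threshold. The item names and values below are illustrative assumptions, not the actual EDALCAT instrument.

```python
# Illustrative sketch of the EDALCAT scoring described above: two blocks
# of 19 binary items each, with a 50% compliance threshold.
# Item names and values are assumptions, not the real instrument.

def block_compliance(items: dict) -> float:
    """Return the fraction of criteria met within one questionnaire block."""
    return sum(items.values()) / len(items)

quality_predictors = {"search_engine": False, "language": False,
                      "help": False, "authorship": True}      # ... 19 items in total
food_education = {"workshops": False, "recipes": True,
                  "nutrition_site": False, "examples": True}  # ... 19 items in total

for name, block in [("quality predictors", quality_predictors),
                    ("food education content", food_education)]:
    score = block_compliance(block)
    verdict = "meets" if score >= 0.5 else "fails"
    print(f"{name}: {score:.0%} compliance ({verdict} the 50% threshold)")
```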
Abstract:
A location-based search engine must be able to find and assign proper locations to Web resources. Host, content, and metadata location information are not sufficient to describe the location of resources, as they are ambiguous or unavailable for many documents. We introduce target location as the location of the users of Web resources. Target location is content-independent and can be applied to all types of Web resources. A novel method is introduced which uses log files and IP addresses to track the visitors of websites. The experiments show that target location can be calculated for almost all documents on the Web at country level and for the majority of them at state and city levels. It can be assigned to Web resources as a new definition and dimension of location. It can be used separately or together with other relevant locations to define the geography of Web resources. This compensates for insufficient geographical information on Web resources and would facilitate the design and development of location-based search engines.
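A minimal sketch of the target-location idea under stated assumptions: visitor IPs are extracted from a standard access log and mapped to countries, and the resulting distribution approximates where the resource's users are. The log path and the ip_to_country stub are assumptions; a real system would query an IP-geolocation database such as MaxMind GeoLite2.

```python
# Sketch: estimate a resource's target location by geolocating visitor IPs
# found in its server access log. The lookup table is a stand-in stub.
import re
from collections import Counter

LOG_LINE = re.compile(r"^(\d+\.\d+\.\d+\.\d+) ")  # Common Log Format: IP first

def ip_to_country(ip: str) -> str:
    """Stub lookup; replace with a real IP-geolocation database query."""
    return {"203.0.113.7": "AU", "198.51.100.2": "US"}.get(ip, "unknown")

def target_location(log_path: str) -> Counter:
    """Count visits per country; the modal country is the target location."""
    countries = Counter()
    with open(log_path) as log:
        for line in log:
            match = LOG_LINE.match(line)
            if match:
                countries[ip_to_country(match.group(1))] += 1
    return countries

# countries = target_location("access.log")
# print(countries.most_common(1))  # e.g. [('US', 1234)]
```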
Abstract:
When a query is passed to multiple search engines, each search engine returns a ranked list of documents. Researchers have demonstrated that combining results, in the form of a "metasearch engine", produces a significant improvement in coverage and search effectiveness. This paper proposes a linear programming model for optimizing the merged ranked list produced by a given group of Web search engines for an issued query. An application with a numerical illustration shows the advantages of the proposed method.
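The abstract does not spell out the paper's LP formulation, so the sketch below illustrates a neighbouring, classical formulation instead: Spearman-footrule rank aggregation, whose linear program is integral and therefore solvable exactly as a bipartite assignment problem.

```python
# Footrule-optimal rank aggregation as an assignment problem: a classical
# LP-based formulation, shown here in place of the paper's (unstated) model.
import numpy as np
from scipy.optimize import linear_sum_assignment

def aggregate(rankings: list) -> list:
    """Merge several engines' ranked lists into one consensus ranking."""
    docs = sorted({d for r in rankings for d in r})
    n = len(docs)
    # cost[i, p] = total displacement if doc i is placed at position p
    cost = np.zeros((n, n))
    for r in rankings:
        pos = {d: r.index(d) for d in r}
        for i, d in enumerate(docs):
            for p in range(n):
                cost[i, p] += abs(pos.get(d, n) - p)  # unranked docs sit at rank n
    rows, cols = linear_sum_assignment(cost)  # minimises total footrule distance
    order = sorted(zip(cols, rows))           # position -> document index
    return [docs[i] for _, i in order]

print(aggregate([["a", "b", "c"], ["b", "a", "c"], ["a", "c", "b"]]))
# -> ['a', 'b', 'c']
```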
Abstract:
As the Web evolves unexpectedly fast, information grows explosively. Useful resources become more and more difficult to find because of their dynamic and unstructured characteristics. A vertical search engine is designed and implemented for a specific domain. Instead of processing the giant volume of miscellaneous information distributed across the Web, a vertical search engine targets relevant information in specific domains or topics and provides users with up-to-date information, highly focused insights, and actionable knowledge representation. As mobile devices become more popular, the nature of search is changing: acquiring information on a mobile device poses unique requirements on traditional search engines, which will potentially change every feature they used to have. In short, users expect search engines that can satisfy their individual information needs, adapt to their current situation, and present highly personalized search results. In my research, the next-generation vertical search engine means utilizing and enriching existing domain information to close the loop of the vertical search engine's system, so that knowledge discovery, actionable information extraction, and user interest modeling and recommendation mutually facilitate one another. I investigate three problems in which domain taxonomy plays an important role: taxonomy generation using a vertical search engine, actionable information extraction based on domain taxonomy, and the use of ensemble taxonomy to capture user interests. As the fundamental theory, ultrametrics, dendrograms, and hierarchical clustering are discussed in depth. Methods for taxonomy generation based on my research on hierarchical clustering are developed. The related vertical search engine techniques are applied in practice in the disaster management domain; in particular, three disaster information management systems are developed and presented as real use cases of my research work.
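A minimal sketch of the dendrogram-based taxonomy generation this abstract alludes to, under assumed toy data: terms are embedded as vectors, clustered agglomeratively (the linkage induces an ultrametric dendrogram), and the dendrogram is cut at a distance threshold to obtain one taxonomy level.

```python
# Taxonomy generation via hierarchical clustering: embed terms, build the
# dendrogram, and cut it at a threshold. The term vectors are toy values.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

terms = ["flood", "hurricane", "earthquake", "shelter", "evacuation"]
vectors = np.array([[0.9, 0.1, 0.0],   # illustrative term embeddings
                    [0.8, 0.2, 0.1],
                    [0.1, 0.9, 0.0],
                    [0.2, 0.1, 0.9],
                    [0.3, 0.2, 0.8]])

# Average-linkage clustering yields an ultrametric dendrogram over the terms.
tree = linkage(vectors, method="average", metric="euclidean")

# Cutting the dendrogram at a distance threshold gives one taxonomy level.
labels = fcluster(tree, t=0.5, criterion="distance")
for term, label in zip(terms, labels):
    print(f"{term} -> cluster {label}")
```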
Abstract:
VANTI, Nadia. Mapeamento das Instituições Federais de Ensino Superior da Região Nordeste do Brasil na Web. Informação & informação, Londrina, v. 15, p. 55-67, 2010
Abstract:
This thesis investigates how web search evaluation can be improved using historical interaction data. Modern search engines combine offline and online evaluation approaches in a sequence of steps that a tested change needs to pass through to be accepted as an improvement and subsequently deployed. We refer to such a sequence of steps as an evaluation pipeline. In this thesis, we consider the evaluation pipeline to contain three sequential steps: an offline evaluation step, an online evaluation scheduling step, and an online evaluation step. We show that historical user interaction data can aid in improving the accuracy or efficiency of each of these steps and, as a result, the overall efficiency of the entire evaluation pipeline. Firstly, we investigate how user interaction data can be used to build accurate offline evaluation methods for query auto-completion mechanisms. We propose a family of offline evaluation metrics for query auto-completion that represent the effort the user has to spend in order to submit their query. The parameters of our proposed metrics are trained against a set of user interactions recorded in the search engine's query logs. From our experimental study, we observe that our proposed metrics are significantly more correlated with an online user satisfaction indicator than the metrics proposed in the existing literature. Hence, fewer changes will pass the offline evaluation step only to be rejected after the online evaluation step, which increases the efficiency of the entire evaluation pipeline. Secondly, we formulate the problem of the optimised scheduling of online experiments. We tackle this problem by considering a greedy scheduler that prioritises the evaluation queue according to the predicted likelihood of success of a particular experiment. This predictor is trained on a set of online experiments and uses a diverse set of features to represent an online experiment. Our study demonstrates that a higher number of successful experiments per unit of time can be achieved by deploying such a scheduler on the second step of the evaluation pipeline. Consequently, we argue that the efficiency of the evaluation pipeline can be increased. Next, to improve the efficiency of the online evaluation step, we propose the Generalised Team Draft interleaving framework. Generalised Team Draft considers both the interleaving policy (how often a particular combination of results is shown) and click scoring (how important each click is) as parameters in a data-driven optimisation of the interleaving sensitivity. Further, Generalised Team Draft is applicable beyond domains with a list-based representation of results, e.g. in domains with a grid-based representation, such as image search. Our study using datasets of interleaving experiments performed both in document and image search domains demonstrates that Generalised Team Draft achieves the highest sensitivity. A higher sensitivity indicates that the interleaving experiments can be deployed for a shorter period of time or use a smaller sample of users. Importantly, Generalised Team Draft optimises the interleaving parameters w.r.t. historical interaction data recorded in the interleaving experiments. Finally, we propose to apply sequential testing methods to reduce the mean deployment time of the interleaving experiments, and we adapt two sequential tests for interleaving experimentation.
We demonstrate that one can achieve a significant decrease in experiment duration by using such sequential testing methods. The highest efficiency is achieved by the sequential tests that adjust their stopping thresholds using historical interaction data recorded in diagnostic experiments. Our further experimental study demonstrates that cumulative gains in the online experimentation efficiency can be achieved by combining the interleaving sensitivity optimisation approaches, including Generalised Team Draft, and the sequential testing approaches. Overall, the central contributions of this thesis are the proposed approaches to improve the accuracy or efficiency of the steps of the evaluation pipeline: the offline evaluation frameworks for the query auto-completion, an approach for the optimised scheduling of online experiments, a general framework for the efficient online interleaving evaluation, and a sequential testing approach for the online search evaluation. The experiments in this thesis are based on massive real-life datasets obtained from Yandex, a leading commercial search engine. These experiments demonstrate the potential of the proposed approaches to improve the efficiency of the evaluation pipeline.
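For concreteness, here is a sketch of standard Team Draft interleaving, the baseline that the Generalised Team Draft framework described above parameterises: the uniform coin flip and the one-click-one-credit scoring shown below are the fixed defaults that the framework treats as learnable parameters.

```python
# Standard Team Draft interleaving. Each round, a coin flip decides which
# ranker picks its next unplaced document; clicks are credited to the team
# that contributed the clicked slot.
import random

def team_draft_interleave(ranking_a, ranking_b):
    """Return the interleaved list and which team contributed each slot."""
    interleaved, teams, used = [], [], set()
    total = len(set(ranking_a) | set(ranking_b))
    while len(interleaved) < total:
        # Random order per round: the uniform interleaving policy.
        for team, ranking in random.sample([("A", ranking_a), ("B", ranking_b)], 2):
            doc = next((d for d in ranking if d not in used), None)
            if doc is not None:
                interleaved.append(doc)
                teams.append(team)
                used.add(doc)
    return interleaved, teams

def credit_clicks(teams, clicked_slots):
    """Count clicks per team; the team with more credited clicks wins."""
    wins = {"A": 0, "B": 0}
    for slot in clicked_slots:
        wins[teams[slot]] += 1
    return wins

docs, teams = team_draft_interleave(["d1", "d2", "d3"], ["d3", "d1", "d4"])
print(docs, teams, credit_clicks(teams, clicked_slots=[0]))
```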
Abstract:
Semantic Web technologies such as RDF, OWL, and SPARQL have seen strong growth and adoption in recent years. Projects such as DBPedia and Open Street Map are beginning to show the true potential of Linked Open Data. However, semantic search engines still lag behind this surge of semantic technologies. The available solutions rely mostly on natural language processing resources. Powerful Semantic Web tools such as ontologies, inference engines, and semantic query languages are not yet common. In addition, there are certain difficulties in implementing a semantic search engine. As demonstrated in this dissertation, a federated architecture is necessary in order to exploit the full potential of Linked Open Data. However, a federated system in that environment presents performance problems that must be solved through cooperation between data sources. The current standard query language of the Semantic Web, SPARQL, does not offer a mechanism for cooperation between data sources. This dissertation proposes a federated architecture with mechanisms that enable such cooperation. It addresses the performance problem by proposing a centrally managed index as well as mappings between the data models of the individual data sources. The proposed architecture is modular, allowing repositories and functionality to grow simply and in a decentralized way, much like Linked Open Data and the World Wide Web itself. The architecture handles both natural-language term searches and formal queries in the SPARQL language. However, the repositories considered contain only data in RDF format. This dissertation relies on multiple shared and interlinked ontologies.
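A minimal sketch of plain, non-cooperative SPARQL federation, the baseline setting the dissertation improves upon: the same query is sent to each endpoint and the bindings are merged client-side. The endpoint list and query are illustrative, and the proposed cooperative index and data-model mappings are not reproduced here.

```python
# Client-side federation over Linked Open Data endpoints: run the same
# query everywhere and merge the bindings. Illustrative, not optimised.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINTS = ["https://dbpedia.org/sparql"]  # add further endpoints as needed

QUERY = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT DISTINCT ?s ?label WHERE {
  ?s rdfs:label ?label .
  FILTER (lang(?label) = "en" && CONTAINS(LCASE(?label), "lisbon"))
} LIMIT 5
"""

def federated_search(endpoints, query):
    """Run the query on every endpoint and merge the result bindings."""
    merged = []
    for url in endpoints:
        client = SPARQLWrapper(url)
        client.setQuery(query)
        client.setReturnFormat(JSON)
        for row in client.query().convert()["results"]["bindings"]:
            merged.append((url, row["s"]["value"], row["label"]["value"]))
    return merged

for endpoint, uri, label in federated_search(ENDPOINTS, QUERY):
    print(endpoint, uri, label)
```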
Abstract:
Several Web-based on-line judges or on-line programming trainers have been developed in order to allow students to train their programming skills. However, their pedagogical functionalities in the learning of programming have not been clearly defined. EduJudge is a project which aims to integrate the "UVA On-line Judge", an existing on-line programming trainer with an important number of problems and users, into an effective educational environment consisting of the e-learning platform Moodle and the competitive learning tool QUESTOURnament. The result is the EduJudge system, which allows teachers to apply different pedagogical approaches using a proven e-learning platform, makes problems easy to find through an effective search engine, and provides an automated evaluation of the solutions submitted to these problems. The final objective is to provide new learning strategies to motivate students and present programming as an easy and attractive challenge. EduJudge has been tried and tested in three algorithms and programming courses in three different Engineering degrees. The students' motivation and satisfaction levels were analysed alongside the effects of the EduJudge system on students' academic outcomes. Results indicate that both students and teachers found that, among other benefits, the EduJudge system facilitates the learning process. Furthermore, the experiment also showed an improvement in students' academic outcomes. It must be noted that the students' level of satisfaction did not depend on their computer skills or their gender.
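A toy sketch of the automated output-comparison step that an online judge of this kind performs on each submitted solution; the verdict names, time limit, and executable path are illustrative assumptions, not EduJudge's actual implementation.

```python
# Minimal output-comparison judge: run a submission on one test case and
# map the outcome to a verdict. Limits and verdicts are illustrative.
import subprocess

def judge(executable: str, test_input: str, expected_output: str) -> str:
    """Run a submission on one test case and return a verdict."""
    try:
        run = subprocess.run([executable], input=test_input,
                             capture_output=True, text=True, timeout=2)
    except subprocess.TimeoutExpired:
        return "Time Limit Exceeded"
    if run.returncode != 0:
        return "Runtime Error"
    if run.stdout.strip() == expected_output.strip():
        return "Accepted"
    return "Wrong Answer"

# print(judge("./solution", "3 4\n", "7\n"))
```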
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies
Abstract:
A large share of online traffic originates from search engine results pages. Search engines are today a fundamental tool that tourists use to find and filter the information needed to plan their trips, and they are therefore taken closely into account by tourism-related organisations when defining their marketing strategies. This document describes research into how the Google search engine works and the metrics it uses to evaluate websites and web pages. This research resulted in the implementation of a content website for the travel and tourism market in Portugal, focused on the foreign tourism market: All About Portugal. The implementation of the website aims to prove, based on SEO guidelines, that content propagation relying solely on search engines is viable, thereby confirming their importance. The usage data of this website introduce new elements that may serve as a basis for further studies.
Abstract:
A mashup of functionalities built around an intelligent search engine, in this case aimed at courses, degrees, master's programmes, etc. The goal is to combine several applications for a single purpose, which in this case is a search engine, while also providing connectivity tools through web services and social networks.
Abstract:
Construction of an automated news search service. The search is performed over the RSS feeds to which the user subscribes.
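A minimal sketch of such a service, assuming the feedparser library and an illustrative subscription list: each subscribed RSS channel is fetched and its entries are filtered by a search term.

```python
# Fetch the user's subscribed RSS feeds and keep entries matching a term.
# The feed URL is illustrative; a real service would load the user's list.
import feedparser

SUBSCRIPTIONS = ["https://example.com/news/rss"]  # user's subscribed channels

def search_feeds(urls, term):
    """Return (feed title, entry title, link) for entries matching the term."""
    term = term.lower()
    hits = []
    for url in urls:
        feed = feedparser.parse(url)
        for entry in feed.entries:
            text = f"{entry.get('title', '')} {entry.get('summary', '')}"
            if term in text.lower():
                hits.append((feed.feed.get("title", url), entry.title, entry.link))
    return hits

for feed_title, title, link in search_feeds(SUBSCRIPTIONS, "elections"):
    print(feed_title, "|", title, "|", link)
```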
Abstract:
This work presents a tourism ontology that links points of tourist interest to digital documents on the Internet. The project also includes a prototype search engine that demonstrates how the ontology works.
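A toy sketch of the kind of ontology described, using rdflib: a point of tourist interest is linked to a digital document, and a prototype SPARQL query retrieves the documents for a given point. All names and URIs are illustrative assumptions, not the project's actual vocabulary.

```python
# Toy tourism ontology: a point of interest linked to a document on the Web,
# plus a prototype query. Names and URIs are illustrative stand-ins.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

TUR = Namespace("http://example.org/tourism#")
g = Graph()
g.bind("tur", TUR)

poi = TUR.SagradaFamilia
doc = URIRef("http://example.org/docs/sagrada-familia-guide")

g.add((poi, RDF.type, TUR.PointOfInterest))
g.add((poi, RDFS.label, Literal("Sagrada Familia")))
g.add((poi, TUR.describedBy, doc))  # link POI -> digital document

# Prototype search: find the documents describing a given point of interest.
results = g.query("""
    PREFIX tur: <http://example.org/tourism#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?doc WHERE {
        ?poi rdfs:label "Sagrada Familia" ;
             tur:describedBy ?doc .
    }
""")
for row in results:
    print(row.doc)
```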