959 results for Search Engine
Abstract:
Durbin, J. & Urquhart, C. (2003). Qualitative evaluation of KA24 (Knowledge Access 24). Aberystwyth: Department of Information Studies, University of Wales Aberystwyth. Sponsorship: Knowledge Access 24 (NHS)
Study of oral hygiene habits in children at the Escola do 1º ciclo com Jardim de Infância de Sousel
Abstract:
Monograph presented to Universidade Fernando Pessoa to obtain the Licentiate degree in Dental Medicine
Abstract:
Postgraduate Project/Dissertation presented to Universidade Fernando Pessoa as part of the requirements for the degree of Master in Dental Medicine
Abstract:
Postgraduate Project/Dissertation presented to Universidade Fernando Pessoa as part of the requirements for the degree of Master in Dental Medicine
Abstract:
Attributing a dollar value to a keyword is an essential part of running any profitable search engine advertising campaign. When an advertiser has complete control over the interaction with and monetization of each user arriving on a given keyword, the value of that term can be accurately tracked. However, in many instances, the advertiser may monetize arrivals indirectly through one or more third parties. In such cases, it is typical for the third party to provide only coarse-grained reporting: rather than report each monetization event, users are aggregated into larger channels and the third party reports aggregate information such as total daily revenue for each channel. Examples of third parties that use channels include Amazon and Google AdSense. In such scenarios, the number of channels is generally much smaller than the number of keywords whose value per click (VPC) we wish to learn. However, the advertiser has flexibility as to how to assign keywords to channels over time. We introduce the channelization problem: how do we adaptively assign keywords to channels over the course of multiple days to quickly obtain accurate VPC estimates of all keywords? We relate this problem to classical results in weighing design, devise new adaptive algorithms for this problem, and quantify the performance of these algorithms experimentally. Our results demonstrate that adaptive weighing designs that exploit statistics of term frequency, variability in VPCs across keywords, and flexible channel assignments over time provide the best estimators of keyword VPCs.
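A minimal sketch (not the paper's algorithm) of the aggregation model this abstract describes: if keyword k receives c_k clicks on a given day and has unknown value per click v_k, then a channel's reported daily revenue is the sum of c_k · v_k over the keywords assigned to it, so with enough day-by-channel assignments the v_k can be recovered by least squares. The keyword names, click counts and VPC values below are hypothetical.

```python
import numpy as np

# Hypothetical setup: 4 keywords observed over 3 days, 2 channels per day.
keywords = ["shoes", "boots", "sandals", "laces"]
true_vpc = np.array([0.40, 0.65, 0.30, 0.10])      # unknown in practice
clicks = np.array([[120, 80, 60, 40],              # clicks per keyword, per day
                   [100, 90, 70, 30],
                   [110, 85, 65, 50]])
# assignments[d][ch] lists the keywords routed to channel ch on day d.
assignments = [[["shoes", "boots"], ["sandals", "laces"]],
               [["shoes", "sandals"], ["boots", "laces"]],
               [["shoes", "laces"], ["boots", "sandals"]]]

# Build the design matrix: one row per (day, channel) aggregate revenue report.
idx = {k: i for i, k in enumerate(keywords)}
rows, revenue = [], []
for d, day in enumerate(assignments):
    for channel in day:
        row = np.zeros(len(keywords))
        for k in channel:
            row[idx[k]] = clicks[d, idx[k]]         # clicks act as weights
        rows.append(row)
        revenue.append(row @ true_vpc)              # the third party reports only this total
A, y = np.vstack(rows), np.array(revenue)

# Recover per-keyword VPC estimates from the aggregate channel reports.
vpc_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print(dict(zip(keywords, np.round(vpc_hat, 3))))
```

In this noise-free toy case the estimates match the true values exactly; the paper's contribution concerns choosing the assignments adaptively so that real, noisy reports yield accurate estimates quickly.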
Abstract:
Some WWW image engines allow the user to form a query in terms of text keywords. To build the image index, keywords are extracted heuristically from HTML documents containing each image, and/or from the image URL and file headers. Unfortunately, text-based image engines have merely retro-fitted standard SQL database query methods, and it is difficult to include images cues within such a framework. On the other hand, visual statistics (e.g., color histograms) are often insufficient for helping users find desired images in a vast WWW index. By truly unifying textual and visual statistics, one would expect to get better results than either used separately. In this paper, we propose an approach that allows the combination of visual statistics with textual statistics in the vector space representation commonly used in query by image content systems. Text statistics are captured in vector form using latent semantic indexing (LSI). The LSI index for an HTML document is then associated with each of the images contained therein. Visual statistics (e.g., color, orientedness) are also computed for each image. The LSI and visual statistic vectors are then combined into a single index vector that can be used for content-based search of the resulting image database. By using an integrated approach, we are able to take advantage of possible statistical couplings between the topic of the document (latent semantic content) and the contents of images (visual statistics). This allows improved performance in conducting content-based search. This approach has been implemented in a WWW image search engine prototype.
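A minimal, hypothetical sketch of the combination step this abstract describes: an LSI vector computed for the host HTML document is concatenated with a per-image visual-statistics vector (for example, a colour histogram), and the joint vector is used for cosine-similarity search. The function names, dimensions and weighting parameter are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def lsi_vector(term_counts, U_k, S_k):
    """Fold a document's term-frequency vector into a k-dimensional LSI space.
    U_k (terms x k) and S_k (k,) come from a truncated SVD of the term-document matrix."""
    return (term_counts @ U_k) / S_k

def index_vector(lsi_vec, visual_vec, alpha=0.5):
    """Concatenate normalised textual and visual statistics into one index vector.
    alpha balances the two modalities (an illustrative choice)."""
    t = lsi_vec / (np.linalg.norm(lsi_vec) + 1e-9)
    v = visual_vec / (np.linalg.norm(visual_vec) + 1e-9)
    return np.concatenate([alpha * t, (1 - alpha) * v])

def search(query_vec, index, top_n=5):
    """Rank indexed images by cosine similarity to a combined query vector."""
    sims = index @ query_vec / (np.linalg.norm(index, axis=1) * np.linalg.norm(query_vec) + 1e-9)
    return np.argsort(-sims)[:top_n]
```

The point of the concatenation is that a query can then exploit correlations between a page's latent topic and the visual statistics of the images it contains, rather than searching either modality in isolation.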
Abstract:
BACKGROUND: In recent years large bibliographic databases have made much of the published literature of biology available for searches. However, the capabilities of the search engines integrated into these databases for text-based bibliographic searches are limited. To enable searches that deliver the results expected by comparative anatomists, an underlying logical structure known as an ontology is required. DEVELOPMENT AND TESTING OF THE ONTOLOGY: Here we present the Mammalian Feeding Muscle Ontology (MFMO), a multi-species ontology focused on anatomical structures that participate in feeding and other oral/pharyngeal behaviors. A unique feature of the MFMO is that a simple, computable, definition of each muscle, which includes its attachments and innervation, is true across mammals. This construction mirrors the logical foundation of comparative anatomy and permits searches using language familiar to biologists. Further, it provides a template for muscles that will be useful in extending any anatomy ontology. The MFMO is developed to support the Feeding Experiments End-User Database Project (FEED, https://feedexp.org/), a publicly-available, online repository for physiological data collected from in vivo studies of feeding (e.g., mastication, biting, swallowing) in mammals. Currently the MFMO is integrated into FEED and also into two literature-specific implementations of Textpresso, a text-mining system that facilitates powerful searches of a corpus of scientific publications. We evaluate the MFMO by asking questions that test the ability of the ontology to return appropriate answers (competency questions). We compare the results of queries of the MFMO to results from similar searches in PubMed and Google Scholar. RESULTS AND SIGNIFICANCE: Our tests demonstrate that the MFMO is competent to answer queries formed in the common language of comparative anatomy, but PubMed and Google Scholar are not. Overall, our results show that by incorporating anatomical ontologies into searches, an expanded and anatomically comprehensive set of results can be obtained. The broader scientific and publishing communities should consider taking up the challenge of semantically enabled search capabilities.
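An illustrative sketch (not the MFMO's actual encoding) of what a "computable definition" of a muscle by its attachments and innervation makes possible: once each muscle record carries those properties, a competency question such as "which feeding muscles insert on the mandible and are innervated by the trigeminal nerve?" becomes a simple structured query rather than a free-text search. The anatomy below is standard, but the data structure and records are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Muscle:
    name: str
    origin: str        # attachment site (origin)
    insertion: str     # attachment site (insertion)
    innervation: str   # nerve supplying the muscle

# Tiny hypothetical knowledge base of feeding muscles.
muscles = [
    Muscle("masseter", "zygomatic arch", "mandible", "trigeminal nerve (V3)"),
    Muscle("temporalis", "temporal fossa", "mandible", "trigeminal nerve (V3)"),
    Muscle("genioglossus", "mandible", "tongue", "hypoglossal nerve (XII)"),
]

# Competency question: muscles that insert on the mandible and are
# innervated by the trigeminal nerve.
answer = [m.name for m in muscles
          if m.insertion == "mandible" and "trigeminal" in m.innervation]
print(answer)   # ['masseter', 'temporalis']
```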
Abstract:
Context: The development of a consolidated knowledge base for social work requires rigorous approaches to identifying relevant research. Method: The quality of 10 databases and a web search engine was appraised by systematically searching for research articles on resilience and burnout in child protection social workers. Results: Applied Social Sciences Index and Abstracts, Social Services Abstracts and Social Sciences Citation Index (SSCI) had the greatest sensitivity, each retrieving more than double the number retrieved by any other database. PsycINFO and Cumulative Index to Nursing and Allied Health (CINAHL) had the highest precision. Google Scholar had modest sensitivity and good precision in relation to the first 100 items. SSCI, Google Scholar, Medline, and CINAHL retrieved the highest numbers of hits not retrieved by any other database. Conclusion: A range of databases is required for even modestly comprehensive searching. Advanced database searching methods are being developed, but the profession requires greater standardization of terminology to assist in information retrieval.
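For reference, the two retrieval measures the study compares can be computed directly from the counts involved; a minimal sketch with made-up figures (not the study's data) is shown below.

```python
# Sensitivity (recall) and precision of a bibliographic database search.
# Hypothetical counts, not the study's data.
relevant_in_topic = 120        # relevant articles known to exist on the topic
retrieved = 300                # records returned by the database
relevant_retrieved = 90        # retrieved records that are actually relevant

sensitivity = relevant_retrieved / relevant_in_topic   # share of relevant items found
precision = relevant_retrieved / retrieved             # share of retrieved items that are relevant
print(f"sensitivity = {sensitivity:.2f}, precision = {precision:.2f}")
```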
Abstract:
E-poltergeist takes over the user’s internet browser, automatically initiating Web searches without their permission. It is a Web-based artwork that explores issues of user control when confronted with complex technological systems, questioning the limits of digital interactive arts as consensual, reciprocal systems. e-poltergeist was a major web commission that marked an early stage of research in a larger enquiry by Craighead and Thomson into the relationship between live virtual data, global communications networks and instruction-based art, exploring how such systems can be re-contextualised within gallery environments. e-poltergeist presented the 'viewer' with a singular narrative by using live internet search-engine data that aimed to create a perpetual and virtually unstoppable cycle of search-engine results, banner ads and moving windows as an interruption to the normal use of an internet browser. The work also addressed the ‘de-personalisation’ of internet use by sending a series of messages drawn from the live search-engine data that seemed to address the user directly: 'Is anyone there?'; 'Can anyone hear me?'; 'Please help me!'; 'Nobody cares!' e-poltergeist makes a significant contribution to the taxonomy of new media art by dealing with the way that new media art can re-address existing traditions in art such as appropriation and manipulation, instruction-based art and conceptual art. e-poltergeist was commissioned ($12,000) for 010101: Art in Technological Times, a landmark international exhibition presented by the San Francisco Museum of Modern Art, which brought together leading international practitioners working with emergent technologies, including Tatsuo Miyajima, Janet Cardiff and Brian Eno. Peer recognition of the project in the form of reviews includes: Curating New Media (Gateshead: Baltic Centre for Contemporary Art; Cook, Sarah, Beryl Graham and Sarah Martin; ISBN: 1093655064); The Wire; http://www.wired.com/culture/lifestyle/news/2000/12/40464 (review by Reena Jana); Leonardo (review by Barbara Lee Williams and Sonya Rapoport), http://www.leonardo.info/reviews/feb2001/ex_010101_willrapop.html. All the work is developed jointly and equally between Craighead and her collaborator, Jon Thomson, Slade School of Fine Art.
Abstract:
Beacon is an artwork that uses Internet search-engine technology to make people’s online desires, interests and orientations visible, presenting random search-term enquiries in a variety of forms including a railway information sign, an art gallery installation and an online website, rendering visible online activity, curiosity and desire. The project sampled and analysed how ‘search terms’ were used by the public as live data. It then re-presented them on a website, in a gallery and latterly on a bespoke mechanical railway flap-sign, thus creating a snapshot of online enquiry at any given time. Beacon’s originality lies in the manner in which it has taken abstract digital data and found different expressions for it. The work thus extends debates in media arts that focus on purely virtual and online expressions of data by developing online information into new non-digital material forms and contexts, such as railway signs. This research was developed over a three-year period, initially with software only and then, on receipt of an AHRC small grant (£5,000), with the lauded Italian manufacturer Solari of Udine, Italy, and BFI Southbank. It represents the culmination of a body of research that asks whether live data can be used as material to make artworks. Beacon was specially developed for the Tate Britain programme 40 artists 40 days, produced in conjunction with the UK Olympic Games bid and intended to “create a unique countdown calendar that will focus attention on Britain’s exceptional creative talent”. The project is exhibited on the Tate website ‘Tate Online’, presently in perpetuity. The gallery version of this work is currently held in five private collections in the USA and is shown regularly in galleries around the world. The railway flap-sign is owned by BFI Southbank and will eventually be sited there permanently. All work is developed jointly and equally between Craighead and her collaborator, Jon Thomson (Slade).
Abstract:
Project work presented to the Instituto Superior de Contabilidade e Administração do Porto to obtain the degree of Master in Digital Marketing, under the supervision of António da Silva Vieira, MSc
Abstract:
A large share of online traffic originates from search engine results pages. Search engines are today a fundamental tool that tourists rely on to search for and filter the information needed to plan their trips, and they are therefore taken strongly into account by tourism-related entities when defining their marketing strategies. This document describes the research carried out into how the Google search engine works and the metrics it uses to evaluate websites and web pages. This research resulted in the implementation of a content website devoted to the travel and tourism market in Portugal, focused on the inbound tourism market – All About Portugal. The implementation of the website aims to show, drawing on SEO guidelines, that content propagation based solely on search engines is viable, thereby confirming their importance. The usage data from that website introduce new elements that may serve as a basis for further studies.
Abstract:
With recent technological developments, we have been witnessing a progressive loss of control over our personal information. Whether it is the speed with which it spreads over the internet or the permanent storage of information on cloud services, the means by which our personal information escapes our control are vast. Inevitably, this situation has allowed serious violations of personal rights. A need to reform European policy on the protection of personal information is emerging, in order to adapt to the technological era we live in. Granting individuals the ability to delete their personal information, mainly the information available on the Internet, is the best solution for those whose rights have been violated. However, once supposedly deleted from a website, the information is still shown in search engines. In this context, “the right to be forgotten on the internet” is invoked. Its implementation would give any person the possibility to delete their personal information and stop it from being spread through the internet in any way, especially through search-engine indexes. In this way we would have more comprehensive control over our personal information on two fronts: firstly, by allowing individuals to completely delete their information from any website and cloud service, and secondly by limiting search engines’ access to that information. It could thus be said that a new and catchier term has been found for an “old” right.
Abstract:
Short report on the Access 2005 conference held in Edmonton, Canada. A summary of the main ideas and trends presented.
Abstract:
Introduction: Coordination through CVHL/BVCS gives Canadian health libraries access to information technology they could not offer individually, thereby enhancing the library services offered to Canadian health professionals. An example is the portal being developed. Portal best practices are of increasing interest (usability.gov; Wikipedia portals; JISC subject portal project; Stanford clinical portals) but conclusive research is not yet available. This paper will identify best practices for a portal bringing together knowledge for Canadian health professionals supported through a network of libraries.
Description: The portal for Canadian health professionals will include capabilities such as:
• Authentication
• Question referral
• Specialist “branch libraries”
• Integration of commercial resources, web resources and health systems data
• Cross-resource search engine
• Infrastructure to enable links from EHR and decision support systems
• Knowledge translation tools, such as highlighting of best evidence
Best practices will be determined by studying the capabilities of existing portals, including consortia/networks and individual institutions, and through a literature review.
Outcomes: Best practices in portals will be reviewed. The collaboratively developed Virtual Library, currently the heart of cvhl.ca, is a unique database collecting high-quality, free web documents and sites relevant to Canadian health care. The evident strengths of the Virtual Library will be discussed in light of best practices.
Discussion: Identification of best practices will support cost-benefit analysis of options and provide direction for CVHL/BVCS. Open discussion with stakeholders (libraries and professionals) informed by this review will lead to adoption of the best technical solutions supporting Canadian health libraries and their users.