991 results for Web-searching


Relevance:

30.00%

Publisher:

Abstract:

This handout describes how to use EndNote Web v2.7. It focuses on the biomedical area and covers linking to PubMed, Web of Knowledge and other bibliographic providers (OVID and EBSCO), as well as searching for book information. The notes also explain how to use EndNote Web with Word 2003 and Word 2007.

Relevance:

30.00%

Publisher:

Abstract:

Efficiently yet accurately clustering Web documents is of great interest to Web users and is a key component of the search accuracy of a Web search engine. To achieve this, this paper introduces a new approach to the clustering of Web documents, called the maximal frequent itemset (MFI) approach. Iterative clustering algorithms, such as K-means and expectation-maximization (EM), are sensitive to their initial conditions. The MFI approach first locates the center points of high-density clusters precisely; these center points are then used as the initial points for the K-means algorithm. Experimental results on three Web document sets show that the MFI approach outperforms the other methods compared in most cases, particularly when the Web document sets contain a large number of categories.
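The abstract does not give the algorithm's details, but the overall idea (mine frequent term sets, use the documents supporting each set to place initial centers, then run K-means) can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes scikit-learn, replaces a true maximal-frequent-itemset miner with a simple frequent term-pair count, and all document data are invented.

```python
# Hedged sketch: seeding K-means with centroids derived from the documents
# that support a frequent term set (a simplified stand-in for MFI mining).
from itertools import combinations
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "web search engine query ranking",
    "web search query expansion",
    "kmeans clustering of documents",
    "frequent itemsets for document clustering",
    "clustering documents with frequent itemsets",
]
k = 2

vec = TfidfVectorizer()
X = vec.fit_transform(docs).toarray()

# Simplified "frequent itemset" mining: support counts of co-occurring term pairs.
tokenized = [set(d.split()) for d in docs]
pair_support = {}
for terms in tokenized:
    for pair in combinations(sorted(terms), 2):
        pair_support[pair] = pair_support.get(pair, 0) + 1

# The supporting documents of the most frequent (distinct) itemsets define the
# initial cluster centers, which are then handed to K-means.
seen, centers = set(), []
for pair in sorted(pair_support, key=pair_support.get, reverse=True):
    members = frozenset(i for i, t in enumerate(tokenized) if set(pair) <= t)
    if members in seen:
        continue
    seen.add(members)
    centers.append(X[sorted(members)].mean(axis=0))
    if len(centers) == k:
        break

km = KMeans(n_clusters=k, init=np.array(centers), n_init=1).fit(X)
print(dict(zip(docs, km.labels_)))
```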

Relevance:

30.00%

Publisher:

Abstract:

The World Wide Web has had an impact on many areas of teaching and learning. Mathematics teaching, however, has only recently begun to utilise and develop this educational resource. This paper outlines a research program which aims to uncover the extent to which the Internet, and in particular the World Wide Web, is being used for high school mathematics education. The program includes searching out discernible Web-based teaching strategies and examining their impact on attitudes towards, and achievement in, mathematics teaching and learning. Of particular interest is the extent to which deployment of the Web in mathematics teaching might increase student interest in mathematics. The first step in this process is to develop a preliminary typology of mathematical elements on the Web. The nature of these elements, their categorisation and their possible roles in the teaching and learning of mathematics are discussed.

Relevance:

30.00%

Publisher:

Abstract:

The ubiquity of the Internet and the Web has led to the emergence of several Web search engines with varying capabilities. A weakness of existing search engines is the very large number of hits they can produce. Moreover, only a small number of Web users actually know how to exploit the true power of Web search engines. There is therefore a need for a search infrastructure that helps ease and guide users' searching efforts toward their desired objectives. In this paper, we propose a context-based meta-search engine and discuss its implementation on top of the actual Google.com search engine. The proposed meta-search engine benefits the user most when the user does not know exactly which document he or she is looking for. A comparison of the context-based meta-search engine with both Google and Guided Google shows that the results returned by the context-based meta-search engine are much more intuitive and accurate than those returned by either Google or Guided Google.
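The paper's system is built on Google.com and is not reproduced here; the sketch below only illustrates the general meta-search pattern under the assumption that context terms are appended to the user query, the variants are sent to an underlying engine, and the rankings are fused. The toy corpus, the base_search stand-in and the reciprocal-rank fusion are illustrative choices, not the paper's method.

```python
# Hedged sketch of a context-based meta-search layer over a toy engine.
CORPUS = {
    "jaguar-car": "jaguar luxury car dealership engine models",
    "jaguar-cat": "jaguar big cat animal habitat rainforest predator",
    "jaguar-os": "jaguar mac os x operating system release",
}

def base_search(query):
    """Toy stand-in for the wrapped engine: rank pages by matching terms."""
    terms = set(query.lower().split())
    scored = {doc: len(terms & set(text.split())) for doc, text in CORPUS.items()}
    return [d for d, s in sorted(scored.items(), key=lambda kv: -kv[1]) if s > 0]

def meta_search(query, context_terms):
    """Issue one context-augmented query per context term and merge the ranks."""
    merged = {}
    for ctx in context_terms:
        for rank, doc in enumerate(base_search(f"{query} {ctx}")):
            # Reciprocal-rank fusion rewards documents ranked well in any variant.
            merged[doc] = merged.get(doc, 0.0) + 1.0 / (rank + 1)
    return sorted(merged, key=merged.get, reverse=True)

print(meta_search("jaguar", ["animal", "habitat"]))
```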

Relevance:

30.00%

Publisher:

Abstract:

This poster presents research-in-progress into the educational affordances of so-called Web 2.0 sites and services, with a particular emphasis on those applications that involve forms of shared human-machine cognition and that promote public knowledge networking. The research involves reviewing many hundreds of Web 2.0 tools and selecting approximately 50 for further analysis and exploration as learning applications. In doing so, the research will generate examples of unusual affordances provided by Web 2.0; it will also present a more structured categorisation of the kinds of uses and benefits of these tools. This approach is valuable because much current research and analysis of the impact of Web 2.0 on education, particularly higher education, has emphasised a relatively limited array of tools – principally blogs, wikis and social networking services – that offer educators and students opportunities for student-led collaborative work. Such opportunities place a strong emphasis on constructivist pedagogy: students’ interactions with each other, mediated via the Internet, are viewed as the positive benefit which networked learning can provide. However, Web 2.0 is far more than just collaboration and the associated shared self-expression. In particular, Web 2.0 includes many examples of services that take one form of input from a user and, rather than just sharing it with others, enable the transformation of that input into different forms, whether as visualisations, maps or other re-representations. Web 2.0 is also starting to see the development of knowledge-work engines that embody the concept of shared cognition, in which the service and the user cooperate in the production of some final knowledge output, or which present to users knowledge that has already been processed more extensively than through simple searching. Web 2.0 is also closely associated with the idea that knowledge work is now networked and distributed; it involves users appropriating, creating and sharing knowledge products in a very public way, far beyond the narrow ‘audience’ of a particular course or program of study. The research presented in this poster will provide, firstly, examples of the Web 2.0 tools which emphasise these additional ways of exploiting the Internet for networked learning; secondly, it will provide a first iteration of the overarching structure of categories and classifications which can be used to assess any proposed Web 2.0 application in terms of its affordances for learning as knowledge networking. By understanding these technologies, truly collaborative networked learning can be developed that blends with the emerging cultures of online behaviour increasingly common to contemporary student populations.

Relevance:

30.00%

Publisher:

Abstract:

A web operating system is an operating system that users can access from any hardware at any location. A peer-to-peer (P2P) grid uses P2P communication for resource management and for communication between nodes in a grid, and manages resources locally in each cluster; this provides a suitable architecture for a web operating system. The use of semantic technology in web operating systems is an emerging field that improves the management and discovery of resources and services. In this paper, we propose PGSW-OS (P2P grid semantic Web OS), a model based on a P2P grid architecture and semantic technology that improves resource management in a web operating system through resource discovery aided by semantic features. Our approach integrates distributed hash tables (DHTs) and semantic overlay networks to enable semantic-based resource management: resources are advertised in the DHT according to their annotations, which enables semantic-based resource matchmaking. Our model includes ontologies and virtual organizations, and our technique decreases the computational complexity of searching in a web operating system environment. We perform a simulation study using the GridSim simulator, and our experiments show that our model provides enhanced utilization of resources, better search expressiveness, scalability, and precision. © 2014 Springer Science+Business Media New York.
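PGSW-OS itself is not shown in the abstract; the toy sketch below only illustrates the advertised mechanism, namely publishing resources in a DHT under keys derived from their semantic annotations so that matchmaking reduces to a lookup. The ToyDHT class, the ontology terms and the resource records are hypothetical.

```python
# Hedged toy sketch of semantic resource advertisement in a DHT (not PGSW-OS code).
import hashlib

class ToyDHT:
    """In-memory stand-in for a distributed hash table."""
    def __init__(self):
        self.buckets = {}

    @staticmethod
    def key(annotation):
        return hashlib.sha1(annotation.encode()).hexdigest()

    def put(self, annotation, resource):
        self.buckets.setdefault(self.key(annotation), []).append(resource)

    def get(self, annotation):
        return self.buckets.get(self.key(annotation), [])

dht = ToyDHT()
# Each node advertises its resources under the ontology terms annotating them.
dht.put("ontology:ComputeNode", {"node": "cluster-a/n1", "cores": 16})
dht.put("ontology:StorageNode", {"node": "cluster-b/n3", "tb": 4})
dht.put("ontology:ComputeNode", {"node": "cluster-b/n7", "cores": 32})

# Semantic matchmaking then reduces to a DHT lookup on the annotation key.
print(dht.get("ontology:ComputeNode"))
```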

Relevance:

30.00%

Publisher:

Abstract:

Postgraduate Program in Information Science - FFC

Relevance:

30.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

30.00%

Publisher:

Abstract:

In this thesis, the author presents a query language for an RDF (Resource Description Framework) database and discusses its applications in the context of the HELM project (the Hypertextual Electronic Library of Mathematics). This language aims at meeting the main requirements coming from the RDF community. In particular, it includes: a human-readable textual syntax and a machine-processable XML (Extensible Markup Language) syntax, both for queries and for query results; a rigorously defined formal semantics; a graph-oriented RDF data access model capable of exploring an entire RDF graph (including both RDF Models and RDF Schemata); a full set of Boolean operators for composing query constraints; fully customizable and highly structured query results with a four-dimensional geometry; and some constructs taken from ordinary programming languages that simplify the formulation of complex queries. The HELM project aims at integrating modern tools for the automation of formal reasoning with the most recent electronic publishing technologies, in order to create and maintain a hypertextual, distributed virtual library of formal mathematical knowledge. In the spirit of the Semantic Web, the documents of this library include RDF metadata describing their structure and content in a machine-understandable form. Using the author's query engine, HELM exploits this information to implement functionalities that allow the interactive and automatic retrieval of documents on the basis of content-aware requests that take into account the mathematical nature of these documents.
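The thesis defines its own query language, whose syntax is not given in the abstract. As a rough illustration of the kind of content-aware retrieval that RDF metadata enables, the sketch below uses rdflib and SPARQL instead; the helm.example.org namespace, properties and document URIs are invented for the example.

```python
# Hedged illustration only: querying toy RDF metadata about mathematical
# documents with rdflib/SPARQL (not the thesis's query language).
from rdflib import Graph, Literal, Namespace, URIRef

HELM = Namespace("http://helm.example.org/schema#")  # hypothetical namespace
g = Graph()

doc = URIRef("http://helm.example.org/theories/nat_plus_comm")
g.add((doc, HELM.kind, Literal("Theorem")))
g.add((doc, HELM.uses, URIRef("http://helm.example.org/theories/nat_plus")))

# Retrieve every theorem that depends on a given definition.
results = g.query(
    """
    PREFIX helm: <http://helm.example.org/schema#>
    SELECT ?doc WHERE {
        ?doc helm:kind "Theorem" ;
             helm:uses <http://helm.example.org/theories/nat_plus> .
    }
    """
)
for row in results:
    print(row.doc)
```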

Relevance:

30.00%

Publisher:

Abstract:

A social Semantic Web empowers its users to access collective Web knowledge in a simple manner; for that reason, controlling online privacy and reputation becomes increasingly important and must be taken seriously. This chapter presents Fuzzy Cognitive Maps (FCM) as a vehicle for Web knowledge aggregation, representation, and reasoning. With this in mind, a conceptual framework for Web knowledge aggregation, representation, and reasoning is introduced along with a use case in which the importance of investigative searching for online privacy and reputation is highlighted. It thereby demonstrates how a user can establish a positive online presence.
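The chapter's concrete maps are not reproduced in the abstract; as a hedged illustration of how FCM-based reasoning works in general, the sketch below iterates a small map until the concept activations settle. The concept names, weights and sigmoid update rule are generic FCM machinery, not the chapter's model.

```python
# Hedged sketch of generic Fuzzy Cognitive Map reasoning.
import numpy as np

concepts = ["shared personal data", "searchability", "online reputation risk"]
# Hypothetical causal weights W[i, j]: influence of concept i on concept j.
W = np.array([
    [0.0, 0.7, 0.6],
    [0.0, 0.0, 0.5],
    [0.0, 0.0, 0.0],
])

def step(state, weights, lam=1.0):
    """One FCM update: propagate activations, keep self-memory, squash with a sigmoid."""
    return 1.0 / (1.0 + np.exp(-lam * (state + state @ weights)))

state = np.array([0.9, 0.0, 0.0])  # high initial data exposure
for _ in range(20):
    state = step(state, W)
print(dict(zip(concepts, state.round(2))))
```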

Relevance:

30.00%

Publisher:

Abstract:

Complementary and alternative medicine (CAM) use is growing rapidly. As CAM is relatively unregulated, it is important to evaluate the type and availability of CAM information. The goal of this study is to determine the prevalence, content and readability of online CAM information based on searches for arthritis, diabetes and fibromyalgia using four common search engines. Fifty-eight of 599 web pages retrieved by a "condition search" (9.6%) were CAM-oriented. Of 216 CAM pages found by the "condition" and "condition + herbs" searches, 78% were authored by commercial organizations, whose purpose involved commerce 69% of the time, and 52.3% had no references. Although 98% of the CAM information was intended for consumers, the mean readability was at grade level 11. We conclude that consumers searching the web for health information are likely to encounter consumer-oriented CAM advertising, which is difficult to read and is not supported by the conventional literature.
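The abstract does not state which readability measure was used; as a hedged illustration only, the sketch below computes the Flesch-Kincaid grade level (one common choice) with a rough syllable heuristic, on an invented sample sentence.

```python
# Hedged sketch: Flesch-Kincaid grade level with a crude vowel-group syllable count.
import re

def syllables(word):
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syl = sum(syllables(w) for w in words)
    # 0.39 * (words/sentence) + 11.8 * (syllables/word) - 15.59
    return 0.39 * len(words) / sentences + 11.8 * syl / len(words) - 15.59

sample = ("Glucosamine supplementation may modulate cartilage metabolism, "
          "although randomized evidence regarding symptomatic improvement "
          "remains inconclusive.")
print(round(fk_grade(sample), 1))
```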

Relevance:

30.00%

Publisher:

Abstract:

At the University of Connecticut, we have been enticing graduate students to join graduate student trainers to learn how to answer the following questions and improve the breadth of their research: Do you need to find articles published outside your primary discipline? What are some seminal articles in your field? Have you ever wanted to know who cited an article you wrote? We are participating in Elsevier's Student Ambassador Program (SAmP) in which graduate students train their peers on "citation searching" research using Scopus and Web of Science, two tremendous citation databases. We are in the fourth semester of these training programs, and they are wildly successful: We have offered more than 30 classes and taught more than 350 students from March 2007 through March 2008.

Relevance:

30.00%

Publisher:

Abstract:

Biomedical researchers and clinicians working with molecular technologies in routine clinical practice often need to review the available literature to gather information regarding specific sequences of nucleic acids. This includes, for instance, finding articles related to a concrete DNA sequence, or identifying empirically validated primer/probe sequences to evaluate the presence of different micro-organisms. Unfortunately, these hard and time-consuming tasks often have to be performed manually by the researchers themselves, since no publicly available biomedical literature search engine (e.g. PubMed, PubMed Central (PMC), etc.) provides the required search functionality. In this article, we describe PubDNA Finder, a web service that enables users to perform advanced searches over PubMed Central-indexed full-text articles using sequences of nucleic acids.
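PubDNA Finder's own interface is not described in the abstract and is not reproduced here. As a point of comparison, the sketch below issues a plain literal-phrase query against PubMed Central through the public NCBI E-utilities, the kind of baseline search whose limitations motivate the service; the sequence string is invented.

```python
# Hedged sketch: literal-phrase search of PMC via NCBI E-utilities (esearch).
import json
import urllib.parse
import urllib.request

sequence = "ACTGGGTCAGGATT"  # hypothetical primer sequence
params = urllib.parse.urlencode({
    "db": "pmc",
    "term": f'"{sequence}"',
    "retmode": "json",
})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params
with urllib.request.urlopen(url) as resp:
    hits = json.load(resp)["esearchresult"]
print(hits.get("count"), "PMC records match the literal sequence string")
```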

Relevance:

30.00%

Publisher:

Abstract:

This PhD thesis contributes to the problem of resource and service discovery in the context of the composable web. In the current web, mashup technologies allow developers to reuse services and contents to build new web applications. However, developers face a problem of information flood when searching for appropriate services or resources for their combinations. To contribute to overcoming this problem, a framework is defined for the discovery of services and resources, in which discovery is performed at three levels: content, service and agent.

The content level involves the information available in web resources. The web follows the Representational State Transfer (REST) architectural style, in which resources are returned as representations from servers to clients. These representations usually employ the HyperText Markup Language (HTML), which, along with Cascading Style Sheets (CSS), describes the markup employed to render representations in a web browser. Although the use of Semantic Web standards such as the Resource Description Framework (RDF) makes this architecture suitable for automatic processes to use the information present in web resources, these standards are too often not employed, so automation must rely on processing HTML. This process, often referred to as screen scraping in the literature, constitutes content discovery in the proposed framework. At this level, discovery rules indicate how the different pieces of data in resources' representations are mapped onto semantic entities. By processing discovery rules on web resources, semantically described contents can be obtained from them.

The service level involves the operations that can be performed on the web. The current web allows users to perform different tasks such as search, blogging, e-commerce, or social networking. To describe the possible services in RESTful architectures, a high-level, feature-oriented service methodology is proposed at this level. This lightweight description framework allows defining service discovery rules to identify operations in interactions with REST resources. Discovery is thus performed by applying discovery rules to contents discovered in REST interactions, in a novel process called service probing. Service discovery can also be performed by modelling services as contents, i.e., by retrieving Application Programming Interface (API) documentation and API listings in service registries such as ProgrammableWeb. For this, a unified model for composable components in Mashup-Driven Development (MDD) has been defined after the analysis of service repositories from the web.

The agent level involves the orchestration of the discovery of services and contents. At this level, agent rules allow behaviours for crawling and executing services to be specified, resulting in the fulfilment of high-level goals. Agent rules are plans that introspect the data and services discovered from the web, together with the knowledge present in service and content discovery rules, to anticipate the contents and services to be found on specific web resources. By defining plans, an agent can be configured to target specific resources.

The discovery framework has been evaluated on different scenarios, each one covering different levels of the framework. The Contenidos a la Carta project deals with the mashing up of news from electronic newspapers, and the framework was used for the discovery and extraction of pieces of news from the web. Similarly, the Resulta and VulneraNET projects cover the discovery of ideas and of security knowledge on the web, respectively. The service level is covered in the OMELETTE project, where mashup components such as services and widgets are discovered in component repositories on the web. The agent level is applied to the crawling of services and news in these scenarios, highlighting how the semantic description of rules and extracted data can support complex behaviours and orchestrations of tasks on the web.

The main contributions of the thesis are the unified discovery framework, which allows agents to be configured to perform automated tasks; a scraping ontology defined for the construction of mappings for scraping web resources; a novel first-order logic rule induction algorithm for the automated construction and maintenance of these mappings from the visual information in web resources; and a common unified model for the discovery of services, which allows service descriptions to be shared. Future work comprises the further extension of service probing, resource ranking, the extension of the scraping ontology, extensions of the agent model, and the construction of a base of discovery rules.
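The thesis's rule language and scraping ontology are not given in the abstract; the toy sketch below only illustrates the content-discovery idea of mapping selectors in an HTML representation onto semantic properties to yield triples. It assumes BeautifulSoup, and the news: vocabulary, selectors and resource URI are hypothetical.

```python
# Hedged toy sketch of content discovery: a rule maps CSS selectors in a
# resource's HTML representation onto semantic properties, producing triples.
from bs4 import BeautifulSoup

html = """
<article>
  <h1 class="headline">Budget approved</h1>
  <span class="byline">A. Reporter</span>
  <time datetime="2011-05-02">May 2, 2011</time>
</article>
"""

# A discovery rule: property name -> CSS selector (hypothetical vocabulary).
rule = {
    "news:headline": "h1.headline",
    "news:author": "span.byline",
    "news:published": "time",
}

resource = "http://example.org/news/42"  # hypothetical resource URI
soup = BeautifulSoup(html, "html.parser")
triples = []
for prop, selector in rule.items():
    node = soup.select_one(selector)
    if node is not None:
        triples.append((resource, prop, node.get_text(strip=True)))

for t in triples:
    print(t)
```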