917 results for World Wide Web (Information Retrieval System)


Relevance:

100.00%

Publisher:

Abstract:

OBJECTIVE: To evaluate web-based information on bipolar disorder and to assess particular content quality indicators. METHODS: Two keywords, "bipolar disorder" and "manic depressive illness", were entered into popular World Wide Web search engines. Websites were assessed with a standardized proforma designed to rate sites on the basis of accountability, presentation, interactivity, readability and content quality. The "Health on the Net" (HON) quality label and DISCERN scale scores were used to verify their efficiency as quality indicators. RESULTS: Of the 80 websites identified, 34 were included. Based on the outcome measures, the content quality of the sites turned out to be good. The content quality of websites dealing with bipolar disorder is significantly explained by readability, accountability and interactivity, as well as by a global score. CONCLUSIONS: The overall content quality of the studied bipolar disorder websites is good.

The ongoing growth of the World Wide Web, catalyzed by the increasing possibility of ubiquitous access via a variety of devices, continues to strengthen its role as our prevalent information and communication medium. However, although tools like search engines facilitate retrieval, the task of finally making sense of Web content is still often left to human interpretation. The vision of supporting both humans and machines in such knowledge-based activities led to the development of different systems which allow Web resources to be structured by metadata annotations. Interestingly, two major approaches which gained a considerable amount of attention address the problem from nearly opposite directions: on the one hand, the idea of the Semantic Web suggests formalizing the knowledge within a particular domain by means of the "top-down" approach of defining ontologies. On the other hand, Social Annotation Systems, as part of the so-called Web 2.0 movement, implement a "bottom-up" style of categorization using arbitrary keywords. Experience as well as research into the characteristics of both systems has shown that their strengths and weaknesses seem to be inverse: while Social Annotation suffers from problems like ambiguity or lack of precision, ontologies were especially designed to eliminate those. Ontologies, in turn, suffer from a knowledge acquisition bottleneck, which is successfully overcome by the large user populations of Social Annotation Systems. Instead of being regarded as competing paradigms, the obvious potential synergies from a combination of both motivated approaches to "bridge the gap" between them. These were fostered by the evidence of emergent semantics, i.e., the self-organized evolution of implicit conceptual structures, within Social Annotation data.
While several techniques to exploit the emergent patterns have been proposed, a systematic analysis, especially regarding paradigms from the field of ontology learning, is still largely missing. This also includes a deeper understanding of the circumstances which affect the evolution processes. This work aims to address this gap by providing an in-depth study of methods and influencing factors to capture emergent semantics from Social Annotation Systems. We focus here on the acquisition of lexical semantics from the underlying networks of keywords, users and resources. Structured along different ontology learning tasks, we use a methodology of semantic grounding to characterize and evaluate the semantic relations captured by different methods. In all cases, our studies are based on datasets from several Social Annotation Systems. Specifically, we first analyze semantic relatedness among keywords and identify measures which detect different notions of relatedness. These constitute the input of concept learning algorithms, which then focus on the discovery of synonymous and ambiguous keywords. Here we assess the usefulness of various clustering techniques. As a prerequisite to inducing hierarchical relationships, our next step is to study measures which quantify the level of generality of a particular keyword. We find that comparatively simple measures can approximate the generality information encoded in reference taxonomies. These insights inform the final task, namely the creation of concept hierarchies, for which generality-based algorithms exhibit advantages over clustering approaches. To complement the identification of suitable methods to capture semantic structures, we then analyze several factors which influence their emergence. Empirical evidence is provided that the amount of available data plays a crucial role in determining keyword meanings.
From a different perspective, we examine pragmatic aspects by considering different annotation patterns among users. Based on a broad distinction between "categorizers" and "describers", we find that the latter produce more accurate results. This suggests a causal link between pragmatic and semantic aspects of keyword annotation. As a special kind of usage pattern, we then look at system abuse and spam. While observing a mixed picture, we suggest that an individual decision should be taken instead of disregarding spammers as a matter of principle. Finally, we discuss a set of applications which operationalize the results of our studies to enhance both Social Annotation and semantic systems. These comprise, on the one hand, tools which foster the emergence of semantics, and, on the other hand, applications which exploit the socially induced relations to improve, e.g., searching, browsing, or user profiling facilities. In summary, the contributions of this work highlight viable methods and crucial aspects for designing enhanced knowledge-based services for a Social Semantic Web.
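A minimal sketch of one common family of relatedness measures mentioned above: representing each keyword by its co-occurrence context in a folksonomy and comparing contexts with cosine similarity. The toy posts below are invented for illustration and are not from the studied datasets.

```python
from collections import Counter
from math import sqrt

# Hypothetical toy folksonomy: each post is the set of tags one user
# assigned to one resource.
posts = [
    {"python", "programming", "tutorial"},
    {"python", "programming", "web"},
    {"web", "html", "design"},
    {"html", "design", "css"},
    {"python", "web", "framework"},
]

def cooccurrence_vector(tag, posts):
    """Context vector: how often `tag` co-occurs with every other tag."""
    vec = Counter()
    for post in posts:
        if tag in post:
            for other in post:
                if other != tag:
                    vec[other] += 1
    return vec

def cosine(u, v):
    """Cosine similarity between two sparse co-occurrence vectors."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

# Tags sharing many co-occurring neighbours score as semantically related.
sim = cosine(cooccurrence_vector("python", posts),
             cooccurrence_vector("html", posts))
```

Vectors like these are also the typical input to the clustering steps used for synonym and ambiguity detection.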

Due to lack of information or time, or not knowing where to look, we often miss out on, or learn too late about, events we would have liked to attend, such as concerts, conferences or sporting activities. The goal of this project is to exploit the capabilities of social networks to create a website that lets users submit and geolocate events, which can then be reviewed and promoted by other users, thereby filling this gap. The implemented solution must provide the following features: event submission (adding an event's main data and geolocating it on the map); organization of the information (categories and metacategories to group events, plus a tagging system to ease searching the site's content); exploration of existing events (the data of any event can be viewed through the map); a voting system (giving users the power to decide which information is most relevant); a personal agenda (for registering events so as to receive notifications of changes, or simply as reminders); communication between users (through comments attached to events and/or an internal chat); web syndication (distributing content using the RSS standard); and a simple API (allowing external applications to access certain information).
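The core model implied by the feature list (geolocated events, tags, community voting) can be sketched as follows. All class and field names are assumptions for illustration, not the project's actual design.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """A geolocated, taggable event, as described in the feature list."""
    title: str
    category: str
    lat: float
    lon: float
    tags: set = field(default_factory=set)
    votes: int = 0

class EventBoard:
    def __init__(self):
        self.events = []

    def submit(self, event):
        """Event submission: add an event with its main data and location."""
        self.events.append(event)

    def vote(self, event):
        """Voting system: users promote the events they find most relevant."""
        event.votes += 1

    def search(self, tag):
        """Tag-based search, ranked by community votes."""
        return sorted((e for e in self.events if tag in e.tags),
                      key=lambda e: e.votes, reverse=True)

board = EventBoard()
concert = Event("Jazz night", "concert", 41.98, 2.82, {"music", "jazz"})
talk = Event("Open data talk", "conference", 41.39, 2.17, {"tech"})
board.submit(concert)
board.submit(talk)
board.vote(concert)
results = board.search("music")
```

The same records could back the map view, the RSS feed and the external API described above.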

Hypermedia systems based on the Web for open distance education are becoming increasingly popular as tools for user-driven access to learning information. Adaptive hypermedia is a new research direction within the area of user-adaptive systems, aiming to increase functionality by making it personalized [Eklu 96]. This paper sketches a general agent architecture providing navigational adaptability and user-friendly processes to guide and accompany the student during his/her learning on the PLAN-G hypermedia system (New Generation Telematics Platform to Support Open and Distance Learning), with the aid of computer networks and specifically WWW technology [Marz 98-1] [Marz 98-2]. The current PLAN-G prototype is successfully used in some informatics courses (this version has no agents yet). The proposed multi-agent system contains two different types of adaptive autonomous software agents: Personal Digital Agents (Interface), which interact directly with the student when necessary; and Information Agents (Intermediaries), which filter and discover information to learn from and adapt the navigation space to a specific student.
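The division of labour between the two agent types can be sketched roughly as below. The class names mirror the roles named in the abstract, but the methods, the level-based adaptation rule and the catalogue are invented assumptions, not the paper's actual design.

```python
class InformationAgent:
    """Intermediary: filters course material and adapts the navigation
    space to a specific student's profile."""
    def __init__(self, catalogue):
        self.catalogue = catalogue  # hypothetical {page: required_level}

    def adapted_links(self, student_level):
        # Adapt the navigation space: hide pages above the student's level.
        return [page for page, level in self.catalogue.items()
                if level <= student_level]

class PersonalDigitalAgent:
    """Interface agent: interacts directly with the student, delegating
    retrieval and filtering to an information agent."""
    def __init__(self, info_agent):
        self.info_agent = info_agent

    def suggest(self, student):
        pages = self.info_agent.adapted_links(student["level"])
        return f"Suggested next pages for {student['name']}: {', '.join(pages)}"

catalogue = {"intro": 1, "exercises": 1, "advanced-lab": 3}
agent = PersonalDigitalAgent(InformationAgent(catalogue))
message = agent.suggest({"name": "Anna", "level": 1})
```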

Electronic commerce is a force that promises to definitively change the corporate landscape and companies' relationships with customers, suppliers and partners. With an eye on the explosive growth of the Internet and the Web, companies are learning to use electronic commerce in their business processes as well as in their internal and external integration strategies. Banks have long been among the companies that invest most in information technology to support their business processes, pursue corporate efficiency and increase the quality of their services. With its origins in the technological convergence of computing and telecommunications, electronic commerce opens a range of opportunities to banks, which have long sought alternatives for offering their customers the possibility of operating banking services remotely, without the need to travel to a branch. But the importance of the evolution of electronic commerce for banks is even greater considering that there is no commerce without payment, and banks are primarily responsible for maintaining a reliable and versatile payment system. Therefore, the growing use of the Web and the Internet by banks also implies the consolidation of electronic commerce, both through their importance in controlling the means of payment and through the trust they convey to users of electronic value-transfer systems. This work aims to establish criteria for evaluating the evolution of electronic commerce in banking services and the diffusion of Web use among banks. Starting from the observation that banks can use the Web to disseminate information, distribute products and services and improve customer relationships, research was carried out through analyses of websites, questionnaires and interviews in order to characterize the pace, direction, determinants and implications of the evolution of Web use among banks.

The World Wide Web has been consolidated over the last years as a standard platform for providing software systems on the Internet. Nowadays, a great variety of user applications is available on the Web, ranging from corporate applications to banking, and from electronic commerce to government. Given the quantity of information available and the number of users dealing with these services, many Web systems have sought to present usage recommendations as part of their functionality, letting users make better use of the available services based on their profile, navigation history and system use. In this context, this dissertation proposes the development of an agent-based framework that offers recommendations to users of Web systems. It involves the conception, design and implementation of an object-oriented framework. The framework agents can be plugged into or unplugged from existing Web applications in a non-invasive way using aspect-oriented techniques. The framework is evaluated through its instantiation in three different Web systems.
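A very rough Python analogue of the non-invasive plugging described above: a decorator acts like an aspect that wraps an existing page handler and attaches recommendations without editing its body. The handler, recommender and data shapes are all invented for illustration; the dissertation's actual framework is object-oriented and aspect-based, not decorator-based.

```python
import functools

def recommendation_aspect(recommender):
    """'Aspect' that augments any page handler with recommendations,
    leaving the handler's own code untouched."""
    def decorator(handler):
        @functools.wraps(handler)
        def wrapper(user, *args, **kwargs):
            page = handler(user, *args, **kwargs)
            page["recommendations"] = recommender(user)
            return page
        return wrapper
    return decorator

def history_based_recommender(user):
    # Toy profile: recommend previously visited items other than the
    # current one, based on navigation history.
    return [item for item in user["history"] if item != user["current"]][:3]

# Pre-existing handler; the only change is the pluggable wrapper above it.
@recommendation_aspect(history_based_recommender)
def product_page(user):
    return {"title": user["current"]}

page = product_page({"current": "laptop",
                     "history": ["mouse", "laptop", "monitor"]})
```

Unplugging the agent amounts to removing the decorator, which is the non-invasive property the framework aims for.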

In recent years there has been exponential growth in the offering of Web-enabled distance courses and in the number of enrolments in corporate and higher education using this modality. However, the lack of efficient mechanisms to assure user authentication in this sort of environment, at system login as well as throughout the session, has been pointed out as a serious deficiency. Some studies have been conducted on possible biometric applications for Web authentication; however, password-based authentication still prevails. With the popularization of biometric-enabled devices and the resulting fall in prices for the collection of biometric traits, biometrics is being reconsidered as a secure form of remote authentication for Web applications. In this work, the accuracy of face recognition, captured online by a webcam in an Internet environment, is investigated, simulating the natural interaction of a person in the context of a distance course environment. Partial results show that this technique can be successfully applied to confirm the presence of users throughout attendance in an educational distance course. An efficient client/server architecture is also proposed. © 2009 Springer Berlin Heidelberg.
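A minimal sketch of the verification step only: comparing a face embedding captured by the webcam during the session against the enrolled template. The embedding vectors and threshold below are invented; a real system would obtain embeddings from a trained face recognition model, and this is not the paper's actual pipeline.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def verify_presence(enrolled, captured, threshold=0.8):
    """Accept the session snapshot if it is close enough to the template."""
    return cosine_similarity(enrolled, captured) >= threshold

enrolled = [0.2, 0.9, 0.4]          # hypothetical enrolled template
same_user = [0.25, 0.85, 0.38]      # snapshot captured later in the session
other_user = [0.9, 0.1, 0.2]        # snapshot of someone else
ok = verify_presence(enrolled, same_user)
impostor = verify_presence(enrolled, other_user)
```

Periodic checks like this, rather than a single login check, are what allow presence to be confirmed throughout course attendance.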

Despite the abundant availability of protocols and applications for peer-to-peer file sharing, several drawbacks are still present in the field. Among the most notable is the lack of a simple and interoperable way to share information among independent peer-to-peer networks. Another drawback is the requirement that the shared content be accessed only by a limited number of compatible applications, making it inaccessible to other applications and systems. In this work we present a new approach to peer-to-peer data indexing, focused on the organization and retrieval of metadata describing the shared content. This approach results in a common and interoperable infrastructure, which provides transparent access to data shared on multiple data sharing networks via a simple API. The proposed approach is evaluated using a case study, implemented as a cross-platform extension to the Mozilla Firefox browser, and demonstrates the advantages of such interoperability over conventional distributed data access strategies. © 2009 IEEE.
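The interoperability idea can be sketched as one uniform index of metadata records gathered from several independent sharing networks, queried through a single API. The field names, network labels and API shape below are invented for illustration, not the paper's actual interface.

```python
class MetadataIndex:
    """Common index over content metadata from multiple P2P networks."""
    def __init__(self):
        self.records = []

    def publish(self, network, name, content_hash, **metadata):
        """Register shared content from any network in a uniform record."""
        self.records.append({"network": network, "name": name,
                             "hash": content_hash, **metadata})

    def search(self, **criteria):
        """Transparent retrieval: callers need not know the source network."""
        return [r for r in self.records
                if all(r.get(k) == v for k, v in criteria.items())]

index = MetadataIndex()
index.publish("network-a", "song.ogg", "ab12", artist="X")
index.publish("network-b", "song.ogg", "ab12", artist="X")
index.publish("network-b", "paper.pdf", "cd34", author="Y")

# The same content shared on two different networks is found via one query:
hits = index.search(hash="ab12")
```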

This article presents a software architecture for a web-based system to aid project management, conceptually founded on the guidelines of the Project Management Body of Knowledge (PMBoK) and on ISO/IEC 9126, as well as on the results of an empirical study done in Brazil. Based on these guidelines, this study focused on two different points of view on project management: the view of those who develop software systems to aid management and the view of those who use these systems. The designed software architecture is capable of guiding the incremental development of a quality system that will satisfy today's market needs, principally those of small and medium-sized enterprises.

This work describes a new web system to aid project management, created to correct the principal deficiencies identified in currently available systems with a similar purpose, and to follow the guidelines proposed in the Project Management Body of Knowledge (PMBoK) and the quality characteristics described in the ISO/IEC 9126 standard. Following the adopted methodology, the system was structured to meet the real needs of project managers and to contribute towards obtaining quality results from the projects. The validation of the proposed solution was done with the collaboration of professionals who used its available functions over a period of 15 days. The results attested to the quality and adequacy of the developed system.

The use of Information and Communication Technologies (ICT) and Web environments for the creation, treatment and availability of information has supported the emergence of new socio-cultural patterns represented by convergences of textual, image and audio languages. This paper describes and analyzes the National Archives Experience Digital Vaults as a digital publishing web environment and as cultural heritage. It is a complex system: a synthesizer of information design options that provides new aesthetic aspects but, especially, enlarges the cognition of the subjects who interact with the environment. It also enlarges the institutional spaces that guard the collective memory, beyond their role of keeping the physical patrimony collected there. Digital Vaults works as a mix of guide and interactive catalogue to be explored in a playful way. The publishing design of the information held in the Archives is meant to facilitate access to knowledge. The documents are organized in a dynamic, not chronological, way. They are not divided into fonds or distinct categories, but linked through the controlled interaction of documents previously indexed by the software. The software creates information design and views of documental content that can be considered a new paradigm in Information Science and are part of a post-custodial regime, independent of physical spaces and institutions. Information professionals must be prepared to understand and work with the paradigmatic changes described and represented by the new hybrid digital environments; hence the importance of this paper. In cyberspace, interactivity between the user and the content provided by the environment design fosters cooperation, collaboration and knowledge sharing, all features of networks, transforming culture globally.

OBJECTIVE: To determine whether algorithms developed for the World Wide Web can be applied to the biomedical literature in order to identify articles that are important as well as relevant. DESIGN AND MEASUREMENTS: A direct comparison of eight algorithms: simple PubMed queries, clinical queries (sensitive and specific versions), vector cosine comparison, citation count, journal impact factor, PageRank, and machine learning based on polynomial support vector machines. The objective was to prioritize important articles, defined as those included in a pre-existing bibliography of important literature in surgical oncology. RESULTS: Citation-based algorithms were more effective than noncitation-based algorithms at identifying important articles. The most effective strategies were simple citation count and PageRank, which on average identified over six important articles in the first 100 results, compared to 0.85 for the best noncitation-based algorithm (p < 0.001). The authors saw similar differences between citation-based and noncitation-based algorithms at 10, 20, 50, 200, 500, and 1,000 results (p < 0.001). Citation lag affects the performance of PageRank more than that of simple citation count. However, in spite of citation lag, citation-based algorithms remain more effective than noncitation-based algorithms. CONCLUSION: Algorithms that have proved successful on the World Wide Web can be applied to biomedical information retrieval. Citation-based algorithms can help identify important articles within large sets of relevant results. Further studies are needed to determine whether citation-based algorithms can effectively meet actual user information needs.
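A minimal PageRank sketch on a toy citation graph, where edges point from a citing article to a cited one. The articles and links are invented; the study's actual corpus is far larger, and this is the textbook power-iteration form, not necessarily the study's exact implementation.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power-iteration PageRank over a {source: [targets]} link graph."""
    nodes = set(links) | {t for ts in links.values() for t in ts}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for src, targets in links.items():
            if targets:
                share = damping * rank[src] / len(targets)
                for t in targets:
                    new[t] += share
        # Dangling nodes (no outgoing citations) spread rank evenly.
        dangling = sum(rank[n] for n in nodes if not links.get(n))
        for n in nodes:
            new[n] += damping * dangling / len(nodes)
        rank = new
    return rank

# A and B both cite C; C cites nothing, so C should rank highest,
# mirroring how highly cited articles rise in the study's rankings.
citations = {"A": ["C"], "B": ["C"]}
rank = pagerank(citations)
```

Simple citation count would score C as 2 and A, B as 0; PageRank additionally weights each citation by the rank of the citing article, which is why the two methods diverge as graphs grow.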

The Linked Data initiative offers a straightforward method for publishing structured data on the World Wide Web and linking it to other data, resulting in a world-wide network of semantically codified data known as the Linked Open Data cloud. The size of the Linked Open Data cloud, i.e. the amount of data published using Linked Data principles, is growing exponentially, including life sciences data. However, key information for biological research is still missing from the Linked Open Data cloud. For example, the relation between orthologous genes and genetic diseases is absent, even though such information can be used for hypothesis generation regarding human diseases. The OGOLOD system, an extension of the OGO Knowledge Base, publishes ortholog/disease information using Linked Data. This gives scientists the ability to query the structured information in connection with other Linked Data and to discover new information related to orthologs and human diseases in the cloud.
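The data model behind such a system is the RDF triple: facts stated as subject-predicate-object, so that datasets from different sources can be joined on shared identifiers. The tiny pure-Python illustration below uses entirely invented URIs and predicate names, not OGOLOD's actual vocabulary; real deployments would use an RDF store and SPARQL.

```python
# Toy triple store: facts as (subject, predicate, object).
triples = [
    ("gene:HumanG1", "ex:orthologousTo", "gene:MouseG7"),
    ("gene:HumanG1", "ex:associatedWith", "disease:D42"),
    ("disease:D42", "rdfs:label", "Example disorder"),
]

def query(triples, s=None, p=None, o=None):
    """Match triples against an optional subject/predicate/object pattern."""
    return [(ts, tp, to) for ts, tp, to in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

# Join across predicates: which diseases are linked to genes that
# have a known ortholog? This is the kind of hypothesis-generating
# query the linked ortholog/disease data enables.
linked = [
    disease
    for gene, _, _ in query(triples, p="ex:orthologousTo")
    for _, _, disease in query(triples, s=gene, p="ex:associatedWith")
]
```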

The Homeodomain Resource is an annotated collection of non-redundant protein sequences, three-dimensional structures and genomic information for the homeodomain protein family. Release 3.0 contains 795 full-length homeodomain-containing sequences, 32 experimentally derived structures and 143 homeobox loci implicated in human genetic disorders. Entries are fully hyperlinked to facilitate easy retrieval of the original records from source databases. A simple search engine with a graphical user interface is provided to query the component databases and assemble customized data sets. A new feature of this release is the addition of DNA recognition sites for all human homeodomain proteins described in the literature. The Homeodomain Resource is freely available through the World Wide Web at http://genome.nhgri.nih.gov/homeodomain.

Mode of access: Internet.