923 results for E-Commerce, Web Search Engines


Relevance:

100.00%

Publisher:

Abstract:

This article explains the social transformation process initiated at the end of the 1970s in the neighborhood of La Verneda-Sant Martí in Barcelona. The process started with the foundation of an adult education center organized as a Learning Community (the first one in the world). From the beginning, it was administered for and by the community. It became a space of debate where neighbors' demands and dreams about transforming their neighborhood converged with the recommendations of the international scientific community. Twenty years later, the dreams came true: there have been substantial improvements throughout the urban space, infrastructure, housing, and public thoroughfares. The INCLUD-ED European project, using the communicative methodology of research, has thoroughly studied the transformation carried out in the La Verneda-Sant Martí Adult School and its neighborhood. INCLUD-ED has identified successful practices in diverse social areas that are transferable to other contexts and that contribute to overcoming inequalities and improving the most underprivileged neighborhoods.

Relevance:

100.00%

Publisher:

Abstract:

The Learning Community La Verneda-Sant Martí school for adults has been working with information and communication technologies (ICT) from a cross-cutting, global perspective for more than 10 years. Through our day-to-day work we have seen ICT move from being a training need to becoming an everyday learning context for the people taking part in our project. Since we decided in 1999 to transform our activities so as to participate fully in the information society, we have changed methodologies, priorities, and ways of working. Showing what those transformations have been and what results have been achieved is the main aim of this article.

Relevance:

100.00%

Publisher:

Abstract:

The goal of the Intelligent Citizen Services (SAC) is to respond to citizens' information needs about the services and activities of the municipality and, by extension, about services of public interest in general. Since iSAC came into operation, the queries submitted to the system and citizens' satisfaction with the service have been analysed periodically. Although the overall ratings are satisfactory, the system currently has a gap: there is a wide range of queries that, for now, iSAC cannot resolve, and that 010, the citizen-service call centre, possibly cannot resolve either. Some of the searches go well beyond the municipal sphere, and it is citizens' own experience that can provide the best answer. This is why the need to create wikiSAC arose. Its main objectives are: to allow page content to be created, modified, and deleted interactively, quickly, and easily through a web browser; to control offensive and malicious content; to keep a history of changes; to encourage citizen participation, making it a place where citizens ask questions, make suggestions, and give opinions on matters related to their municipality; and to help citizens feel more involved in the workings of the administration by collaborating in citizen information and service tasks.
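
The core wiki features the abstract lists (interactive editing, a change history, and rollback of malicious content) can be sketched with a minimal page model. This is an illustrative sketch, not wikiSAC's actual implementation; all class and field names are assumptions:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Revision:
    content: str
    author: str
    timestamp: datetime

@dataclass
class WikiPage:
    """A wiki page that keeps a full history of changes, so that
    offensive or malicious edits can be reviewed and rolled back."""
    title: str
    history: list = field(default_factory=list)

    def edit(self, content: str, author: str) -> None:
        # Every edit appends a revision rather than overwriting the page.
        self.history.append(Revision(content, author, datetime.now(timezone.utc)))

    @property
    def current(self) -> str:
        return self.history[-1].content if self.history else ""

    def rollback(self) -> None:
        # Discard the latest revision, e.g. after detecting vandalism.
        if len(self.history) > 1:
            self.history.pop()
```

Keeping revisions instead of mutating the page directly is what makes both the change history and moderation of problematic edits possible.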

Relevance:

100.00%

Publisher:

Abstract:

OBJECTIVE: To characterize PubMed usage over a typical day and compare it with previous studies of user behavior on Web search engines. DESIGN: We performed a lexical and semantic analysis of 2,689,166 queries issued on PubMed over 24 consecutive hours on a typical day. MEASUREMENTS: We measured the number of queries, number of distinct users, queries per user, terms per query, common terms, Boolean operator use, common phrases, result set size, and MeSH categories; we used semantic measures to group queries into sessions and studied the addition and removal of terms across consecutive queries to gauge search strategies. RESULTS: The size of the result sets from a sample of queries showed a bimodal distribution, with peaks at approximately 3 and 100 results, suggesting that one large group of queries was tightly focused and another was broad. Like Web search engine sessions, most PubMed sessions consisted of a single query. However, PubMed queries contained more terms. CONCLUSION: PubMed's usage profile should be considered when educating users, building user interfaces, and developing future biomedical information retrieval systems.
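
The session-grouping step described above can be approximated with a simple term-overlap heuristic: a query joins the current session when it shares enough vocabulary with the previous one. This is a sketch only; the study used more elaborate semantic measures, and the Jaccard threshold of 0.2 is an assumption:

```python
def jaccard(a: set, b: set) -> float:
    """Overlap between two term sets, 0.0 when both are empty."""
    return len(a & b) / len(a | b) if a | b else 0.0

def group_into_sessions(queries, threshold=0.2):
    """Group one user's consecutive queries into sessions: a query joins
    the current session if its terms overlap with the previous query."""
    sessions = []
    for q in queries:
        terms = set(q.lower().split())
        if sessions and jaccard(terms, sessions[-1][-1][1]) >= threshold:
            sessions[-1].append((q, terms))
        else:
            sessions.append([(q, terms)])
    return [[q for q, _ in s] for s in sessions]
```

A reformulation such as "breast cancer" followed by "breast cancer treatment" lands in one session, while a topically unrelated follow-up query starts a new one.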

Relevance:

100.00%

Publisher:

Abstract:

Enriching knowledge bases with multimedia information makes it possible to complement textual descriptions with visual and audio information. Such complementary information can help users to understand the meaning of assertions, and in general improve the user experience with the knowledge base. In this paper we address the problem of how to enrich ontology instances with candidate images retrieved from existing Web search engines. DBpedia has evolved into a major hub in the Linked Data cloud, interconnecting millions of entities organized under a consistent ontology. Our approach taps into the Wikipedia corpus to gather context information for DBpedia instances and takes advantage of image tagging information when this is available to calculate semantic relatedness between instances and candidate images. We performed experiments with focus on the particularly challenging problem of highly ambiguous names. Both methods presented in this work outperformed the baseline. Our best method leveraged context words from Wikipedia, tags from Flickr and type information from DBpedia to achieve an average precision of 80%.
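
The relatedness calculation between an instance's context words and a candidate image's tags can be sketched as a set-overlap score. This is a simplified stand-in, not the paper's actual method (which combines Wikipedia context, Flickr tags, and DBpedia type information); the image names and tag strings are illustrative:

```python
import re

def tokens(text: str) -> set:
    """Lowercase word tokens of a text or tag string."""
    return set(re.findall(r"[a-z]+", text.lower()))

def relatedness(context_words: set, image_tags: set) -> float:
    """Jaccard overlap between an instance's context vocabulary and an
    image's tag set -- a crude proxy for semantic relatedness."""
    union = context_words | image_tags
    return len(context_words & image_tags) / len(union) if union else 0.0

def best_image(context: str, candidates: dict) -> str:
    """Pick the candidate image whose tags best match the instance context."""
    ctx = tokens(context)
    return max(candidates, key=lambda img: relatedness(ctx, tokens(candidates[img])))
```

Even this crude score illustrates how context disambiguates a highly ambiguous name: the context of the animal "jaguar" pulls the wildlife photo ahead of the car photo.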

Relevance:

100.00%

Publisher:

Abstract:

Over the last few decades, the ever-increasing output of scientific publications has led to new challenges to keep up to date with the literature. In the biomedical area, this growth has introduced new requirements for professionals, e.g., physicians, who have to locate the exact papers that they need for their clinical and research work amongst a huge number of publications. Against this backdrop, novel information retrieval methods are even more necessary. While web search engines are widespread in many areas, facilitating access to all kinds of information, additional tools are required to automatically link information retrieved from these engines to specific biomedical applications. In the case of clinical environments, this also means considering aspects such as patient data security and confidentiality or structured contents, e.g., electronic health records (EHRs). In this scenario, we have developed a new tool to facilitate query building to retrieve scientific literature related to EHRs. Results: We have developed CDAPubMed, an open-source web browser extension to integrate EHR features in biomedical literature retrieval approaches. Clinical users can use CDAPubMed to: (i) load patient clinical documents, i.e., EHRs based on the Health Level 7-Clinical Document Architecture Standard (HL7-CDA), (ii) identify relevant terms for scientific literature search in these documents, i.e., Medical Subject Headings (MeSH), automatically driven by the CDAPubMed configuration, which advanced users can optimize to adapt to each specific situation, and (iii) generate and launch literature search queries to a major search engine, i.e., PubMed, to retrieve citations related to the EHR under examination. Conclusions: CDAPubMed is a platform-independent tool designed to facilitate literature searching using keywords contained in specific EHRs. CDAPubMed is visually integrated, as an extension of a widespread web browser, within the standard PubMed interface. 
It has been tested on a public dataset of HL7-CDA documents, returning significantly fewer citations because queries are focused on characteristics identified within the EHR. For instance, compared with more than 200,000 citations retrieved by a generic "breast neoplasm" query, fewer than ten citations were retrieved when ten patient features were added using CDAPubMed. CDAPubMed is an open-source tool that can be freely used for non-profit purposes and integrated with other existing systems.
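
The query-narrowing effect described above comes from ANDing together the MeSH terms identified in the EHR. A minimal sketch of that query construction is shown below; the ESearch endpoint is NCBI's real E-utilities API, but the term list and helper names are illustrative, not CDAPubMed's actual code:

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_pubmed_query(mesh_terms):
    """AND together MeSH headings extracted from an EHR; each added
    patient feature narrows the PubMed result set further."""
    return " AND ".join(f'"{t}"[MeSH Terms]' for t in mesh_terms)

def esearch_url(mesh_terms, retmax=20):
    """URL for launching the query against PubMed via E-utilities."""
    params = {"db": "pubmed", "term": build_pubmed_query(mesh_terms), "retmax": retmax}
    return f"{EUTILS}?{urlencode(params)}"
```

With a single heading the query behaves like the broad "breast neoplasm" search; adding further patient-specific headings is what brings the citation count down to a handful.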

Relevance:

100.00%

Publisher:

Abstract:

This study aims to survey and describe the landscape of bibliometric research in Argentina between 1984 and 2012, based on an analysis of publications by authors from Argentine institutions located through web search engines, subject repositories, and regional and international databases. It examines the characteristics of the items studied, focusing on the volume and evolution of output, type of literature, language, topics, and units of analysis. It calculates the co-authorship index and the rates of national and international collaboration. It identifies the most productive authors and the most frequent institutional affiliations, and it establishes the existence of several research groups, characterising their research topics, the journals in which they publish, and the conferences in which they most frequently participate.
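
The co-authorship index and collaboration rates mentioned above are standard bibliometric quantities and can be computed directly from a record set. A minimal sketch under the usual definitions (mean authors per paper; share of papers with multiple institutions, and with at least one foreign institution); the sample records are invented:

```python
def coauthorship_index(papers):
    """Mean number of authors per paper."""
    return sum(len(p["authors"]) for p in papers) / len(papers)

def collaboration_rates(papers, home="AR"):
    """Share of papers with more than one institution (national
    collaboration) and with at least one non-home-country institution
    (international collaboration)."""
    nat = sum(1 for p in papers if len(p["institutions"]) > 1)
    intl = sum(1 for p in papers if any(c != home for _, c in p["institutions"]))
    return nat / len(papers), intl / len(papers)
```

Applied to the full 1984-2012 corpus, these two functions yield exactly the indicators the study reports.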

Relevance:

100.00%

Publisher:

Abstract:

The phenomenal growth of the Internet has connected us to a vast amount of computation and information resources around the world. However, making use of these resources is difficult due to the unparalleled massiveness, high communication latency, share-nothing architecture, and unreliable connections of the Internet. In this dissertation, we present a distributed software agent approach, which brings a new distributed problem-solving paradigm to Internet computing research, with an enhanced client-server scheme, inherent scalability, and heterogeneity. Our study discusses the role of a distributed software agent in Internet computing and classifies it into three major categories by the objects it interacts with: computation agent, information agent, and interface agent. The discussion of the problem domain and the deployment of the computation agent and the information agent is presented along with the analysis, design, and implementation of experimental systems in high-performance Internet computing and in scalable Web searching. In the computation agent study, high-performance Internet computing is achieved with our proposed Java massive computation agent (JAM) model. We analyzed the JAM computing scheme and built a brute-force ciphertext decryption prototype. In the information agent study, we discuss the scalability problem of existing Web search engines and design an approach to Web searching with distributed collaborative index agents. This approach can be used to construct a more accurate, reusable, and scalable solution to deal with the growth of the Web and of the information on the Web. Our research reveals that by deploying distributed software agents in Internet computing, we obtain a more cost-effective way to make better use of the gigantic network of computation and information resources on the Internet. The case studies in our research show that we are now able to solve many practically hard or previously unsolvable problems caused by the inherent difficulties of Internet computing.
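
The brute-force decryption prototype rests on a simple idea: partition the keyspace among agents so each searches only its own sub-range. A toy sketch with a single-byte XOR cipher (the actual JAM prototype attacked a real cipher in Java; this cipher, the function names, and the shard count are all illustrative):

```python
def xor_decrypt(ciphertext: bytes, key: int) -> bytes:
    """Single-byte XOR 'cipher' -- a stand-in for the real target cipher."""
    return bytes(b ^ key for b in ciphertext)

def agent_search(ciphertext: bytes, known_plaintext: bytes, key_range):
    """One agent's share of the brute-force search: try every key in its
    assigned sub-range and report a hit, or None."""
    for key in key_range:
        if xor_decrypt(ciphertext, key) == known_plaintext:
            return key
    return None

def partition(keyspace: int, n_agents: int):
    """Split [0, keyspace) into n_agents contiguous sub-ranges."""
    step = -(-keyspace // n_agents)  # ceiling division
    return [range(i, min(i + step, keyspace)) for i in range(0, keyspace, step)]
```

Because the sub-ranges are disjoint and cover the keyspace, agents can run in parallel with no coordination beyond reporting a hit, which is what gives the scheme its inherent scalability.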

Relevance:

100.00%

Publisher:

Abstract:

For some years now the Internet and World Wide Web communities have envisaged moving to a next generation of Web technologies by promoting a globally unique, persistent identifier for identifying and locating many forms of published objects. These identifiers are called Uniform Resource Names (URNs), and they hold out the prospect of being able to refer to an object by what it is (signified by its URN) rather than by where it is (the current URL technology). One early implementation of URN ideas is the Unicode-based Handle technology, developed at CNRI in Reston, Virginia. The Digital Object Identifier (DOI) is a specific URN naming convention proposed just over 5 years ago and now administered by the International DOI organisation, founded by a consortium of publishers and based in Washington, DC. The DOI is being promoted for managing electronic content and for intellectual-rights management of it, either using the published work itself or, increasingly, via metadata descriptors for the work in question. This paper describes the use of the CNRI handle parser to navigate a corpus of papers for the Electronic Publishing journal. These papers are in PDF format and hosted on our server in Nottingham. For each paper in the corpus, a metadata descriptor is prepared for every citation appearing in the References section. The important factor is that the underlying handle is resolved locally in the first instance. In some cases (e.g., cross-citations within the corpus itself and links to known resources elsewhere) the handle can be handed over to CNRI for further resolution. This work shows the encouraging prospect of being able to use persistent URNs not only for intellectual-property negotiations but also for search and discovery. In the test domain of this experiment, every single resource referred to within a given paper can be resolved, at least to the level of metadata about the referred object.
If the Web were to become more fully URN-aware, then a vast directed graph of linked resources could be accessed via persistent names. Moreover, if these names delivered embedded metadata when resolved, the way would be open for a new generation of vastly more accurate and intelligent Web search engines.
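
The local-first resolution strategy described above amounts to a lookup with a fallback. A minimal sketch: handles in our own corpus resolve against a local registry, and anything else is handed to the global handle-system proxy (hdl.handle.net is the real proxy; the handle prefix, registry contents, and paths below are invented for illustration):

```python
# Local registry of handles for papers hosted on our own server
# (handle prefix and paths are illustrative, not real identifiers).
LOCAL_HANDLES = {
    "10.123/ep-1999-04": "https://example.org/papers/ep-1999-04.pdf",
}

GLOBAL_PROXY = "https://hdl.handle.net"

def resolve(handle: str) -> str:
    """Resolve a handle locally in the first instance; fall back to the
    global handle-system proxy when the handle is not in our corpus."""
    return LOCAL_HANDLES.get(handle, f"{GLOBAL_PROXY}/{handle}")
```

Cross-citations within the corpus therefore never leave the local server, while external references still resolve through the global infrastructure.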

Relevance:

100.00%

Publisher:

Abstract:

Conventional web search engines are centralised: a single entity crawls and indexes the documents selected for future retrieval and controls the relevance models used to determine which documents are relevant to a given user query. As a result, these search engines suffer from several technical drawbacks, such as handling scale, timeliness, and reliability, in addition to ethical concerns such as commercial manipulation and information censorship. Alleviating the need to rely entirely on a single entity, Peer-to-Peer (P2P) Information Retrieval (IR) has been proposed as a solution, as it distributes the functional components of a web search engine – from crawling and indexing documents to query processing – across the network of users (or peers) who use the search engine. This strategy for constructing an IR system poses several efficiency and effectiveness challenges, which have been identified in past work. Accordingly, this thesis makes several contributions towards advancing the state of the art in P2P-IR effectiveness by improving the query-processing and relevance-scoring aspects of P2P web search. Federated search systems are a form of distributed information retrieval in which the user's information need, formulated as a query, is routed to distributed resources and the retrieved result lists are merged into a final list. P2P-IR networks are one form of federated search, routing queries and merging results among participating peers. The query is propagated through disseminated nodes to reach the peers that are most likely to contain relevant documents, and the retrieved result lists are then merged at different points along the path from the relevant peers back to the query initiator (the consumer).
However, query routing is considered one of the major challenges and a critical part of P2P-IR networks: relevant peers can be lost through low-quality peer selection during routing, which inevitably leads to less effective retrieval results. This motivates this thesis to study and propose query-routing techniques to improve retrieval quality in such networks. Cluster-based semi-structured P2P-IR networks exploit the cluster hypothesis to organise the peers into semantically similar clusters, each managed by super-peers. In this thesis, I construct three semi-structured P2P-IR models and examine their retrieval effectiveness. I also leverage the cluster centroids at the super-peer level, as content representations gathered from cooperative peers, to propose a query-routing approach called the Inverted PeerCluster Index (IPI), which simulates the conventional inverted index of a centralised corpus to organise the statistics of peers' terms. The results show retrieval quality competitive with baseline approaches. Furthermore, I study the applicability of conventional information retrieval models as peer-selection approaches, where each peer can be considered a big document of documents. The experimental evaluation shows comparable and significant results, demonstrating that document-retrieval methods are very effective for peer selection and reinforcing the analogy between documents and peers. Additionally, Learning to Rank (LtR) algorithms are exploited to build a learned classifier for peer ranking at the super-peer level. The experiments show significant results against state-of-the-art resource-selection methods and competitive results against corresponding classification-based approaches. Finally, I propose reputation-based query-routing approaches that exploit the idea, familiar from social community networks, of providing feedback on a specific item and retaining it for future decision-making.
The system monitors users' behaviour when they click or download documents from the final ranked list, treats this as implicit feedback, and mines the information to build a reputation-based data structure. The data structure is used to score peers and then rank them for query routing. I conduct a set of experiments covering various scenarios, including noisy feedback (i.e., positive feedback on non-relevant documents), to examine the robustness of the reputation-based approaches. The empirical evaluation shows significant results on almost all metrics, with an approximate improvement of more than 56% over baseline approaches. Thus, based on the results, if one were to choose a single technique, the reputation-based approaches are clearly the natural choice, and they can also be deployed on any P2P network.
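
The "each peer is a big document of documents" idea above can be sketched by aggregating a peer's vocabulary and scoring it with an ordinary retrieval formula. This is a toy length-normalised term-frequency scorer, not any of the thesis's actual models; the peer names and texts are invented:

```python
from collections import Counter

def rank_peers(peers: dict, query: str):
    """Treat each peer's aggregated vocabulary as one 'big document' and
    score it for the query with a length-normalised term-frequency sum,
    returning peers in descending score order."""
    q_terms = query.lower().split()
    scores = {}
    for peer, text in peers.items():
        tf = Counter(text.lower().split())
        length = sum(tf.values())
        scores[peer] = sum(tf[t] / length for t in q_terms) if length else 0.0
    return sorted(scores, key=scores.get, reverse=True)
```

Routing the query to the top-ranked peers is then exactly peer selection by document-retrieval methods: the same scoring machinery, applied one level up.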

Relevance:

100.00%

Publisher:

Abstract:

Given the significant growth of the Internet in recent years, marketers have been striving for new techniques and strategies to prosper in the online world. Statistically, search engines have been the most dominant channels of Internet marketing in recent years. However, the mechanics of advertising in such a marketplace have created a challenging environment for marketers trying to position their ads among their competitors. This study uses a unique cross-sectional dataset of the top 500 Internet retailers in North America and hierarchical multiple regression analysis to empirically investigate the effect of keyword competition on the relationship between ad position and its determinants in the sponsored search market. To this end, the study draws on the literature in consumer search behavior, keyword auction mechanism design, and search advertising performance as its theoretical foundation. This study is the first of its kind to examine sponsored search market characteristics in a cross-sectional setting where the level of keyword competition is explicitly captured as the number of Internet retailers competing for similar keywords. Internet retailing provides an appropriate setting for this study given the high-stakes battle for market share and intense competition for keywords in the sponsored search marketplace. The findings of this study indicate that bid values and ad relevancy metrics, as well as their interaction, affect the position of ads on the search engine result pages (SERPs). These results confirm some of the findings from previous studies that examined sponsored search advertising performance at the keyword level. Furthermore, the study finds that the position of ads for web-only retailers depends on both bid values and ad relevancy metrics, whereas multi-channel retailers rely more heavily on their bid values.
This difference between web-only and multi-channel retailers is also observed in the moderating effect of keyword competition on the relationships between ad position and its key determinants. Specifically, this study finds that keyword competition has significant moderating effects only for multi-channel retailers.
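
A moderating effect of this kind is typically tested by adding an interaction term to the regression. The sketch below fits a toy model of ad position on bid, relevance, competition, and a bid-by-competition interaction via ordinary least squares; the variables and coefficients are synthetic, not the study's data or estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
bid = rng.uniform(0.5, 5.0, n)               # bid value
rel = rng.uniform(0.0, 1.0, n)               # ad relevancy score
comp = rng.integers(1, 20, n).astype(float)  # keyword competition level

# Hypothetical data-generating process: competition moderates the
# effect of bid on ad position via the bid * comp interaction term.
pos = 1.0 - 0.8 * bid - 0.5 * rel - 0.3 * (bid * comp)

# Design matrix: intercept, main effects, and the interaction.
X = np.column_stack([np.ones(n), bid, rel, comp, bid * comp])
coef, *_ = np.linalg.lstsq(X, pos, rcond=None)
```

A non-zero coefficient on the interaction column is the statistical signature of moderation: the slope of position with respect to bid changes with the level of keyword competition.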

Relevance:

100.00%

Publisher:

Abstract:

Search has become a hot topic in Internet computing, with rival search engines battling to become the de facto Web portal, harnessing search algorithms to wade through information on a scale undreamed of by early information retrieval (IR) pioneers. This article examines how search has matured from its roots in specialized IR systems to become a key foundation of the Web. The authors describe the new challenges posed by the Web's scale and show how search is changing the nature of the Web as much as the Web has changed the nature of search.

Relevance:

100.00%

Publisher:

Abstract:

Complementary and alternative medicine (CAM) use is growing rapidly. As CAM is relatively unregulated, it is important to evaluate the type and availability of CAM information. The goal of this study is to determine the prevalence, content, and readability of online CAM information based on searches for arthritis, diabetes, and fibromyalgia using four common search engines. Fifty-eight of 599 web pages retrieved by a "condition search" (9.6%) were CAM-oriented. Of 216 CAM pages found by the "condition" and "condition + herbs" searches, 78% were authored by commercial organizations, whose purpose involved commerce 69% of the time, and 52.3% had no references. Although 98% of the CAM information was intended for consumers, the mean readability was at grade level 11. We conclude that consumers searching the web for health information are likely to encounter consumer-oriented CAM advertising, which is difficult to read and is not supported by the conventional literature.
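
Grade-level readability scores of the kind reported above are usually computed with the Flesch-Kincaid formula, which combines sentence length and syllables per word. A sketch follows; the syllable counter is a crude vowel-group heuristic, so scores are approximate (the study does not specify its exact tooling):

```python
import re

def count_syllables(word: str) -> int:
    """Crude heuristic: count groups of consecutive vowels (min 1)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words/sentence) + 11.8 * (syllables/word) - 15.59"""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59
```

A grade-11 mean, as found in the study, means the average CAM page demands roughly high-school-senior reading skills, well above the grade 6-8 level usually recommended for consumer health material.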