884 results for Web content aggregators


Relevance: 40.00%

Abstract:

Complementary and alternative medicine (CAM) use is growing rapidly. As CAM is relatively unregulated, it is important to evaluate the type and availability of CAM information. The goal of this study is to determine the prevalence, content and readability of online CAM information based on searches for arthritis, diabetes and fibromyalgia using four common search engines. Fifty-eight of 599 web pages retrieved by a "condition search" (9.6%) were CAM-oriented. Of 216 CAM pages found by the "condition" and "condition + herbs" searches, 78% were authored by commercial organizations, whose purpose involved commerce 69% of the time; 52.3% had no references. Although 98% of the CAM information was intended for consumers, the mean readability was at grade level 11. We conclude that consumers searching the web for health information are likely to encounter consumer-oriented CAM advertising, which is difficult to read and is not supported by the conventional literature.
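The abstract reports mean readability at grade level 11 but does not name the formula used. As an illustrative sketch only, the Flesch-Kincaid grade level (one common readability measure, assumed here) can be computed as below; the vowel-run syllable counter is a rough heuristic, not what the study used, and the sample text is hypothetical:

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count runs of vowels, minus a trailing silent 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59

# Hypothetical consumer-health sentences, just to exercise the formula.
sample = ("Echinacea is promoted as an immune stimulant. "
          "Controlled trials have not demonstrated consistent benefit.")
print(round(fk_grade(sample), 1))
```

Production readability tools use dictionary-based syllable counts, so their scores will differ somewhat from this approximation.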

Relevance: 40.00%

Abstract:

The future Internet is expected to be composed of a mesh of interoperable web services accessible from all over the web. This approach has not yet caught on, since global user-service interaction is still an open issue. This paper presents one vision of next-generation front-end Web 2.0 technology that will enable integrated access to services, content and things in the future Internet. We illustrate how front-ends that wrap traditional services and resources can be tailored to the needs of end users, converting end users into prosumers (creators and consumers of service-based applications). To this end, we propose an architecture that end users without programming skills can use to create front-ends, consult catalogues of resources tailored to their needs, easily integrate and coordinate front-ends, and create composite applications to orchestrate services in their back-end. The paper includes a case study illustrating that current user-centred web development tools are at a very early stage of evolution, and provides statistical data on how the proposed architecture improves on these tools. This paper is based on research conducted by the Service Front End (SFE) Open Alliance initiative.

Relevance: 40.00%

Abstract:

An effective K-12 science education is essential for success in later phases of the curriculum, and e-Infrastructures for education provide new opportunities to enhance it. This paper presents ViSH Viewer, an innovative web tool for consuming educational content that aims to facilitate access to e-Science infrastructures through a next-generation learning object called the "Virtual Excursion". Virtual Excursions provide a new way to explore science in class by taking advantage of e-Infrastructure resources and their integration with other educational content, resulting in a reusable, interoperable and granular learning object. To better illustrate how this tool lets teachers and students explore e-Science enjoyably, we also present three Virtual Excursion examples. The design and development of the tool are explained in this paper, as are the concept, structure and metadata of the new learning object.

Relevance: 40.00%

Abstract:

Information and content integration are believed to be a possible solution to the problem of information overload on the Internet. This article gives an overview of a simple solution for integrating information and content on the Web. Previous approaches to content extraction and integration are discussed, followed by the introduction of a novel technique, based on XML processing, that addresses their problems. The article includes lessons learned from handling changing webpage layouts, incompatibility with HTML standards and multiplicity of returned results. The method, which applies relative XPath queries over the DOM tree, proves more robust than previous approaches to Web information integration. Furthermore, the prototype implementation demonstrates a simplicity that enables non-professional users to easily adopt this approach in their day-to-day information-management routines.
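The core idea — anchoring on a stable element and issuing XPath queries relative to it against the DOM tree — can be sketched with Python's standard library. The markup, class names and record structure below are hypothetical, and ElementTree requires well-formed input; a real extractor would need an HTML-tolerant parser to cope with the standards-incompatible markup the article mentions:

```python
import xml.etree.ElementTree as ET

# A well-formed stand-in for a scraped page; the outer layout may change per visit.
page = """<html><body>
  <div id="ads">banner</div>
  <div class="listing">
    <div class="item"><span class="title">Widget A</span><span class="price">9.99</span></div>
    <div class="item"><span class="title">Widget B</span><span class="price">4.50</span></div>
  </div>
</body></html>"""

root = ET.fromstring(page)
records = []
# Anchor on each record node, then query relative to it ("./..."), so that
# changes elsewhere in the page layout leave the extraction intact.
for item in root.iterfind(".//div[@class='item']"):
    title = item.find("./span[@class='title']").text
    price = float(item.find("./span[@class='price']").text)
    records.append((title, price))

print(records)
```

Because the queries are relative to each anchor, inserting or removing blocks outside `div.listing` does not invalidate them — which is the robustness property the article claims for this approach.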

Relevance: 30.00%

Abstract:

In studies of media industries, too much attention has been paid to providers and firms, too little to consumers and markets. But with user-created content, the question first posed more than a generation ago by the uses & gratifications method and taken up by semiotics and the active audience tradition (‘what do audiences do with media?’), has resurfaced with renewed force. What’s new is that where this question (of what the media industries and audiences did with each other) used to be individualist and functionalist, now, with the advent of social networks using Web 2.0 affordances, it can be re-posed at the level of systems and populations as well.

Relevance: 30.00%

Abstract:

Search engines have forever changed the way people access and discover knowledge, allowing information about almost any subject to be quickly and easily retrieved within seconds. As ever more material becomes available electronically, the influence of search engines on our lives will continue to grow. This presents the problem of how to find what information is contained in each search engine, what bias a search engine may have, and how to select the best search engine for a particular information need. This research introduces a new method, search engine content analysis, to solve this problem. Search engine content analysis is a new development of the traditional information-retrieval field of collection selection, which deals with general information repositories. Current research in collection selection relies on full access to the collection or on estimates of collection size, and collection descriptions are often represented as term-occurrence statistics. An automatic ontology learning method is developed for search engine content analysis, which trains an ontology with world knowledge of hundreds of different subjects in a multilevel taxonomy. This ontology is then mined to find important classification rules, and these rules are used to perform an extensive analysis of the content of the largest general-purpose Internet search engines in use today. Instead of representing collections as a set of terms, as is common in collection selection, they are represented as a set of subjects, leading to a more robust representation of information and a decrease in synonymy. The ontology-based method was compared with ReDDE (the Relevant Document Distribution Estimation method for resource selection), the current state-of-the-art collection selection method, which relies on collection size estimation; the comparison used the standard R-value metric, with encouraging results. The method was also used to analyse the content of the most popular search engines in use today, including Google and Yahoo, as well as several specialist search engines such as Pubmed and that of the U.S. Department of Agriculture. In conclusion, this research shows that the ontology-based method removes the need for collection size estimation.
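The central move — describing each collection by a subject distribution rather than term statistics, then selecting the collection that best covers the query's subjects — can be sketched as below. The profiles, subject names and toy classifier are hypothetical stand-ins for the trained ontology and mined rules described in the thesis:

```python
from collections import Counter

# Hypothetical subject profiles: each collection is summarised by how many of
# its sampled documents the classification rules assigned to each subject.
profiles = {
    "engine_medical": Counter({"medicine": 120, "biology": 40, "chemistry": 10}),
    "engine_general": Counter({"news": 80, "sport": 60, "medicine": 15}),
}

def classify(query_terms):
    """Toy stand-in for the ontology-based classifier: map terms to subjects."""
    lexicon = {"arthritis": "medicine", "diabetes": "medicine", "football": "sport"}
    return [lexicon[t] for t in query_terms if t in lexicon]

def rank_collections(query_terms):
    """Rank collections by the share of their sample devoted to the query's subjects."""
    subjects = classify(query_terms)
    scores = {}
    for name, profile in profiles.items():
        scores[name] = sum(profile[s] for s in subjects) / sum(profile.values())
    return sorted(scores, key=scores.get, reverse=True)

print(rank_collections(["arthritis", "diabetes"]))
```

Because scores are normalised by sample size rather than collection size, the ranking needs no estimate of how large each engine's index is — the property the thesis highlights over ReDDE.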

Relevance: 30.00%

Abstract:

With the advent of Service-Oriented Architecture, Web services have gained tremendous popularity. With so many Web services available, finding an appropriate one for a given user requirement is a challenge. This warrants an effective and reliable process of Web service discovery, and a considerable body of research has emerged on methods to improve its accuracy in matching the best service. The discovery process typically suggests many individual services that each only partially fulfil the user's interest. Considering the semantic relationships of the words used to describe services, together with their input and output parameters, can lead to more accurate discovery; appropriate linking of the individually matched services should then fully satisfy the user's requirements. This research proposes to integrate a semantic model and a data-mining technique to enhance the accuracy of Web service discovery, through a novel three-phase methodology. The first phase performs match-making to find semantically similar Web services for a user query. To perform semantic analysis on the content of the Web Service Description Language document, a support-based latent semantic kernel is constructed using an innovative concept of binning and merging applied to a large quantity of text documents covering diverse domains of knowledge. The use of a generic latent semantic kernel constructed from a large number of terms helps to find hidden meanings of query terms that could not otherwise be found. Sometimes a single Web service cannot fully satisfy the user's requirement; in such cases, a composition of multiple inter-related Web services is presented to the user. Checking the possibility of linking multiple Web services is the task of the second phase. Once the feasibility of linking Web services has been checked, the objective is to provide the user with the best composition of Web services. In this link-analysis phase, Web services are modelled as nodes of a graph and an all-pairs shortest-path algorithm is applied to find the optimum path at minimum traversal cost. The third phase, system integration, combines the results of the preceding two phases using an original fusion algorithm in the fusion engine. Finally, the recommendation engine, an integral part of the system-integration phase, makes the final recommendations of individual and composite Web services to the user. To evaluate the performance of the proposed method, extensive experimentation was performed. Results of the proposed support-based semantic-kernel method of Web service discovery are compared with those of a standard keyword-based information-retrieval method and a clustering-based machine-learning method; the proposed method outperforms both. Experimental results and statistical analysis also show that the best Web service compositions are obtained by considering 10 to 15 of the Web services found in phase I for linking. Empirical results further show that the fusion engine boosts the accuracy of Web service discovery by systematically combining the inputs from the semantic analysis (phase I) and the link analysis (phase II). Overall, the accuracy of Web service discovery with the proposed method shows a significant improvement over traditional discovery methods.
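The link-analysis phase models services as graph nodes and applies an all-pairs shortest-path algorithm. A minimal sketch using Floyd-Warshall (a standard all-pairs algorithm; the thesis does not specify which one it uses), with hypothetical service names and composition costs — an edge means one service's output can feed another's input:

```python
INF = float("inf")

def floyd_warshall(nodes, edges):
    """All-pairs shortest paths with next-hop tables for path reconstruction."""
    dist = {(u, v): (0 if u == v else INF) for u in nodes for v in nodes}
    nxt = {}
    for u, v, w in edges:
        dist[u, v] = w
        nxt[u, v] = v
    for k in nodes:
        for i in nodes:
            for j in nodes:
                if dist[i, k] + dist[k, j] < dist[i, j]:
                    dist[i, j] = dist[i, k] + dist[k, j]
                    nxt[i, j] = nxt[i, k]
    return dist, nxt

def path(nxt, u, v):
    """Reconstruct the cheapest service chain from u to v, or None if unreachable."""
    if (u, v) not in nxt:
        return None
    p = [u]
    while u != v:
        u = nxt[u, v]
        p.append(u)
    return p

# Hypothetical services; weights are illustrative composition costs.
nodes = ["GeoCode", "Weather", "Translate", "Report"]
edges = [("GeoCode", "Weather", 1), ("Weather", "Report", 2),
         ("GeoCode", "Translate", 4), ("Translate", "Report", 1)]
dist, nxt = floyd_warshall(nodes, edges)
print(path(nxt, "GeoCode", "Report"), dist["GeoCode", "Report"])
```

The returned path is the composition offered to the user; the recommendation engine described above would then fuse such paths with the phase-I semantic matches.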

Relevance: 30.00%

Abstract:

Archaeology provides a framework of analysis and interpretation that is useful for disentangling the textual layers of a contemporary lived-in urban space. The producers and readers of texts may include those who planned and developed the site and those who now live, visit and work there. Some of the social encounters and content sharing between these people may be artificially produced or manufactured in the hope that certain social situations will occur. Others may be serendipitous. With archaeology's original focus on places that are no longer inhabited, it is often only the remaining artefacts and features of the built environment that form the basis for interpreting the social relationships of past people. Our analysis, however, is framed within a contemporary notion of archaeological artefacts in an urban setting. Unlike an excavation, where the past is revealed through digging into the landscape, the application of landscape archaeology within a present-day urban context is necessarily more experiential and visual, based on recording and analysing the physical traces of social encounters and relationships between residents and visitors. These physical traces are present within the creative content and the built and natural elements of the environment. This chapter explores notions of social encounters and content sharing in an urban village by analysing three different types of texts: the design of the built environment; content produced by residents through a geospatial web application; and print and online media produced in digital storytelling workshops.

Relevance: 30.00%

Abstract:

Alvin Toffler’s image of the prosumer (1970, 1980, 1990) continues to significantly influence our understanding of the user-led, collaborative processes of content creation which are today labelled “social media” or “Web 2.0”. A closer look at Toffler’s own description of his prosumer model reveals, however, that it remains firmly grounded in the mass media age: the prosumer is clearly not the self-motivated creative originator and developer of new content who can today be observed in projects ranging from open source software through Wikipedia to Second Life, but simply a particularly well-informed, and therefore both particularly critical and particularly active, consumer. The highly specialised, high-end consumers found in areas such as hi-fi or car culture are far more representative of the ideal prosumer than the participants in non-commercial (or as yet non-commercial) collaborative projects. And it was always unrealistic, of course, to expect Toffler’s 1970s model of the prosumer to describe these 21st-century phenomena. To describe the creative and collaborative participation which today characterises user-led projects such as Wikipedia, terms such as ‘production’ and ‘consumption’ are no longer particularly useful – even in laboured constructions such as ‘commons-based peer-production’ (Benkler 2006) or ‘p2p production’ (Bauwens 2005). In the user communities participating in such forms of content creation, roles as consumers and users have long begun to be inextricably interwoven with those as producers and creators: users are always already also able to be producers of the shared information collection, regardless of whether they are aware of that fact – they have taken on a new, hybrid role which may be best described as that of a produser (Bruns 2008).
Projects which build on such produsage can be found in areas from open source software development through citizen journalism to Wikipedia, and beyond this also in multi-user online computer games, filesharing, and even in communities collaborating on the design of material goods. While addressing a range of different challenges, they nonetheless build on a small number of universal key principles. This paper documents these principles and indicates the possible implications of this transition from production and prosumption to produsage.

Relevance: 30.00%

Abstract:

Driven and supported by Web 2.0 technologies, there is today a trend towards merging the use and production of content as produsage (German: Produtzung). To ensure the quality of the content created and sustainable participation by users, four fundamental principles must be observed:

* The greatest possible openness.
* Seeding the community with content and tools.
* Supporting group dynamics and devolving responsibility.
* No exploitation of the community and its work.

Relevance: 30.00%

Abstract:

China’s accession to the World Trade Organisation (WTO) has greatly enhanced global interest in investment in the Chinese media market, where demand for digital content is growing rapidly. The East Asian region is positioned as a growth area in many forms of digital content and digital service industries. China is attempting to catch up and take its place as a production centre to offset challenges from neighbouring countries. Meanwhile, Taiwan is seeking to use China both as an export market and as a production site for its digital content. This research investigates entry strategies of Taiwanese digital content firms into the Chinese market. By examining the strategies of a sample of Taiwan-based companies, this study also explores the evolution of their market strategies. However, the focus is on how distinctive business practices such as guanxi are important to Taiwanese business and to relations with Mainland China. This research examines how entrepreneurs manage the characteristics of digital content products and in turn how digital content entrepreneurs adapt to changing market circumstances. This project selected five Taiwan-based digital content companies that have business operations in China: Wang Film, Artkey, CnYES, Somode and iPartment. The study involved a field trip, undertaken between November 2006 and March 2007 to Shanghai and Taiwan to conduct interviews and to gather documentation and archival reports. Six senior managers and nine experts were interviewed. Data were analysed according to Miller’s firm-level entrepreneurship theory, foreign direct investment theory, Life Cycle Model and guanxi philosophy. Most studies of SMEs have focused on free market (capitalist) environments. In contrast, this thesis examines how Taiwanese digital content firms’ strategies apply in the Chinese market. 
I identified three main types of business strategy: cost reduction, innovation and quality enhancement; and four categories of functional strategy: product, marketing, resource acquisition and organizational restructuring. In this study, I introduce the concept of ‘entrepreneurial guanxi’: special relationships that imply mutual obligation, assurance and understanding, used to secure and exchange favours in entrepreneurial activities. While guanxi features in many studies of business in pan-Chinese society, it plays a particularly important mediating role in the digital content industries. In this thesis, I integrate the ‘Life Cycle Model’ with a dynamic concept of strategy and outline significant differences in the evolution of strategy between two types of digital content companies: off-line firms (Wang Film and Artkey) and web-based firms (CnYES, Somode and iPartment). Off-line digital content firms tended to adopt resource-acquisition strategies in their initial stage and marketing strategies in subsequent stages. In contrast, web-based digital content companies mainly adopted product and marketing strategies in their early stages and took innovative approaches to both throughout their business development; some also adopted organizational-restructuring strategies in the final stage. Finally, I propose a ‘Taxonomy Matrix of Entrepreneurial Strategies’ built on two dimensions: innovation, and the firm’s resource acquisition for entrepreneurial strategy. The matrix is divided into four cells: Effective, Bounded, Conservative, and Impoverished.

Relevance: 30.00%

Abstract:

Young drivers aged 17-24 are consistently overrepresented in motor vehicle crashes. Research has shown that a young driver’s crash risk increases when carrying similarly aged passengers, with fatal crash risk increasing two to three fold with two or more passengers. Recent growth in access to and use of the internet has led to a corresponding increase in the number of web based behaviour change interventions. An increasing body of literature describes the evaluation of web based programs targeting risk behaviours and health issues. Evaluations have shown promise for such strategies with evidence for positive changes in knowledge, attitudes and behaviour. The growing popularity of web based programs is due in part to their wide accessibility, ability for personalised tailoring of intervention messages, and self-direction and pacing of online content. Young people are also highly receptive to the internet and the interactive elements of online programs are particularly attractive. The current study was designed to assess the feasibility for a web based intervention to increase the use of personal and peer protective strategies among young adult passengers. An extensive review was conducted on the development and evaluation of web based programs. Year 12 students were also surveyed about their use of the internet in general and for health and road safety information. All students reported internet access at home or at school, and 74% had searched for road safety information. Additional findings have shown promise for the development of a web based passenger safety program for young adults. Design and methodological issues will be discussed.

Relevance: 30.00%

Abstract:

This paper reports results from a study in which we automatically classified the query-reformulation patterns of 964,780 Web searching sessions (comprising 1,523,072 queries) in order to predict what the next query reformulation would be. We employed an n-gram modeling approach to describe the probability of searchers transitioning from one query-reformulation state to another and to predict their next state. We developed first-, second-, third-, and fourth-order models and evaluated each for accuracy of prediction. Findings show that Reformulation and Assistance account for approximately 45 percent of all query reformulations. Searchers seem to seek system search assistance early in the session or after a content change. Our evaluations show that the first- and second-order models provided the best predictability: between 28 and 40 percent overall, and higher than 70 percent for some patterns. The implication is that the n-gram approach can be used to improve searching systems and search assistance in real time.
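The n-gram transition model described here can be sketched as follows. The session data and state labels below are illustrative, not the paper's actual taxonomy; a first-order (bigram) model conditions the next reformulation state on the single previous one, and higher orders simply widen the context window:

```python
from collections import Counter, defaultdict

# Hypothetical session logs: each session is a sequence of reformulation states.
sessions = [
    ["New", "Specialization", "Specialization", "Assistance"],
    ["New", "Specialization", "Assistance"],
    ["New", "Generalization", "New"],
    ["New", "Specialization", "Specialization", "Specialization"],
]

def train_ngram(sessions, n=2):
    """Count n-gram transitions: (previous n-1 states) -> next state."""
    model = defaultdict(Counter)
    for s in sessions:
        for i in range(len(s) - n + 1):
            context, nxt = tuple(s[i:i + n - 1]), s[i + n - 1]
            model[context][nxt] += 1
    return model

def predict(model, context):
    """Most probable next state for the given context, or None if unseen."""
    counts = model.get(tuple(context))
    return counts.most_common(1)[0][0] if counts else None

bigram = train_ngram(sessions, n=2)
print(predict(bigram, ["Specialization"]))
```

A searching system could use such predictions to pre-fetch results or offer assistance at the moments the paper identifies (early in the session, or after a content change).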