884 results for web content


Relevance:

30.00%

Publisher:

Abstract:

This dissertation addresses aspects of communication practices in the context of health communication, focusing on the discourses established in the national web portals of the Cardiol and Diabetes scientific societies. The time frame covers the period from 1 September to 1 December 2008. The methodology is qualitative and attends primarily to the text, to its latent (implied) content, and to its manifest language. The presentation of the layout and selected usability criteria of the pages are also examined. The study is grounded in the perspective of French Discourse Analysis (DA), and other interdisciplinary theoretical approaches inform the reflections. The proposal of embedding a discourse of disease prevention and health promotion, in its broadest sense and in current debates, appears promising for describing these representations across the various stages of human and sociocultural development. There are indications that health promotion is broadening its scope, coming to connect life, health, solidarity, equity, democracy, citizenship, development, participation, and the intention of partnership with all individuals and segments of society. The analyzed samples indicate that in the utterances, understood as real units of discursive communication, the editors speak on behalf of the specialist, thereby also characterizing the texts as a scientific genre.

Relevance:

30.00%

Publisher:

Abstract:

The main argument of this paper is that Natural Language Processing (NLP) does, and will continue to, underlie the Semantic Web (SW), including its initial construction from unstructured sources like the World Wide Web (WWW), whether its advocates realise this or not. Chiefly, we argue, such NLP activity is the only way up to a defensible notion of meaning at conceptual levels (in the original SW diagram) based on lower level empirical computations over usage. Our aim is definitely not to claim logic-bad, NLP-good in any simple-minded way, but to argue that the SW will be a fascinating interaction of these two methodologies, again like the WWW (which has been basically a field for statistical NLP research) but with deeper content. Only NLP technologies (and chiefly information extraction) will be able to provide the requisite RDF knowledge stores for the SW from existing unstructured text databases in the WWW, and in the vast quantities needed. There is no alternative at this point, since a wholly or mostly hand-crafted SW is also unthinkable, as is a SW built from scratch and without reference to the WWW. We also assume that, whatever the limitations on current SW representational power we have drawn attention to here, the SW will continue to grow in a distributed manner so as to serve the needs of scientists, even if it is not perfect. The WWW has already shown how an imperfect artefact can become indispensable.
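
To make the information-extraction route concrete, the following sketch (not taken from the paper) assumes an upstream IE step has already produced subject-relation-object tuples from web text and shows how such output could be poured into an RDF knowledge store using the rdflib library; the namespace and facts are hypothetical.

```python
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/extracted/")  # hypothetical namespace

# Hypothetical output of an information-extraction pipeline run over web text.
extracted_facts = [
    ("Tim_Berners-Lee", "invented", "World_Wide_Web"),
    ("World_Wide_Web", "builds_on", "Internet"),
]

g = Graph()
g.bind("ex", EX)
for subj, rel, obj in extracted_facts:
    g.add((EX[subj], EX[rel], EX[obj]))

# Serialise the resulting knowledge store in Turtle (RDF/XML works the same way).
print(g.serialize(format="turtle"))
```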

Relevance:

30.00%

Publisher:

Abstract:

E-atmospherics have often been analyzed in terms of functional features, leaving their link to the co-creation of social capital as a fertile research area. Prior research has demonstrated the capacity of e-atmospherics to modify shopping habits towards deeper engagement. Little is known, however, about how processes and cues emerging from the social aspects of lifestyle influence purchasing behavior. The anatomy of the social dimension and ICT is the focus of this research, where attention is devoted to unpacking the meanings and types of mundane online social capital creation. Taking a cross-product/services approach to better investigate the impact of social construction, our study also involves both an emerging and a mature market: exploratory content analyses of landing pages are conducted on Turkish and French web sites, respectively. We contend that by comprehending social capital and daily micro-practices, habits, and routines, a better and deeper understanding of the current and potential effects of e-atmospherics on multinational e-customers can be acquired.

Relevance:

30.00%

Publisher:

Abstract:

With the recent rapid growth of the Semantic Web (SW), the processes of searching and querying content that is both massive in scale and heterogeneous have become increasingly challenging. User-friendly interfaces, which can support end users in querying and exploring this novel and diverse, structured information space, are needed to make the vision of the SW a reality. We present a survey on ontology-based Question Answering (QA), which has emerged in recent years to exploit the opportunities offered by structured semantic information on the Web. First, we provide a comprehensive perspective by analyzing the general background and history of the QA research field, from influential works from the artificial intelligence and database communities developed in the 1970s and later decades, through open-domain QA stimulated by the QA track in TREC since 1999, to the latest commercial semantic QA solutions, before tackling the current state of the art in open, user-friendly interfaces for the SW. Second, we examine the potential of this technology to go beyond the current state of the art to support end users in reusing and querying SW content. We conclude our review with an outlook for this novel research area, focusing in particular on the R&D directions that need to be pursued to realize the goal of efficient and competent retrieval and integration of answers from large-scale, heterogeneous, and continuously evolving semantic sources.

Relevance:

30.00%

Publisher:

Abstract:

This work investigates the process of selecting, extracting, and reorganizing content from Semantic Web information sources to produce an ontology that meets the specifications of a particular domain and/or task. The process is combined with traditional text-based ontology learning methods to achieve tolerance to knowledge incompleteness. The paper describes the approach and presents experiments in which an ontology was built for a diet evaluation task. Although the example presented concerns the specific case of building a nutritional ontology, the methods employed are domain independent and transferable to other use cases. © 2011 ACM.
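
As a rough illustration of the selection-and-extraction step (not the paper's actual method), the following sketch filters a source ontology fetched from the Web down to the resources whose labels match a hypothetical diet-related vocabulary; the source URL and term list are assumptions, and every matching resource is copied as a class for simplicity.

```python
from rdflib import Graph, OWL, RDF, RDFS

DOMAIN_TERMS = {"food", "nutrient", "meal", "diet"}      # hypothetical task vocabulary

source = Graph()
source.parse("http://example.org/source-ontology.owl")   # hypothetical source ontology

task_ontology = Graph()
for resource, _, label in source.triples((None, RDFS.label, None)):
    if any(term in str(label).lower() for term in DOMAIN_TERMS):
        # Simplification: copy each matching resource as a class with its label.
        task_ontology.add((resource, RDF.type, OWL.Class))
        task_ontology.add((resource, RDFS.label, label))
        # Keep the subclass links of the selected resources as well.
        for _, _, parent in source.triples((resource, RDFS.subClassOf, None)):
            task_ontology.add((resource, RDFS.subClassOf, parent))

print(len(task_ontology), "triples selected for the diet-evaluation task")
```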

Relevance:

30.00%

Publisher:

Abstract:

The usability of research papers on the Web would be enhanced by a system that explicitly modelled the rhetorical relations between claims in related papers. We describe ClaiMaker, a system for modelling readers’ interpretations of the core content of papers. ClaiMaker provides tools to build a Semantic Web representation of the claims in research papers using an ontology of relations. We demonstrate how the system can be used to make inter-document queries.
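
As a toy illustration of inter-document claim queries (not ClaiMaker's data model or its ontology of relations), the following sketch tags each claim with its source paper, links claims by a relation type, and retrieves linked claims from other papers; all identifiers and claim texts are invented.

```python
# Hypothetical claims and rhetorical relations between them.
claims = {
    "c1": {"paper": "smith2002", "text": "Ontologies improve retrieval."},
    "c2": {"paper": "jones2003", "text": "Ontology cost outweighs retrieval gains."},
    "c3": {"paper": "lee2004",   "text": "Lightweight ontologies are cheap to build."},
}
relations = [                     # (from_claim, relation, to_claim)
    ("c2", "refutes", "c1"),
    ("c3", "supports", "c1"),
]

def related_claims(claim_id, relation_type):
    """Return claims in *other* papers linked to claim_id by relation_type."""
    home_paper = claims[claim_id]["paper"]
    return [
        (src, claims[src]["text"])
        for src, rel, dst in relations
        if dst == claim_id and rel == relation_type
        and claims[src]["paper"] != home_paper
    ]

print(related_claims("c1", "refutes"))   # -> [('c2', 'Ontology cost outweighs retrieval gains.')]
```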

Relevance:

30.00%

Publisher:

Abstract:

The expansion of the Internet has made the task of searching a crucial one. Internet users, however, have to make a great effort in order to formulate a search query that returns the required results. Many methods have been devised to assist in this task by helping users modify their query to give better results. In this paper we propose an interactive method for query expansion. It is based on the observation that documents often contain terms with high information content which can summarise their subject matter. We present experimental results which demonstrate that our approach significantly shortens the time required to accomplish a given task through web searches.
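
One way such an interactive expansion step could work is sketched below, under the assumption that term informativeness is approximated by a tf-idf-style score over the snippets returned by an initial search (the paper's exact scoring is not given in the abstract); the query and snippets are hypothetical.

```python
import math
from collections import Counter

def expansion_candidates(result_docs, original_query, k=5):
    """Suggest k expansion terms from retrieved documents, scored tf-idf-style."""
    query_terms = set(original_query.lower().split())
    doc_tokens = [doc.lower().split() for doc in result_docs]
    df = Counter(term for tokens in doc_tokens for term in set(tokens))
    n_docs = len(doc_tokens)

    scores = Counter()
    for tokens in doc_tokens:
        tf = Counter(tokens)
        for term, freq in tf.items():
            if term in query_terms or len(term) < 4:
                continue                    # skip query words and short stopword-ish tokens
            scores[term] += freq * math.log(n_docs / df[term] + 1.0)

    return [term for term, _ in scores.most_common(k)]

# Hypothetical snippets returned by a first web search for "jaguar speed":
docs = [
    "the jaguar is the largest cat in the americas and reaches high running speed",
    "jaguar cars top speed figures for the classic british marque",
]
print(expansion_candidates(docs, "jaguar speed"))   # terms the user can add interactively
```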

Relevance:

30.00%

Publisher:

Abstract:

One of the main characteristics of the world we live in is access to information, and one of the main ways to reach that information is the Internet. Most Internet sites treat accessibility as a secondary concern. If we try to define this concept, we could say that accessibility is a way of offering access to information to people with disabilities. For example, blind people cannot navigate the Internet in the same way as sighted users; Internet sites therefore have to provide means of making their content available to these users. Accessibility does not refer only to blind people: web accessibility concerns all people whose ability to access web site content is impaired, including those with hearing, neurological, and cognitive disabilities. People with low-speed Internet connections or low-performance computers also benefit from web accessibility.

Relevance:

30.00%

Publisher:

Abstract:

Electronic publishing exploits numerous possibilities to present or exchange information and to communicate via current media such as the Internet. By utilizing modern Web technologies such as Web Services, loosely coupled services, and peer-to-peer networks, we describe the integration of an intelligent business news presentation and distribution network. Employing semantic technologies enables the coupling of multinational and multilingual business news data on a scalable international level and thus introduces a quality of service that has not so far been achieved by alternative technologies in the news distribution area. Architecturally, we identified the loose coupling of existing services as the most feasible way to address multinational and multilingual news presentation and distribution networks. Furthermore, we semantically enrich multinational news content by relating items using AI techniques such as the Vector Space Model. Summarizing our experiences, we describe the technical integration of semantic and communication technologies in order to create a modern international news network.
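
The Vector Space Model step mentioned above can be illustrated with the minimal sketch below (not the system's implementation): news items become term-frequency vectors and are related by cosine similarity; the snippets are invented and assumed to be already mapped into a common term space by a translation or cross-lingual indexing step.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def vectorise(text):
    return Counter(text.lower().split())

# Hypothetical news snippets in a shared term space.
news_a = vectorise("central bank raises interest rates to curb inflation")
news_b = vectorise("inflation pressure forces central bank to raise rates")
news_c = vectorise("football club announces new stadium sponsorship deal")

print(cosine(news_a, news_b))   # higher: the two items should be related
print(cosine(news_a, news_c))   # zero: unrelated items
```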

Relevance:

30.00%

Publisher:

Abstract:

With the development of Internet culture, applications are becoming simpler and simpler, and users need less IT knowledge than before; from the status of 'reader' they have advanced to that of content creator and editor. Nowadays the effects of the web are growing stronger and stronger; computer-aided work is commonplace almost everywhere. The spread of Internet applications has several reasons: first of all, they are widely accessible; second, their use is not limited to the single computer or network on which they were installed. Also, the quantity of information accessible today is not even comparable with that of earlier years. Apart from applications that need high bandwidth or heavy computing capacity (for example, video editing), Internet applications are reaching the functionality of their thick-client counterparts. The most serious disadvantage of Internet applications, imposed for security reasons, is that the resources of the client computer are not fully accessible, or are accessible only to a restricted extent. Still, thick clients do have some advantages: better multimedia performance and more flexibility thanks to local resources, as well as the possibility of working offline.

Relevance:

30.00%

Publisher:

Abstract:

Malapropism is a semantic error that is hard to detect because it usually retains the syntactic links between words in a sentence while replacing one content word with a similar word of quite different meaning. A method for the automatic detection of malapropisms is described, based on Web statistics and a specially defined Semantic Compatibility Index (SCI). For correction of the detected errors, special dictionaries and heuristic rules are proposed, which retain only a few highly SCI-ranked correction candidates for the user's selection. Experiments on Web-assisted detection and correction of Russian malapropisms are reported, demonstrating the efficacy of the described method.
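
The abstract does not give the actual SCI formula, so the sketch below stands in for it with a PMI-style compatibility score computed over hypothetical web hit counts: a content word whose compatibility with its context is unusually low is flagged as a suspected malapropism. The word pairs and counts are made up for illustration.

```python
import math

# Hypothetical page-hit counts that a real system would obtain from a web search engine.
HITS = {"travel": 9_000_000, "busy": 12_000_000, "bushy": 800_000}
PAIR_HITS = {("busy", "travel"): 150_000, ("bushy", "travel"): 40}
TOTAL_PAGES = 1_000_000_000

def compatibility(word, context_word):
    """PMI-style semantic compatibility of a word with one context word."""
    joint = PAIR_HITS.get((word, context_word), 0) + 1   # +1 smoothing
    expected = HITS[word] * HITS[context_word] / TOTAL_PAGES
    return math.log(joint / expected)

print(compatibility("busy", "travel"))    # plausible collocation: higher score
print(compatibility("bushy", "travel"))   # suspected malapropism: much lower score
```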

Relevance:

30.00%

Publisher:

Abstract:

Due to the rapid growth in the number of digital media elements such as images, video, audio, and graphics on the Internet, there is an increasing demand for effective search and retrieval techniques. Recently, many search engines, such as Google, AlltheWeb, AltaVista, and Freenet, have added image search as an option, while Ditto and Picsearch search only images on the Internet. There are also domain-specific search engines for graphics and clip art, audio, video, educational images, artwork, stock photos, and science and nature [www.faganfinder.com/img]. All of these search engines are directory based: they crawl the Internet and index images into certain categories, but they do not display the images in any particular order with respect to time or context. With the availability of MPEG-7, a standard for describing multimedia content, it is now possible to store images together with their metadata in a structured format, which helps in searching and retrieving them. The MPEG-7 standard uses XML to describe the content of multimedia information objects. These objects carry metadata in MPEG-7 or a similar format, which can be used in different ways to search for the objects. In this paper we propose a system that performs content-based image retrieval on the World Wide Web and displays the results in a user-defined order.
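
The sketch below illustrates the general idea of metadata-driven retrieval with user-defined ordering; the XML uses simplified, MPEG-7-like element names rather than the exact MPEG-7 schema, and the catalogue entries are invented.

```python
import xml.etree.ElementTree as ET

CATALOGUE = """
<Images>
  <Image uri="http://example.org/sunset.jpg">
    <FreeTextAnnotation>sunset over the sea</FreeTextAnnotation>
    <CreationDate>2004-06-01</CreationDate>
  </Image>
  <Image uri="http://example.org/harbour.jpg">
    <FreeTextAnnotation>boats in the harbour at sunset</FreeTextAnnotation>
    <CreationDate>2003-11-20</CreationDate>
  </Image>
</Images>
"""

def search_images(xml_text, keyword, order_by="CreationDate", newest_first=True):
    """Return image URIs whose annotation contains keyword, in user-chosen order."""
    root = ET.fromstring(xml_text)
    hits = [
        img for img in root.findall("Image")
        if keyword in img.findtext("FreeTextAnnotation", default="").lower()
    ]
    hits.sort(key=lambda img: img.findtext(order_by, default=""), reverse=newest_first)
    return [img.get("uri") for img in hits]

print(search_images(CATALOGUE, "sunset"))   # newest matching image first
```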

Relevance:

30.00%

Publisher:

Abstract:

Our research explores the possibility of categorizing webpages and webpage genre by structure or layout. Based on our results, we believe that webpage structure could play an important role, along with textual and visual keywords, in webpage categorization and searching.
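
As an illustration of the kind of structural signal involved (not the authors' feature set or classifier), the sketch below counts layout-related tags in a page so that the resulting feature vector could be fed to any standard classifier alongside textual and visual keywords.

```python
from collections import Counter
from html.parser import HTMLParser

class StructureProfiler(HTMLParser):
    LAYOUT_TAGS = {"table", "form", "img", "ul", "li", "h1", "h2", "a", "input"}

    def __init__(self):
        super().__init__()
        self.counts = Counter()

    def handle_starttag(self, tag, attrs):
        if tag in self.LAYOUT_TAGS:
            self.counts[tag] += 1

def structure_features(html_text):
    """Tag-count feature vector describing the page's structure."""
    profiler = StructureProfiler()
    profiler.feed(html_text)
    return dict(profiler.counts)

page = "<html><body><form><input/><input/></form><table><tr><td>x</td></tr></table></body></html>"
print(structure_features(page))   # e.g. a form-heavy page hints at a search/login genre
```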

Relevance:

30.00%

Publisher:

Abstract:

In this paper the key features of a two-layered model for describing the semantics of dynamic web resources are introduced. In the current Semantic Web proposal [Berners-Lee et al., 2001], web resources are classified into static ontologies that describe the semantic network of their inter-relationships [Kalianpur, 2001][Handschuh & Staab, 2002], together with complex constraints expressed as quantified logical formulas [Boley et al., 2001][McGuinnes & van Harmelen, 2004][McGuinnes et al., 2004]; the basic idea is that software agents can use automatic reasoning techniques to relate resources and to support sophisticated web applications. On the other hand, web resources are also characterized by dynamic aspects, which are not adequately addressed by current web models. Resources on the web are dynamic since, at a minimum, they can appear on or disappear from the web and their content is updated. In addition, resources can traverse different states, which characterize the resource life-cycle, with each state corresponding to different possible uses of the resource. Finally, most resources are timed, i.e. the information they provide makes sense only if contextualised with respect to time, and their validity and accuracy are strongly bounded by time. Temporal projection and deduction based on the dynamic and time constraints of resources can be performed and exploited by software agents [Hendler, 2001] in order to make predictions about the availability and state of a resource, to decide when to consult the resource itself, or to deliberately induce a resource state change in pursuit of some agent goal, as in the automated planning framework [Fikes & Nilsson, 1971][Bacchus & Kabanza, 1998].
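
A minimal sketch of the dynamic layer discussed above (not the paper's formal model): each resource carries a life-cycle state and a validity interval, and an agent uses them for a simple form of temporal projection; the resource, states, and dates are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class DynamicResource:
    uri: str
    state: str                 # e.g. "draft" -> "published" -> "archived"
    valid_from: datetime
    valid_until: datetime

    def usable_at(self, when: datetime) -> bool:
        """Is the resource in a consultable state and temporally valid at `when`?"""
        return self.state == "published" and self.valid_from <= when <= self.valid_until

now = datetime(2005, 3, 1, 12, 0)
timetable = DynamicResource(
    uri="http://example.org/train-timetable",    # hypothetical resource
    state="published",
    valid_from=now - timedelta(days=30),
    valid_until=now + timedelta(days=30),
)

# Temporal projection in its simplest form: worth consulting now, and still valid
# at a future time the agent cares about?
print(timetable.usable_at(now))                        # True
print(timetable.usable_at(now + timedelta(days=90)))   # False: plan to re-fetch later
```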

Relevance:

30.00%

Publisher:

Abstract:

Search engines sometimes apply a search to the full text of documents or web pages, but they can also apply it only to selected parts of the documents, e.g. their titles. Full-text search may consume a lot of computing resources and time. It may be possible to save resources by applying the search to document titles only, assuming that the title of a document provides a concise representation of its content. We tested this assumption using the Google search engine. We ran search queries defined by users, distinguishing between two types of queries/users: queries by users who are familiar with the area of the search, and queries by users who are not. We found that searches using titles provide similar, and sometimes even slightly better, results compared with searches using the full text. These results hold for both types of queries/users. Moreover, we found an advantage for title search when searching in unfamiliar areas, because the general terms used in queries in unfamiliar areas match better the general terms that tend to be used in document titles.
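
As a toy illustration of the comparison being made (the study itself used the Google search engine; here a tiny in-memory corpus stands in for it), the sketch below scores the same query against titles only and against full text.

```python
# Hypothetical two-document corpus: a relevant document and a distractor.
corpus = [
    {"title": "Cooking with seasonal vegetables",
     "body": "Machine washing of greens and learning to balance flavours."},
    {"title": "Introduction to machine learning",
     "body": "A survey of supervised and unsupervised methods with examples."},
]

def score(query, text):
    """Number of query terms that occur in the text (a crude relevance proxy)."""
    terms = query.lower().split()
    return sum(term in text.lower() for term in terms)

query = "machine learning"
title_ranking = sorted(corpus, key=lambda d: score(query, d["title"]), reverse=True)
full_ranking  = sorted(corpus, key=lambda d: score(query, d["title"] + " " + d["body"]), reverse=True)

print([d["title"] for d in title_ranking])   # title search ranks the relevant document first
print([d["title"] for d in full_ranking])    # full-text search ties, leaving the distractor on top
```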