941 results for web content
Abstract:
Current model-driven Web Engineering approaches (such as OO-H, UWE, or WebML) provide a set of methods and supporting tools for the systematic design and development of Web applications. Each method addresses different concerns using separate models (content, navigation, presentation, business logic, etc.) and provides model compilers that produce most of the application's logic and Web pages from these models. However, these proposals also have some limitations, especially when it comes to exchanging models or representing further modeling concerns, such as architectural styles, technology independence, or distribution. A possible solution to these issues is to make model-driven Web Engineering proposals interoperate, so that they can complement each other and exchange models between their different tools. MDWEnet is a recent initiative started by a small group of researchers working on model-driven Web Engineering (MDWE). Its goal is to improve current practices and tools for the model-driven development of Web applications towards better interoperability. The proposal builds on the strengths of current model-driven Web Engineering methods and on the existing experience and knowledge in the field. This paper presents the background, motivation, scope, and objectives of MDWEnet. Furthermore, it reports on MDWEnet's results and achievements so far, and its future plan of action.
Abstract:
Using the Internet as a learning medium, where countless educational resources are created, shared, and found, is a reality that is consolidating day by day. In this article we present the evolution, in terms of features, use, and reach, of the online learning platform EDUTIC-WQ. This platform, which provides an online application to create, design, share, and consult WebQuests, was created as an authoring tool in 2004 within the EDUTIC-ADEI research group at the Universidad de Alicante, and it has since been reoriented towards the Web 2.0 philosophy. EDUTIC-WQ has far exceeded all expectations of use and consultation: approximately 4 million pages were consulted in 2011, and it has served 647 gigabytes of information consulted by more than 1 million distinct users.
Abstract:
Introduction: To analyze the quality of the websites of school catering services and their food education content, and to gain first experience with the EDALCAT evaluation tool. Material and methods: Cross-sectional descriptive study. The study population consists of the websites of catering companies that manage school canteens. The sample was obtained using the Google search engine and a ranking of the main catering companies by revenue, selecting those that had a website. For the pilot test, ten websites were selected according to geographical proximity to the city of Alicante and revenue level. To evaluate the websites, a questionnaire (EDALCAT) was designed, composed of a first block of quality predictors with 19 variables on reliability, design, and navigation, and a second block of specific food education contents with 19 variables on content and educational activities. Results: Positive results were obtained in 31 of the 38 questionnaire variables, the exceptions being the items "Search engine", "Language" (40%) and "Help" (10%) in the quality predictors block, and the items "Workshops", "Recipe collection", "Food and nutrition website" (40%) and "Examples" (30%) in the block of specific food education contents. All the websites evaluated exceeded 50% compliance with the quality criteria and minimum food education contents, and only one of them failed to meet the established minimum activity level. Conclusions: The quality predictors and the specific food education contents gave good results for all the websites evaluated. Most of them scored highly in the overall assessment and in the block-by-block analysis. After the pilot study the questionnaire was modified, yielding the definitive EDALCAT. In general terms, EDALCAT seems suitable for evaluating the quality of catering service websites and their food education content; however, the present study cannot be considered a validation of the tool.
Abstract:
Antena 3 is one of the most important television networks in Spain and was the pioneering Spanish network in its commitment to new technologies. It is therefore of interest to analyze its website to determine its content offering and the tools used to interact with its audiences, also identifying its use of social networks. To this end, a qualitative analysis of the website is carried out, examining twenty-one variables focused on the fields described (content, interactivity, and social networks). These were established by combining parameters formulated in the television website analysis proposal of Codina, Aubia and Sánchez (2008) and the interaction indicators described by Rodríguez-Martínez, Codina and Pedraza-Jiménez (2012). These variables make it possible to identify Antena 3's shortcomings in managing its social profiles and in the interactivity generated through the virtual community formed around its website. Nevertheless, they also show how, despite this, antena3.com is a space in which the network offers users a complete and well-organized range of content, thereby committing to its distribution over the Internet.
Abstract:
This study establishes a bridge between Web 2.0 and crowdfunding. Using a dataset of campaigns from the Kickstarter platform, it shows that there is a relation between content creation and the money collected. In addition, the study explores society's understanding of these matters: a survey was conducted at a Higher Education Institution to assess awareness of crowdfunding and Web 2.0. The study began with a literature review supporting this theory, followed by two case studies: one built a model explaining the relation between Web 2.0 and crowdfunding campaigns, and the other examined society's awareness of crowdfunding and Web 2.0. Interesting conclusions were found, showing that these subjects are still taking their first steps and that there is a relation between some forms of content creation, through Web 2.0, and the money collected in a crowdfunding campaign.
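A minimal sketch of the kind of relation test the first case study describes, using invented campaign figures and a plain Pearson correlation (the paper's actual model is not specified here):

```python
# Illustrative sketch (not the paper's model): testing for a relation between
# content-creation signals and funds pledged, on hypothetical Kickstarter-style data.
from scipy.stats import pearsonr

# Hypothetical campaign records: (number of creator updates/videos, money collected)
updates = [2, 5, 1, 9, 4, 7, 0, 6]
pledged = [1200, 5400, 800, 9100, 3900, 7300, 300, 5100]

r, p_value = pearsonr(updates, pledged)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")  # a positive r suggests a relation
```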
Abstract:
The vision presented in this paper and its technical content are the result of close collaboration between several researchers from the University of Queensland, Australia, and the SAP Corporate Research Center, Brisbane, Australia. In particular, Dr Wasim Sadiq (SAP), Dr Shazia Sadiq (UQ), and Dr Karsten Schultz (SAP) are the prime contributors to the ideas presented. PhD students Mr Dat Ma Cao and Ms Belinda Carter are also involved in the research program. Additionally, the Australian Research Council Discovery Project Scheme and the Australian Research Council Linkage Project Scheme support some aspects of the research work towards the HMT solution.
Abstract:
Collaborative recommendation is one of the most widely used recommendation techniques; it recommends items to a visitor based on the preferences of other users who are similar to the current user. User profiling over Web transaction data can capture such informative knowledge about user tasks and interests. With the discovered usage pattern information, it becomes possible to recommend more relevant content to Web users, or to customize the Web presentation for visitors, via collaborative recommendation. In addition, it helps identify the underlying relationships among Web users, items, and latent tasks during Web mining. In this paper, we propose a Web recommendation framework based on user profiling. In this approach, we employ Probabilistic Latent Semantic Analysis (PLSA) to model co-occurrence activities and develop a modified k-means clustering algorithm to build user profiles as representatives of usage patterns. Moreover, the hidden task model is derived by characterizing the meaningful latent factor space. Given the discovered user profiles, we choose the most closely matched profile, i.e., the one whose preferences are most similar to the current user's, and make collaborative recommendations based on the page weights in the selected profile. Preliminary experimental results on real-world data sets show that the proposed approach can make recommendations accurately and efficiently.
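A compact sketch of the profile-based recommendation step, with standard k-means standing in for the paper's PLSA-driven modified variant, on hypothetical session data:

```python
# Minimal sketch: cluster user sessions into usage-pattern profiles, then
# recommend unseen pages from the closest profile's page weights.
import numpy as np
from sklearn.cluster import KMeans

# Rows: user sessions; columns: normalized page weights (invented data).
sessions = np.array([
    [0.9, 0.1, 0.0, 0.0],
    [0.8, 0.2, 0.0, 0.0],
    [0.0, 0.1, 0.7, 0.2],
    [0.0, 0.0, 0.8, 0.2],
])

# Cluster sessions into usage patterns; centroids act as user profiles.
profiles = KMeans(n_clusters=2, n_init=10, random_state=0).fit(sessions)

def recommend(active_session, top_k=2):
    """Pick the profile closest to the active user; rank unseen pages by its weights."""
    profile = profiles.cluster_centers_[profiles.predict([active_session])[0]]
    unseen = [(page, w) for page, w in enumerate(profile) if active_session[page] == 0]
    return sorted(unseen, key=lambda pw: -pw[1])[:top_k]

print(recommend([0.7, 0.3, 0.0, 0.0]))  # pages ranked by matched-profile weight
```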
Abstract:
A location-based search engine must be able to find and assign proper locations to Web resources. Host, content, and metadata location information are not sufficient to describe the location of resources, as they are ambiguous or unavailable for many documents. We introduce the target location, the location of the users of a Web resource. Target location is content-independent and can be applied to all types of Web resources. A novel method is introduced that uses log files and IP addresses to track the visitors of websites. The experiments show that a target location can be computed for almost all documents on the Web at the country level, and for the majority of them at the state and city levels. It can be assigned to Web resources as a new definition and dimension of location, used either on its own or together with other relevant locations to define the geography of Web resources. This compensates for insufficient geographical information on Web resources and would facilitate the design and development of location-based search engines.
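An illustrative sketch of the method's core idea, deriving a country-level target location from an access log; the `ip_to_country` lookup is a hypothetical stand-in for a real GeoIP database:

```python
# Sketch: the target location of a resource is the majority location of its visitors.
from collections import Counter

# Invented access-log entries: (visitor IP, requested URL)
access_log = [
    ("203.0.113.5", "/page.html"),
    ("198.51.100.7", "/page.html"),
    ("203.0.113.9", "/page.html"),
]

def ip_to_country(ip):
    # Placeholder for a real GeoIP lookup; this mapping is invented.
    return {"203.0.113.5": "AU", "203.0.113.9": "AU", "198.51.100.7": "US"}.get(ip, "??")

counts = Counter(ip_to_country(ip) for ip, url in access_log)
target_country, _ = counts.most_common(1)[0]
print(target_country)  # majority country of the resource's visitors -> "AU"
```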
Abstract:
This dissertation addresses aspects of communication practices in the context of health communication. Its focus is the discourses established on the national portals of the scientific societies Cardiol and Diabetes. The time frame covers the period from September 1 to December 1, 2008. The methodology employed is qualitative and attends primarily to the text, to the latent (insinuated) content, and to the manifest language. The layout presentation and some topics of page usability evaluation are also examined. The study is grounded in the perspective of French Discourse Analysis (AD). Other interdisciplinary theoretical approaches also inform the reflections. It is observed that the proposal of introducing a discourse of disease prevention and health promotion, in its broadest sense and in current debates, seems promising for describing these representations at the various stages of human and sociocultural development. There are indications that health promotion is broadening its scope, coming to connect life, health, solidarity, equity, democracy, citizenship, development, participation, and an intention of partnership with all individuals and segments. The samples analyzed indicate that in the utterances, understood as real units of discursive communication, the editors speak on behalf of the specialist, thereby also characterizing the texts as a scientific genre.
Abstract:
The main argument of this paper is that Natural Language Processing (NLP) does, and will continue to, underlie the Semantic Web (SW), including its initial construction from unstructured sources like the World Wide Web (WWW), whether its advocates realise this or not. Chiefly, we argue, such NLP activity is the only way up to a defensible notion of meaning at conceptual levels (in the original SW diagram) based on lower level empirical computations over usage. Our aim is definitely not to claim logic-bad, NLP-good in any simple-minded way, but to argue that the SW will be a fascinating interaction of these two methodologies, again like the WWW (which has been basically a field for statistical NLP research) but with deeper content. Only NLP technologies (and chiefly information extraction) will be able to provide the requisite RDF knowledge stores for the SW from existing unstructured text databases in the WWW, and in the vast quantities needed. There is no alternative at this point, since a wholly or mostly hand-crafted SW is also unthinkable, as is a SW built from scratch and without reference to the WWW. We also assume that, whatever the limitations on current SW representational power we have drawn attention to here, the SW will continue to grow in a distributed manner so as to serve the needs of scientists, even if it is not perfect. The WWW has already shown how an imperfect artefact can become indispensable.
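A toy illustration of the IE-to-RDF pipeline the paper envisages, using rdflib and invented URIs; a real system would extract such triples from unstructured text at scale:

```python
# Pretend an information-extraction step has produced a relation from the
# sentence "Brisbane is located in Australia"; store it as RDF for the SW.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")  # invented vocabulary
g = Graph()

g.add((EX.Brisbane, EX.locatedIn, EX.Australia))  # the extracted triple
g.add((EX.Brisbane, EX.label, Literal("Brisbane")))

print(g.serialize(format="turtle"))  # the resulting RDF knowledge store entry
```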
Abstract:
E-atmospherics have often been analyzed in terms of functional features, leaving the link between their characteristics and social capital co-creation as a fertile research area. Prior research has demonstrated the capacity of e-atmospherics to modify shopping habits towards deeper engagement. Little is known about how processes and cues emerging from the social aspects of lifestyle influence purchasing behavior. The anatomy of the social dimension and ICT is the focus of this research, where attention is devoted to unpacking the meanings and types of mundane online social capital creation. Taking a cross-product/services approach to better investigate the impact of social construction, our approach also involves both an emerging and a mature market: exploratory content analyses of landing pages are conducted on Turkish and French websites, respectively. We contend that by comprehending social capital and daily micro-practices, habits, and routines, a better and deeper understanding of the incumbent and potential effects of e-atmospherics on multinational e-customers will be acquired.
Abstract:
With the recent rapid growth of the Semantic Web (SW), the processes of searching and querying content that is both massive in scale and heterogeneous have become increasingly challenging. User-friendly interfaces, which can support end users in querying and exploring this novel and diverse structured information space, are needed to make the vision of the SW a reality. We present a survey on ontology-based Question Answering (QA), which has emerged in recent years to exploit the opportunities offered by structured semantic information on the Web. First, we provide a comprehensive perspective by analyzing the general background and history of the QA research field, from influential works from the artificial intelligence and database communities developed in the 70s and later decades, through open-domain QA stimulated by the QA track in TREC since 1999, to the latest commercial semantic QA solutions, before tackling the current state of the art in open user-friendly interfaces for the SW. Second, we examine the potential of this technology to go beyond the current state of the art and support end users in reusing and querying SW content. We conclude our review with an outlook for this novel research area, focusing in particular on the R&D directions that need to be pursued to realize the goal of efficient and competent retrieval and integration of answers from large-scale, heterogeneous, and continuously evolving semantic sources.
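A hand-wired sketch of ontology-based QA over a toy RDF graph (rdflib); a real system would translate the question into the query automatically, here the mapping is hard-coded:

```python
# Toy ontology-based QA: a natural-language question answered via SPARQL
# over a tiny RDF graph. Data, URIs, and the question mapping are invented.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.Rome, EX.capitalOf, EX.Italy))

question = "What is the capital of Italy?"  # a real system would parse this
sparql = """
    SELECT ?city WHERE { ?city <http://example.org/capitalOf> <http://example.org/Italy> . }
"""
for row in g.query(sparql):
    print(row.city)  # -> http://example.org/Rome
```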
Abstract:
This work investigates the process of selecting, extracting, and reorganizing content from Semantic Web information sources to produce an ontology meeting the specifications of a particular domain and/or task. The process is combined with traditional text-based ontology learning methods to achieve tolerance to knowledge incompleteness. The paper describes the approach and presents experiments in which an ontology was built for a diet evaluation task. Although the example presented concerns the specific case of building a nutritional ontology, the methods employed are domain-independent and transferable to other use cases.
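A rough sketch of the selection/extraction step on toy data: triples matching a domain term list are copied into a task-specific sub-ontology (the paper's actual selection criteria are richer than this):

```python
# Extract a diet-evaluation sub-ontology from a larger SW source graph by
# keeping only triples whose subject appears in a domain term list (all invented).
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")
source, diet_onto = Graph(), Graph()
source.add((EX.Apple, EX.hasCalories, EX.Low))
source.add((EX.Granite, EX.hasHardness, EX.High))  # off-domain knowledge

domain_terms = {EX.Apple}  # hypothetical seed terms for the diet task
for s, p, o in source:
    if s in domain_terms:
        diet_onto.add((s, p, o))

print(len(diet_onto))  # 1: only the diet-relevant triple was extracted
```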
Abstract:
The usability of research papers on the Web would be enhanced by a system that explicitly modelled the rhetorical relations between claims in related papers. We describe ClaiMaker, a system for modelling readers’ interpretations of the core content of papers. ClaiMaker provides tools to build a Semantic Web representation of the claims in research papers using an ontology of relations. We demonstrate how the system can be used to make inter-document queries.
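Not ClaiMaker itself, but a toy RDF rendering of inter-paper claim relations with an invented vocabulary, to illustrate the kind of inter-document query such a model supports:

```python
# Model claims from two papers and a rhetorical relation between them, then
# query across documents. Vocabulary and data are invented for illustration.
from rdflib import Graph, Literal, Namespace

CL = Namespace("http://example.org/claims/")
g = Graph()
g.add((CL.claim1, CL.statedIn, Literal("Paper A")))
g.add((CL.claim2, CL.statedIn, Literal("Paper B")))
g.add((CL.claim2, CL.refutes, CL.claim1))  # rhetorical relation between papers

# Inter-document query: which claims challenge claims made in Paper A?
q = """
    SELECT ?c WHERE {
        ?c <http://example.org/claims/refutes> ?target .
        ?target <http://example.org/claims/statedIn> "Paper A" .
    }
"""
for row in g.query(q):
    print(row.c)  # -> http://example.org/claims/claim2
```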
Abstract:
The expansion of the Internet has made searching a crucial task. Internet users, however, must make a great effort to formulate a search query that returns the required results. Many methods have been devised to assist with this task by helping users modify their query to obtain better results. In this paper we propose an interactive method for query expansion. It is based on the observation that documents often contain terms with high information content that summarise their subject matter. We present experimental results demonstrating that our approach significantly shortens the time required to accomplish a given task through web searches.
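A brief sketch of the expansion idea, assuming TF-IDF as the information-content measure (the paper's exact weighting may differ) and invented documents:

```python
# Suggest high-information-content terms from retrieved documents as
# candidate query expansions, which the user picks from interactively.
from sklearn.feature_extraction.text import TfidfVectorizer

retrieved_docs = [  # hypothetical results for the ambiguous query "jaguar"
    "jaguar speed engine car performance",
    "jaguar car dealership engine sales",
    "jaguar habitat jungle predator",
]

vec = TfidfVectorizer()
tfidf = vec.fit_transform(retrieved_docs)

# Rank terms by aggregate TF-IDF weight and offer the top ones as expansions.
scores = tfidf.sum(axis=0).A1
terms = vec.get_feature_names_out()
suggestions = sorted(zip(terms, scores), key=lambda ts: -ts[1])[:3]
print(suggestions)  # the user selects terms to refine the query
```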