941 results for web content


Relevance:

30.00%

Publisher:

Abstract:

This case study aims to identify how a community of secondary school students selects web-based information, and which factors they associate with the reliability of online reference sources, during collaborative inquiry (co-inquiry) projects. The study, conducted in a public secondary school in Brazil, focused on information literacy skills for collaborative open learning (co-learning). The research is based on a qualitative content analysis carried out on the online platform weSPOT. Although students are mindful of the importance of comparing different sources of information, they seem unaware of reliability issues in online environments. Teachers' guidance is essential to support co-learners in developing competences, particularly those related to critical thinking.

Relevance:

30.00%

Publisher:

Abstract:

Considering the context in which the internet and information are integrated, this paper analyzes the content of the legal news portal Migalhas, specifically the daily newsletter sent to its readers. Starting with an overview of the internet, cyberculture, and web journalism, together with some concepts of news production, it describes and evaluates general aspects, strategies, and bulletin samples using the content analysis method proposed by Laurence Bardin, and outlines the portal's history and its main journalistic features. The paper discusses how news criteria, news values, and tools are chosen and used to fulfil the portal's goal of delivering specific, fast information to its readers. Questions regarding the opinionated character of the content are also addressed as a way to evaluate its expressiveness.

Relevance:

30.00%

Publisher:

Abstract:

Graduate program in Digital Television: Information and Knowledge - FAAC

Relevance:

30.00%

Publisher:

Abstract:

The exponential growth of the Internet, coupled with the increasing popularity of dynamically generated content on the World Wide Web, has created the need for more and faster Web servers capable of serving the more than 100 million Internet users. Server clustering has emerged as a promising technique for building scalable Web servers. In this article we examine the seminal work, early products, and a sample of contemporary commercial offerings in the field of transparent Web server clustering, and we broadly classify transparent server clustering into three categories.
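
The three categories are not named in this excerpt; as a rough illustration of transparent clustering in general, the sketch below (Python, not from the article) implements a tiny layer-7 dispatcher that round-robins HTTP requests across a pool of back ends while clients see a single front-end address. The back-end addresses and port are placeholders.

    # Minimal round-robin dispatcher: one illustrative flavor of transparent
    # server clustering (clients never see the individual back ends).
    import itertools
    import urllib.request
    from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

    # Hypothetical back-end pool; a real cluster would also health-check these.
    BACKENDS = itertools.cycle(["http://10.0.0.1:8080", "http://10.0.0.2:8080"])

    class Dispatcher(BaseHTTPRequestHandler):
        def do_GET(self):
            backend = next(BACKENDS)  # round-robin selection
            with urllib.request.urlopen(backend + self.path) as resp:
                body = resp.read()
            self.send_response(resp.status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        ThreadingHTTPServer(("", 8000), Dispatcher).serve_forever()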

Relevance:

30.00%

Publisher:

Abstract:

End-user programmers are increasingly relying on web authoring environments to create web applications. Although often consisting primarily of web pages, such applications are increasingly going further, harnessing the content available on the web through "programs" that query other web applications for information to drive other tasks. Unfortunately, errors can be pervasive in web applications, impacting their dependability. This paper reports the results of an exploratory study of end-user web application developers, performed with the aim of exposing prevalent classes of errors. The results suggest that end users struggle the most with the identification and manipulation of variables when structuring requests to obtain data from other web sites. To address this problem, we present a family of techniques that help end-user programmers perform this task, reducing possible sources of error. The techniques focus on simplification and characterization of the data that end users must analyze while developing their web applications. We report the results of an empirical study in which these techniques are applied to several popular web sites. Our results reveal several potential benefits for end users who wish to "engineer" dependable web applications.
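
The techniques themselves are only summarized above; the sketch below (an assumption-laden Python fragment, not the paper's tooling) illustrates the step they target: structuring a parameterized request to obtain data from another site. Building the query string with a library call rather than by hand removes one common source of end-user error. The endpoint and parameter names are hypothetical.

    # Building a parameterized GET request without hand-written string
    # concatenation, the step end users reportedly struggle with.
    from urllib.parse import urlencode
    from urllib.request import urlopen

    def fetch(base_url, **params):
        """Encode the parameters safely and return the response body as text."""
        url = base_url + "?" + urlencode(params)  # handles quoting and escaping
        with urlopen(url) as resp:
            return resp.read().decode("utf-8")

    # Hypothetical usage: query a weather service for one city.
    # body = fetch("https://example.org/api/weather", city="Lincoln", units="metric")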

Relevance:

30.00%

Publisher:

Abstract:

Ubiquitous Computing promises seamless access to a wide range of applications and Internet-based services from anywhere, at any time, using any device. In this scenario, new challenges for the practice of software development arise: applications and services must keep a coherent behavior and a proper appearance, and must adapt to a wide range of contextual usage requirements and hardware aspects. In particular, due to its interactive nature, the interface content of Web applications must adapt to a large diversity of devices and contexts. To overcome such obstacles, this work introduces a methodology for content adaptation of Web 2.0 interfaces. The basis of our work is to combine static adaptation - the implementation of static Web interfaces - with dynamic adaptation - the alteration of static interfaces at execution time to adapt them to different contexts of use. In this hybrid fashion, our methodology benefits from the advantages of both adaptation strategies, static and dynamic. Along these lines, we designed and implemented UbiCon, a framework over which we tested our concepts through a case study and a development experiment. Our results show that the hybrid methodology over UbiCon leads to broader and more accessible interfaces, and to faster and less costly software development. We believe that the UbiCon hybrid methodology can foster more efficient and accurate interface engineering in industry and academia.
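
UbiCon's actual API is not given in the abstract; the fragment below is only a schematic of the hybrid idea, with a statically authored template adapted at run time to the usage context. Every name here (Context, render_page, the template string) is invented for illustration.

    # Hybrid adaptation sketch: a static template authored once, then
    # rewritten at run time for the requesting device and context.
    from dataclasses import dataclass

    STATIC_TEMPLATE = "<header>...</header><main>{body}</main><aside>{extras}</aside>"

    @dataclass
    class Context:
        screen_width: int
        touch: bool

    def render_page(body: str, extras: str, ctx: Context) -> str:
        """Dynamic step: adapt the static interface to the current context."""
        page = STATIC_TEMPLATE.format(body=body, extras=extras)
        if ctx.screen_width < 600:  # small screens: drop the side content
            page = page.replace("<aside>" + extras + "</aside>", "")
        if ctx.touch:  # touch devices: enlarge interaction targets
            page = page.replace("<main>", '<main class="touch-large">')
        return page

    print(render_page("Hello", "related links", Context(screen_width=480, touch=True)))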

Relevance:

30.00%

Publisher:

Abstract:

This paper aims to show how competitive intelligence techniques can be adapted to the environment of information services, presenting a web monitoring project for university libraries specialized in Chemistry as a strategy for the continuous improvement of those services. By comparing analogous information services, selected from the four top-ranked institutions in Webometrics - Ranking Web of World Universities, the project provides data for expanding and updating the informational content available on the virtual library page in this area, improving access to and availability of information, and contributing to maximizing the visibility and evaluation of the university. Keywords: Competitive Intelligence, Web Monitoring, University and Specialized Libraries, Virtual Page, Information Services

Relevance:

30.00%

Publisher:

Abstract:

Open to the public since June 2009, the Biblioteca Brasiliana Digital of the Universidade de São Paulo aims to make available for research the largest Brasiliana collection held by a university. Its purpose is to offer part of the University's holdings virtually, as a useful and functional instrument for the research and study of Brazilian themes and culture, and to provide a technological management model that can be disseminated to other collections, holdings, and institutions. This paper presents the results of implementing a metadata scheme based on the Dublin Core format for describing rare and special works on the web. Specifically, it presents the procedures and processes for describing the contents of various document types (books, periodicals, engravings, etc.) and digital formats (PDF, JPEG, among others). Keywords: Digital libraries; Metadata; Dublin Core.
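
As a hedged illustration of such a scheme (not the library's own code), the snippet below emits a minimal Dublin Core description of one digitized work in XML, using the standard DC element namespace; the field values are samples only.

    # Minimal Dublin Core record for one digitized rare work.
    import xml.etree.ElementTree as ET

    DC = "http://purl.org/dc/elements/1.1/"  # Dublin Core element set namespace
    ET.register_namespace("dc", DC)

    record = ET.Element("record")
    for element, value in [
        ("title", "Viagem pitoresca atraves do Brasil"),  # sample values only
        ("creator", "Rugendas, Johann Moritz"),
        ("type", "Text"),
        ("format", "application/pdf"),
        ("language", "por"),
    ]:
        ET.SubElement(record, "{%s}%s" % (DC, element)).text = value

    print(ET.tostring(record, encoding="unicode"))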

Relevance:

30.00%

Publisher:

Abstract:

[ES] The project consists of establishing communication between two web portals through Restful web services implemented in PHP. Both portals are related to the world of cinema; the second is, broadly speaking, a simplified interface to the first. The first portal is built on Drupal 7, on which we installed a series of modules that let us manage the content shown to users. An unauthenticated user can browse all pages of the portal, log in, and register. Once authenticated, a user is granted the privileges of taking part in the film rating system and interacting with other authenticated users through a comment system. The administrator user can, in addition, manage the content and the authenticated users. The second portal is aimed at using the site from mobile devices. An unauthenticated user can browse all its areas just like an authenticated one; on this portal, unlike the first, registration is not available to such actors. The difference between the unauthenticated and the authenticated user here is that the latter sees a discount on each film when viewing the catalogue. The web services, through GET and POST requests, give users a rich browsing experience: on the second portal they can log in, retrieve the film catalogue (as well as sort it and filter searches by genre), and view the detail pages of films and directors. All of this without creating another database, simply by exchanging data with the server.
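
The actual endpoints are not listed in the abstract; the Python sketch below only mirrors the shape of the interaction described, a GET for the film catalogue with a genre filter and a POST to log a user in, against hypothetical Restful URLs.

    # Consuming hypothetical Restful services: GET with filters, POST login.
    import json
    from urllib.parse import urlencode
    from urllib.request import Request, urlopen

    BASE = "https://example.org/api"  # placeholder for the portal's service root

    def get_catalogue(genre=None, sort="title"):
        params = {k: v for k, v in {"genre": genre, "sort": sort}.items() if v}
        with urlopen(BASE + "/films?" + urlencode(params)) as resp:
            return json.load(resp)

    def login(user, password):
        payload = json.dumps({"user": user, "password": password}).encode()
        req = Request(BASE + "/login", data=payload,
                      headers={"Content-Type": "application/json"})  # data => POST
        with urlopen(req) as resp:
            return json.load(resp)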

Relevance:

30.00%

Publisher:

Abstract:

[ES] This final-year project is a service based on web technologies (PHP, HTML5, CSS, jQuery, and AJAX). Its main objective is to offer a service for creating and managing meeting minutes for the Ayuntamiento de Las Palmas de Gran Canaria. To that end, it consists of two main modules, one to "create minutes" and another to "edit minutes". The application has two parts. The first, developed by me, began with all the meetings with the staff of the Ayuntamiento de Las Palmas de Gran Canaria needed to understand their requirements and how to address them as a developer. Second, I was responsible for building and structuring the website: generating the various files with HTML content, interconnecting those files, and passing parameters between them with the relevant tools (jQuery, AJAX), as well as providing the site with all the necessary JavaScript. This part also includes a search module and a module for displaying finished minutes. The search module contains a form with a search field and looks for matches within all the files generated by the application, showing a link to open each matching file from the browser. As an additional contribution, I also handled the configuration and generation of the database tables needed for the application to work.
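
As a hedged sketch of the search module described above (not the project's own PHP code): walk the folder of generated files, report those containing the query, and print a browser-openable link for each. The folder name is a placeholder.

    # Naive full-text search over the files generated by the application.
    from pathlib import Path

    def search_minutes(query, root="actas"):  # "actas" is a placeholder folder
        """Yield (path, first matching line) for every generated file that matches."""
        for path in Path(root).rglob("*.html"):
            for line in path.read_text(encoding="utf-8", errors="ignore").splitlines():
                if query.lower() in line.lower():
                    yield path, line.strip()
                    break  # one hit per file is enough for a result list

    for path, line in search_minutes("presupuesto"):
        print("file://%s  ->  %s" % (path.resolve(), line[:60]))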


Relevance:

30.00%

Publisher:

Abstract:

The wide use of e-technologies represents a great opportunity for underserved segments of the population, especially with the aim of reintegrating excluded individuals back into society through education. This is particularly true for people with different types of disabilities who may have difficulties attending traditional on-site learning programs, which are typically based on printed learning resources. The creation and provision of accessible e-learning contents may therefore become a key factor in enabling people with different access needs to enjoy quality learning experiences and services. Another e-learning challenge is m-learning (mobile learning), which is emerging as a consequence of the diffusion of mobile terminals and provides the opportunity to browse didactical materials everywhere, outside places traditionally devoted to education. Both situations share the need to access materials under limited conditions, and both collide with the growing use of rich media in didactical contents, which are designed to be enjoyed without any restriction.

Nowadays, Web-based teaching makes great use of multimedia technologies, ranging from Flash animations to prerecorded video-lectures. Rich media in e-learning offer significant potential for enhancing the learning environment, by helping to increase access to education, enhancing the learning experience, and supporting multiple learning styles. Moreover, they can often be used to improve the structure of Web-based courses. These highly variegated and structured contents may significantly improve the quality and effectiveness of educational activities for learners. For example, rich media contents allow us to describe complex concepts and process flows; audio and video elements may be used to add a "human touch" to distance-learning courses; and real lectures may be recorded and distributed to integrate or enrich online materials. A confirmation of the advantages of these approaches can be seen in the exponential growth of video-lecture availability on the net, due to the ease of recording and delivering activities which take place in a traditional classroom.

Furthermore, the wide use of assistive technologies for learners with disabilities injects new life into e-learning systems. E-learning allows distance and flexible educational activities, thus helping disabled learners to access resources which would otherwise present significant barriers for them. For instance, students with visual impairments have difficulties reading traditional visual materials, deaf learners have trouble following traditional (spoken) lectures, and people with motion disabilities have problems attending on-site programs.

As already mentioned, the use of wireless technologies and pervasive computing can greatly enhance the educational experience by offering mobile e-learning services that can be accessed from handheld devices. This new paradigm of educational content distribution maximizes the benefits for learners, since it enables users to overcome constraints imposed by the surrounding environment. While certainly helpful for users without disabilities, we believe that the use of new mobile technologies may also become a fundamental tool for impaired learners, since it frees them from sitting in front of a PC. In this way, educational activities can be enjoyed by all users, without hindrance, thus increasing the social inclusion of non-typical learners.
While the provision of fully accessible and portable video-lectures may be extremely useful for students, it is widely recognized that structuring and managing rich media contents for mobile learning services are complex and expensive tasks. Indeed, major difficulties originate from the basic need to provide a textual equivalent for each media resource composing a rich media Learning Object (LO). Moreover, tests need to be carried out to establish whether a given LO is fully accessible to all kinds of learners. Unfortunately, both these tasks are truly time-consuming processes, depending on the type of contents the teacher is writing and on the authoring tool he/she is using. Due to these difficulties, online LOs are often distributed as partially accessible or totally inaccessible content. Bearing this in mind, this thesis aims to discuss the key issues of a system we have developed to deliver accessible, customized or nomadic learning experiences to learners with different access needs and skills. To reduce the risk of excluding users with particular access capabilities, our system exploits Learning Objects (LOs) which are dynamically adapted and transcoded based on the specific needs of non-typical users and on the barriers that they can encounter in the environment. The basic idea is to dynamically adapt contents, by selecting them from a set of media resources packaged in SCORM-compliant LOs and stored in a self-adapting format. The system schedules and orchestrates a set of transcoding processes based on specific learner needs, so as to produce a customized LO that can be fully enjoyed by any (impaired or mobile) student.
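
The system's scheduler and transcoders are necessarily more involved than the abstract can show; the sketch below only illustrates the selection step, reading the resource list from a SCORM package's imsmanifest.xml (the standard manifest file name) and keeping the media types a given learner profile can use. The profile format is invented.

    # Selecting accessible resources from a SCORM manifest for one learner.
    import xml.etree.ElementTree as ET

    def usable_resources(manifest_path, accepted_types):
        """Return hrefs of resources whose file extension suits the learner."""
        tree = ET.parse(manifest_path)  # SCORM packages ship an imsmanifest.xml
        hrefs = []
        for res in tree.iter():
            if res.tag.endswith("resource") and res.get("href"):
                ext = res.get("href").rsplit(".", 1)[-1].lower()
                if ext in accepted_types:
                    hrefs.append(res.get("href"))
        return hrefs

    # Invented profile: a blind learner gets audio and text, not video.
    # print(usable_resources("imsmanifest.xml", {"mp3", "txt", "html"}))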

Relevance:

30.00%

Publisher:

Abstract:

Besides the article forming the main content, most HTML documents on the WWW contain additional content such as navigation menus, design elements, or commercial banners. In the context of several applications it is necessary to distinguish automatically between main and additional content. Content extraction and template detection are the two approaches to this task. This thesis gives an extensive overview of existing algorithms from both areas. It contributes an objective way to measure and evaluate the performance of content extraction algorithms under different aspects; these evaluation measures allow the first objective comparison of existing extraction solutions. The newly introduced content code blurring algorithm overcomes several drawbacks of previous approaches and proves to be the best content extraction algorithm at the moment. An analysis of methods for clustering web documents according to their underlying templates is the third major contribution of this thesis. In combination with a localised crawling process, this clustering analysis can be used to automatically create sets of training documents for template detection algorithms. As the whole process can be automated, it allows template detection to be performed on a single document, combining the advantages of single- and multi-document algorithms.
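
Content code blurring itself is not reproduced here; the sketch below implements a far simpler classic baseline from the same family, text-to-tag-ratio scoring, which keeps the markup-sparse, text-dense lines of a page and so tends to isolate the main article.

    # A simple content-extraction baseline: keep lines with a high
    # text-to-tag ratio (navigation and boilerplate are markup-dense).
    import re

    TAG = re.compile(r"<[^>]+>")

    def extract_main_text(html, min_ratio=10.0):
        kept = []
        for line in html.splitlines():
            tags = len(TAG.findall(line))
            text = TAG.sub("", line).strip()
            if text and len(text) / (tags + 1) >= min_ratio:
                kept.append(text)
        return "\n".join(kept)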

Relevance:

30.00%

Publisher:

Abstract:

In this thesis, the author presents a query language for an RDF (Resource Description Framework) database and discusses its applications in the context of the HELM project (the Hypertextual Electronic Library of Mathematics). The language aims to meet the main requirements coming from the RDF community. In particular, it includes: a human-readable textual syntax and a machine-processable XML (Extensible Markup Language) syntax, both for queries and for query results; a rigorously specified formal semantics; a graph-oriented RDF data access model capable of exploring an entire RDF graph (including both RDF Models and RDF Schemata); a full set of Boolean operators to compose query constraints; fully customizable and highly structured query results with a 4-dimensional geometry; and some constructions taken from ordinary programming languages that simplify the formulation of complex queries. The HELM project aims to integrate modern tools for the automation of formal reasoning with the most recent electronic publishing technologies, in order to create and maintain a hypertextual, distributed virtual library of formal mathematical knowledge. In the spirit of the Semantic Web, the documents of this library include RDF metadata describing their structure and content in a machine-understandable form. Using the author's query engine, HELM exploits this information to implement functionalities allowing the interactive and automatic retrieval of documents on the basis of content-aware requests that take into account the mathematical nature of these documents.
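
The author's query language is not shown in this excerpt; purely as a present-day analogue of a content-aware request over RDF metadata, the sketch below runs a SPARQL query against a small graph with the third-party rdflib library. The namespace and property are invented.

    # Present-day analogue: querying RDF metadata with SPARQL via rdflib.
    from rdflib import Graph, Literal, Namespace, URIRef

    HELM = Namespace("http://example.org/helm#")  # invented namespace
    g = Graph()
    g.add((URIRef("http://example.org/doc/1"), HELM.provesTheorem, Literal("Fermat")))

    results = g.query("""
        PREFIX helm: <http://example.org/helm#>
        SELECT ?doc WHERE { ?doc helm:provesTheorem "Fermat" . }
    """)
    for (doc,) in results:
        print(doc)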

Relevance:

30.00%

Publisher:

Abstract:

A day without the internet is hard for many to imagine. The spectrum of internet users has broadened, and expectations of websites have risen massively with it. The decision to stay on one website or to search on another is made within a few seconds, and it depends both on the website's design and on the content presented. Evaluating how quickly users find online information and how easily they can understand it is the task of web usability testing. Finding and understanding information involves technical as well as linguistic aspects. Usability research, however, has so far focused largely on evaluating the technical and aesthetic aspects of websites, pushing the linguistic aspects into the background: by comparison, they are less systematically studied and are barely covered in usability guidelines, where one mostly encounters general recommendations instead. Motivated by this, the present work aims to investigate web usability systematically from both a linguistic and a formal perspective. On the linguistic level, web usability was analyzed following Morris's theory of signs, and the notion of linguistic web usability was introduced. Based on this analysis, together with a literature review of several sets of usability guidelines, a catalogue of criteria was developed. To apply this catalogue in a usability study, the website of the Johannes Gutenberg-Universität Mainz (JGU) was tested in the usability lab using eye tracking combined with the think-aloud and retrospective think-aloud methods. The empirical results show that linguistic usability problems, just like formal ones, prevent users from finding the information they are looking for, or at least slow down their search. Accordingly, linguistic perspectives should be incorporated into usability guidelines.