884 results for web content
Abstract:
[ES] The project consists of establishing communication between two web portals through RESTful web services implemented in PHP. Both portals are related to the world of cinema; the second is, broadly speaking, a simplified interface to the first. The first portal is built on Drupal 7, on which we install a series of modules that allow us to manage the content shown to users. An unauthenticated user can browse all pages of the portal, log in, and register. The privileges granted to an authenticated user are participating in the film rating system and interacting with other authenticated users through a comment system. The administrator user can, in addition, manage the content and the authenticated users. The second portal is aimed at enjoying the site on mobile devices. An unauthenticated user can browse all of its areas in the same way as an authenticated one; on this portal, unlike the first, registration is not possible for this type of actor. The difference between unauthenticated and authenticated users here is that the latter, when viewing the catalogue, see a discount on each film. The web services, through GET and POST requests, provide users with a rich browsing experience. Through them, on the second portal, users can log in, retrieve the film catalogue (as well as sort it and filter it by genre), and view the detail pages of films and directors. All of this without creating another database, just by exchanging data with the server.
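The catalogue service described above lends itself to a short sketch. The following is a minimal, hypothetical model of the logic behind the GET catalogue endpoint (function and field names are invented here, not taken from the project): filtering by genre, sorting, and applying the discount that authenticated mobile users see.

```python
# Hypothetical sketch of the catalogue service logic described above.
# Names (get_catalogue, films, genre, sort_by) are illustrative only.

def get_catalogue(films, genre=None, sort_by="title", discount=0.0):
    """Return the film catalogue, optionally filtered by genre and sorted.

    `discount` models the price reduction shown to authenticated users
    on the mobile portal (0.0 for anonymous visitors).
    """
    result = [f for f in films if genre is None or f["genre"] == genre]
    result.sort(key=lambda f: f[sort_by])
    return [{**f, "price": round(f["price"] * (1 - discount), 2)} for f in result]

films = [
    {"title": "Vertigo", "genre": "thriller", "price": 10.0},
    {"title": "Alien", "genre": "sci-fi", "price": 8.0},
]

# Anonymous user: full catalogue, sorted, no discount.
print(get_catalogue(films))
# Authenticated mobile user: genre filter plus a 10% discount.
print(get_catalogue(films, genre="sci-fi", discount=0.10))
```

In the actual system this logic would sit behind a PHP endpoint answering GET requests; the sketch only shows the data shaping, not the HTTP layer.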
Abstract:
[ES] This Bachelor's thesis is a service based on web technologies (PHP, HTML5, CSS, jQuery, and AJAX). Its main objective is to provide a service for creating and managing minutes for the City Council of Las Palmas de Gran Canaria. To that end, it consists of two main modules, one to "create minutes" and one to "edit minutes". The application consists of two parts. The first part, developed by me, began with all the meetings with the staff of the City Council of Las Palmas de Gran Canaria that were needed to understand their needs and how to address them as a developer. Secondly, I was responsible for building and structuring the website: generating the various files with HTML content, interconnecting those files, and passing parameters between them using the relevant tools (jQuery, AJAX), as well as providing the site with all the necessary JavaScript. This part also includes a search module and a module to display finished minutes. The search module contains a form with a search field and looks for matches within all the files generated by the application; it also shows a link to open each matching file in the browser. As an additional contribution, I was also responsible for configuring and generating the database tables required for the application to work.
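The search module described above scans the files the application has generated and reports matches with a link to open each file in the browser. A minimal stand-alone sketch of that behaviour (the function name, directory layout, and HTML output format are assumptions for illustration; the real module is written in PHP/JavaScript):

```python
# Illustrative sketch of a search over generated minute files.
# `search_minutes` and the *.html layout are invented for this example.
from pathlib import Path

def search_minutes(directory, term):
    """Return (filename, link) pairs for generated files containing `term`.

    Matching is case-insensitive, mirroring a user-facing search field.
    """
    hits = []
    for path in sorted(Path(directory).glob("*.html")):
        text = path.read_text(encoding="utf-8", errors="ignore")
        if term.lower() in text.lower():
            # The link lets the user open the matching file in the browser.
            hits.append((path.name, f'<a href="{path.as_posix()}">{path.name}</a>'))
    return hits
```

The same pattern (glob the output directory, scan each file, emit a link per hit) carries over directly to the PHP implementation.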
Abstract:
The wide use of e-technologies represents a great opportunity for underserved segments of the population, especially with the aim of reintegrating excluded individuals back into society through education. This is particularly true for people with different types of disabilities who may have difficulties attending traditional on-site learning programs, which are typically based on printed learning resources. The creation and provision of accessible e-learning content may therefore become a key factor in enabling people with different access needs to enjoy quality learning experiences and services. Another e-learning challenge is m-learning (mobile learning), which is emerging as a consequence of the diffusion of mobile terminals and provides the opportunity to browse didactic materials anywhere, outside the places traditionally devoted to education. Both situations share the need to access materials under constrained conditions, and both collide with the growing use of rich media in didactic content, which is designed to be enjoyed without any restriction. Nowadays, Web-based teaching makes great use of multimedia technologies, ranging from Flash animations to prerecorded video lectures. Rich media in e-learning offer significant potential for enhancing the learning environment: they help increase access to education, enhance the learning experience, and support multiple learning styles. Moreover, they can often be used to improve the structure of Web-based courses. Such highly varied and structured content can significantly improve the quality and effectiveness of educational activities for learners. For example, rich media content allows us to describe complex concepts and process flows; audio and video elements may add a "human touch" to distance-learning courses; and real lectures may be recorded and distributed to integrate or enrich online materials.
A confirmation of the advantages of these approaches can be seen in the exponential growth of video-lecture availability on the net, due to the ease of recording and delivering activities that take place in a traditional classroom. Furthermore, the wide use of assistive technologies for learners with disabilities injects new life into e-learning systems. E-learning allows distance and flexible educational activities, thus helping disabled learners to access resources that would otherwise present significant barriers for them. For instance, students with visual impairments have difficulty reading traditional visual materials, deaf learners have trouble following traditional (spoken) lectures, and people with motor disabilities have problems attending on-site programs. As already mentioned, the use of wireless technologies and pervasive computing can substantially enhance the learner's educational experience by offering mobile e-learning services that can be accessed from handheld devices. This new paradigm of educational content distribution maximizes the benefits for learners, since it enables users to overcome constraints imposed by the surrounding environment. While certainly helpful for users without disabilities, we believe that the use of new mobile technologies may also become a fundamental tool for impaired learners, since it frees them from sitting in front of a PC. In this way, educational activities can be enjoyed by all users without hindrance, thus increasing the social inclusion of non-typical learners. While the provision of fully accessible and portable video lectures may be extremely useful for students, it is widely recognized that structuring and managing rich media content for mobile learning services are complex and expensive tasks. Indeed, a major difficulty originates from the basic need to provide a textual equivalent for each media resource composing a rich media Learning Object (LO).
Moreover, tests need to be carried out to establish whether a given LO is fully accessible to all kinds of learners. Unfortunately, both of these tasks are time-consuming processes, depending on the type of content the teacher is writing and on the authoring tool he or she is using. Due to these difficulties, online LOs are often distributed as partially accessible or totally inaccessible content. Bearing this in mind, this thesis discusses the key issues of a system we have developed to deliver accessible, customized, and nomadic learning experiences to learners with different access needs and skills. To reduce the risk of excluding users with particular access capabilities, our system exploits Learning Objects (LOs) that are dynamically adapted and transcoded based on the specific needs of non-typical users and on the barriers they may encounter in the environment. The basic idea is to adapt content dynamically, by selecting it from a set of media resources packaged in SCORM-compliant LOs and stored in a self-adapting format. The system schedules and orchestrates a set of transcoding processes based on specific learner needs, so as to produce a customized LO that can be fully enjoyed by any (impaired or mobile) student.
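As a rough illustration of the adaptation idea (this is not the thesis's actual SCORM transcoding pipeline; the profiles, resource names, and preference orders below are invented), selecting the variant of a media resource that a given learner can use might look like:

```python
# Toy model of per-learner content adaptation. Resource variants and
# learner profiles are hypothetical, chosen only to illustrate the idea.
RESOURCES = {
    "lecture1": {"video": "lec1.mp4", "audio": "lec1.mp3", "text": "lec1.txt"},
}

PREFERENCES = {  # variant kinds ordered by preference per learner profile
    "blind":   ["audio", "text"],          # no purely visual media
    "deaf":    ["text", "video"],          # no purely auditory media
    "mobile":  ["audio", "text", "video"], # light media first on the go
    "typical": ["video", "audio", "text"],
}

def adapt(resource, profile):
    """Pick the most preferred variant of `resource` usable by `profile`."""
    variants = RESOURCES[resource]
    for kind in PREFERENCES[profile]:
        if kind in variants:
            return variants[kind]
    raise LookupError("no accessible variant available")

print(adapt("lecture1", "blind"))
print(adapt("lecture1", "deaf"))
```

The real system goes further: when no suitable variant exists, it schedules transcoding processes to produce one, rather than failing.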
Abstract:
Apart from the article that forms the main content, most HTML documents on the WWW contain additional content such as navigation menus, design elements, and commercial banners. In the context of several applications it is necessary to draw the distinction between main and additional content automatically. Content extraction and template detection are the two approaches to this task. This thesis gives an extensive overview of existing algorithms from both areas. It contributes an objective way to measure and evaluate the performance of content extraction algorithms under different aspects. These evaluation measures allow us to draw the first objective comparison of existing extraction solutions. The newly introduced content code blurring algorithm overcomes several drawbacks of previous approaches and proves to be the best content extraction algorithm at the moment. An analysis of methods to cluster web documents according to their underlying templates is the third major contribution of this thesis. In combination with a localised crawling process, this clustering analysis can be used to automatically create sets of training documents for template detection algorithms. As the whole process can be automated, it makes it possible to perform template detection on a single document, thereby combining the advantages of single- and multi-document algorithms.
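The content-versus-code intuition behind this family of algorithms can be sketched in a few lines. The following is a drastically simplified, single-pass take on the idea: mark each character of the HTML source as content or markup, smooth ("blur") that signal with a sliding-window mean, and keep content characters in regions dominated by content. It is not the thesis's content code blurring algorithm itself, which differs in detail.

```python
# Simplified illustration of the content-to-code ratio idea behind
# content extraction algorithms. Not the actual algorithm from the thesis.

def content_code_vector(html):
    """1 for content characters (outside tags), 0 for markup characters."""
    vec, in_tag = [], False
    for ch in html:
        if ch == "<":
            in_tag = True
        vec.append(0 if in_tag else 1)
        if ch == ">":
            in_tag = False
    return vec

def blur(vec, radius=10):
    """Sliding-window mean: each position becomes its local content ratio."""
    out = []
    for i in range(len(vec)):
        lo, hi = max(0, i - radius), min(len(vec), i + radius + 1)
        out.append(sum(vec[lo:hi]) / (hi - lo))
    return out

def extract_main_content(html, radius=10, threshold=0.6):
    """Keep content characters whose neighbourhood is mostly content."""
    raw = content_code_vector(html)
    smooth = blur(raw, radius)
    return "".join(c for c, r, s in zip(html, raw, smooth) if r and s >= threshold)

demo = ('<ul><li><a href="/">Home</a></li></ul>'
        '<p>This is the long running article text that forms the main '
        'content of the page and survives blurring.</p>')
print(extract_main_content(demo))
```

Long article text survives because its neighbourhood is almost entirely content, while short link texts embedded in navigation markup fall below the threshold and are dropped.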
Abstract:
In this thesis, the author presents a query language for an RDF (Resource Description Framework) database and discusses its applications in the context of the HELM project (the Hypertextual Electronic Library of Mathematics). This language aims at meeting the main requirements coming from the RDF community. In particular it includes: a human-readable textual syntax and a machine-processable XML (Extensible Markup Language) syntax, both for queries and for query results; a rigorously specified formal semantics; a graph-oriented RDF data access model capable of exploring an entire RDF graph (including both RDF Models and RDF Schemata); a full set of Boolean operators to compose query constraints; fully customizable and highly structured query results with a 4-dimensional geometry; and some constructions taken from ordinary programming languages that simplify the formulation of complex queries. The HELM project aims at integrating modern tools for the automation of formal reasoning with the most recent electronic publishing technologies, in order to create and maintain a hypertextual, distributed virtual library of formal mathematical knowledge. In the spirit of the Semantic Web, the documents of this library include RDF metadata describing their structure and content in a machine-understandable form. Using the author's query engine, HELM exploits this information to implement functionalities allowing the interactive and automatic retrieval of documents on the basis of content-aware requests that take into account the mathematical nature of these documents.
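A graph-oriented access model of this kind can be sketched with plain (subject, predicate, object) triples and wildcard pattern matching. The triples and metadata names below are invented for illustration; this is not the thesis's query language, only the underlying data access idea.

```python
# Toy RDF-style graph: a set of (subject, predicate, object) triples.
# The "helm:"/"doc:" names here are invented placeholders.
triples = {
    ("doc:lemma1", "helm:type", "Theorem"),
    ("doc:lemma1", "helm:refersTo", "doc:def-group"),
    ("doc:def-group", "helm:type", "Definition"),
}

def match(graph, s=None, p=None, o=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [
        t for t in graph
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    ]

# Content-aware retrieval in miniature: find every theorem in the library.
theorems = [s for s, _, _ in match(triples, p="helm:type", o="Theorem")]
print(theorems)  # ['doc:lemma1']
```

Real RDF query languages build on exactly this primitive, composing such patterns with Boolean operators and shaping the results.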
Abstract:
A day without the Internet is hard for many people to imagine. The spectrum of Internet users has broadened, and with it the demands placed on websites have risen massively. The decision to stay on a website or to search on another one is made within a few seconds, and it depends both on the website's design and on the content presented. Evaluating how quickly users can find online information, and how easily they can understand it, is the task of web usability testing. Both technical and linguistic aspects are responsible for the finding and understanding of information. In usability research, however, the focus so far has largely been on evaluating the technical and aesthetic aspects of websites, while the linguistic aspects have been pushed into the background. By comparison, they have been researched less systematically and are hardly to be found in usability guidelines; instead, one mostly encounters general recommendations. Motivated by this, the present work aims to investigate web usability systematically from both a linguistic and a formal perspective. At the linguistic level, web usability was analysed following Morris's theory of signs, and the term "linguistic web usability" was introduced. On the basis of this analysis, and of a literature review of several sets of usability guidelines, a catalogue of criteria was developed. To apply this catalogue in a usability study, the website of Johannes Gutenberg University Mainz (JGU) was tested in the usability lab using eye tracking combined with the think-aloud and retrospective think-aloud methods.
The empirical results show that linguistic usability problems, just like formal ones, prevent users from finding the information they are looking for, or at least slow down their search. Accordingly, linguistic perspectives should be incorporated into usability guidelines.
Abstract:
The present study aims to investigate the implications of web-based delivery of identical learning content for time efficiency and students' performance, as compared to conventional textbook resources.
Abstract:
State standardized testing has always been a tool to measure a school's performance and to help evaluate school curriculum. However, with the school-of-choice legislation in 1992, the MEAP test became a measuring stick to grade schools by and a major tool in attracting school-of-choice students. Now, declining enrollment and a state budget struggling to stay out of the red have made school-of-choice students more important than ever before; MEAP scores have become the deciding factor in some cases. For the past five years, the Hancock Middle School staff has been working hard to improve their students' MEAP scores in accordance with President Bush's "No Child Left Behind" legislation. In 2005, the school was awarded a grant that enabled staff to work for two years on writing and working toward school goals based on the improvement of MEAP scores in writing and math. As part of this effort, the school purchased an internet-based program aimed at giving students practice on state content standards. This study examined the results of efforts by Hancock Middle School to improve student scores in mathematics on the MEAP test through the use of an online program called "Study Island." In the past, the program was used to remediate students and as an end-of-year review, with an incentive for students who completed a certain number of objectives. It had also been used as a review before upcoming MEAP testing in the fall. All of these methods may have helped a few students perform better on their standardized test, but the question remained whether sustained use of the program in a classroom setting would increase understanding of concepts and MEAP performance for the masses. This study addressed this question.
Student MEAP scores and Study Island data from experimental and comparison groups of students were compared to understand how a sustained use of Study Island in the classroom would impact student test scores on the MEAP. In addition, these data were analyzed to determine whether Study Island results provide a good indicator of students’ MEAP performance. The results of the study suggest that there were limited benefits related to sustained use of Study Island and gave some indications about the effectiveness of the mathematics curriculum at Hancock Middle School. These results and implications for instruction are discussed.
Abstract:
Web 2.0 opens up new ways for researchers to handle knowledge and information: searching for information and sources, exchanging knowledge with others, managing resources, and creating one's own content on the web are all possible easily and at low cost. This article addresses the significance of Web 2.0 for handling knowledge and information, and shows how the cooperation of many individuals makes it possible to create new knowledge and innovations. The influence of Web 2.0 on science and the possible advantages and disadvantages of its use are discussed. In addition, a brief overview is given of studies that examine the use of Web 2.0 in the general population. The empirical part of the article presents the method and results of the survey study "Wissenschaftliches Arbeiten im Web 2.0" ("Scholarly Work in Web 2.0"), in which early-career researchers in Germany were surveyed about their use of Web 2.0 for their own scholarly work. The results show that Wikipedia in particular is used intensively to very intensively by a large proportion of respondents as a starting point for research. Active use of Web 2.0, for example by writing one's own blog or contributing to the online encyclopedia Wikipedia, is still low. Many services are unknown or are viewed rather sceptically, and the local desktop computer has not yet been replaced by the web as the central storage location.
Abstract:
In Part 1 of this article we discussed the need for information quality and the systematic management of learning materials and learning arrangements. Digital repositories, often called Learning Object Repositories (LOR), were introduced as a promising answer to this challenge. We also derived technological and pedagogical requirements for LORs from a concretization of information quality criteria for e-learning technology. This second part presents technical solutions that particularly address the demands of open education movements, which aspire to a global culture of reuse and sharing. From this viewpoint, we develop core requirements for scalable network architectures for educational content management. We then present edu-sharing, an advanced example of a network of homogeneous repositories for learning resources, and discuss related technology. We conclude with an outlook on emerging developments toward open and networked system architectures in e-learning.
Abstract:
This study assessed the perceptions of college students regarding the instructional quality of online, web-based courses delivered via a content management system. [See PDF for complete abstract]
Abstract:
BACKGROUND: Many users search the Internet for answers to health questions. Complementary and alternative medicine (CAM) is a particularly common search topic. Because many CAM therapies do not require a clinician's prescription, false or misleading CAM information may be more dangerous than information about traditional therapies. Many quality criteria have been suggested to filter out potentially harmful online health information. However, assessing the accuracy of CAM information is uniquely challenging, since CAM is generally not supported by the conventional literature. OBJECTIVE: The purpose of this study is to determine whether domain-independent technical quality criteria can identify potentially harmful online CAM content. METHODS: We analyzed 150 Web sites retrieved from a search on the ten most commonly used search engines for the three most popular herbs (ginseng, ginkgo, and St. John's wort) and their purported uses. The presence of technical quality criteria, as well as potentially harmful statements (commissions) and vital information that should have been mentioned (omissions), was recorded. RESULTS: Thirty-eight sites (25%) contained statements that could lead to direct physical harm if acted upon. One hundred forty-five sites (97%) omitted information. We found no relationship between technical quality criteria and potentially harmful information. CONCLUSIONS: Current technical quality criteria do not identify potentially harmful CAM information online. Consumers should be warned to use other means of validation or to trust only known sites. Quality criteria that consider the uniqueness of CAM must be developed and validated.
Abstract:
OBJECTIVES: To determine the characteristics of popular breast cancer related websites and whether more popular sites are of higher quality. DESIGN: The search engine Google was used to generate a list of websites about breast cancer. Google ranks search results by measures of link popularity: the number of links to a site from other sites. The top 200 sites returned in response to the query "breast cancer" were divided into "more popular" and "less popular" subgroups by three different measures of link popularity: Google rank and number of links reported independently by Google and by AltaVista (another search engine). MAIN OUTCOME MEASURES: Type and quality of content. RESULTS: More popular sites according to Google rank were more likely than less popular ones to contain information on ongoing clinical trials (27% v 12%, P=0.01), results of trials (12% v 3%, P=0.02), and opportunities for psychosocial adjustment (48% v 23%, P<0.01). These characteristics were also associated with a higher number of links as reported by Google and AltaVista. More popular sites by number of linking sites were also more likely to provide updates on other breast cancer research, information on legislation and advocacy, and a message board service. Measures of quality such as display of authorship, attribution or references, currency of information, and disclosure did not differ between groups. CONCLUSIONS: Popularity of websites is associated with type rather than quality of content. Sites that include content correlated with popularity may best meet the public's desire for information about breast cancer.
Abstract:
This chapter presents fuzzy cognitive maps (FCM) as a vehicle for Web knowledge aggregation, representation, and reasoning. The corresponding Web KnowARR framework incorporates findings from fuzzy logic. The first emphasis is therefore on the Web KnowARR framework itself; the second focal point is a stakeholder management use case that illustrates the framework's usefulness. This form of management helps projects gain acceptance and support by actively involving stakeholders' claims on company decisions in the management process. Stakeholder maps visually represent these claims, drawing partly on non-public content and partly on content that is available to the public (mostly on the Web). The Semantic Web offers opportunities not only to present public content descriptively but also to show relationships. The proposed framework can serve as the basis for the public content of stakeholder maps.
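A fuzzy cognitive map reasons by iterating a vector of concept activations through a signed weight matrix and a squashing function until the activations settle. The concepts and weights below are invented for illustration and are not taken from the Web KnowARR framework; the update rule is one common FCM formulation, sketched under that assumption.

```python
# Minimal fuzzy cognitive map inference sketch. Concepts and weights
# are hypothetical; the sigmoid update is one standard FCM variant.
import math

def fcm_step(state, weights):
    """One inference step: new_i = sigmoid(state_i + sum_j state_j * w[j][i])."""
    n = len(state)
    return [
        1 / (1 + math.exp(-(state[i] + sum(state[j] * weights[j][i] for j in range(n)))))
        for i in range(n)
    ]

def fcm_run(state, weights, steps=20):
    """Iterate until (approximately) settled; fixed step count for simplicity."""
    for _ in range(steps):
        state = fcm_step(state, weights)
    return state

# Three toy stakeholder concepts with positive causal links:
# media coverage -> public support -> project acceptance
W = [
    [0.0, 0.7, 0.0],
    [0.0, 0.0, 0.8],
    [0.0, 0.0, 0.0],
]

# Activate "media coverage" and let the map propagate the effect.
print([round(v, 2) for v in fcm_run([1.0, 0.0, 0.0], W)])
```

The settled activations stay in (0, 1), and the downstream "project acceptance" concept ends up clearly activated, which is the kind of qualitative what-if reasoning stakeholder maps support.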