884 results for web content


Relevance: 30.00%

Abstract:

"La nature flexible et interactive d’Internet implique que de plus en plus de consommateurs l’utilisent en tant qu’outil d’information sur tous les sujets imaginables, qu’il s’agisse de trouver la meilleurs aubaine sur un billet d’avion, ou de se renseigner sur certains problèmes liés à leur santé. Cependant, une grande quantité de l’information disponible en ligne n’est pas impartiale. Certains sites web ne présentent qu’une vision des choses ou font la promotion des produits d’une seule entreprise. Les consommateurs sont plus habitués à évaluer le poid à attribuer à certains conseils ou autres formes d’informations dans un contexte différent. Une telle évaluation de la crédibilité d’une information devient plus difficile dans le monde virtuel où les indices du monde réel, de l’écrit ou de l’interaction face-à-face sont absents. Consumers International a développé une définition de la notion de « crédibilité de l’information retrouvée en ligne » et a établi une liste de critères pouvant l’évaluer. Entre les mois d’avril et juillet 2002, une équipe représentant 13 pays a visité 460 sites web destinés à fournir de l’information sur la santé, sur des produits financiers et sur les prix de voyages et de différents biens de consommation en utilisant ces critères. L’appréciation de ces données nous démontre que les consommateurs doivent faire preuve de prudence lorsqu’ils utilisent Internet comme source d’information. Afin de faire des choix éclairés sur la valeur à donner à une information retrouvée en ligne, les consommateurs doivent connaître ce qui suit : L’identité du propriétaire d’un site web, ses partenaires et publicitaires; La qualité de l’information fournie, incluant son actualité et sa clarté, l’identité des sources citées et l’autorité des individus donnant leur opinion; Si le contenu du site est contrôlé par des intérêts commerciaux, ou, s’il offre des liens, la nature de la relation avec le site visé; et Si on lui demandera de fournir des données personnelles, l’usage réservé à ces données et les mesures employées pour protéger ces données. Cette étude démontre que plusieurs sites ne fournissent pas suffisamment de détails dans ces domaines, ce qui risque d’exposer le consommateur à des informations inexactes, incomplètes ou même délibérément fausses. Les discours exagérés ou vagues de certains sites concernant leurs services ne fait qu’ajouter à la confusion. Ceci peut résulter en une perte de temps ou d’argent pour le consommateur, mais pour certaines catégories d’informations, comme les conseils visant la santé, le fait de se fier à de mauvais conseils peut avoir des conséquences beaucoup plus graves. Cette étude vise à aviser les consommateurs de toujours vérifier le contexte des sites avant de se fier aux informations qui s’y retrouvent. Elle demande aux entreprises d’adopter de meilleures pratiques commerciales et de fournir une information plus transparente afin d’accroître la confiance des consommateurs. Elle demande finalement aux gouvernements de contribuer à ce mouvement en assurant le respect des lois relatives à la consommation et des standards existants tant dans le monde réel que dans le monde virtuel."

Relevance: 30.00%

Abstract:

Website associated with the thesis: http://daou.st/JSreal

Relevance: 30.00%

Abstract:

The wealth of information available freely on the web and in medical image databases poses a major problem for end users: how to find the information they need? Content-Based Image Retrieval is the obvious solution. A standard called MPEG-7 evolved to address the interoperability issues of content-based search. The work presented in this thesis concentrates mainly on developing new shape descriptors and a framework for content-based retrieval of scoliosis images. New region-based and contour-based shape descriptors are developed, based on orthogonal Legendre polynomials, and a novel system for indexing and retrieving digital spine radiographs with scoliosis is presented.
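The abstract does not reproduce the descriptor's formula, but region-based Legendre moments are a textbook construction: map the image onto [-1, 1] x [-1, 1] and project it onto products of Legendre polynomials. A minimal NumPy sketch of that general idea follows; the function name, parameters and normalisation come from the standard definition, not from the thesis itself.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_moments(image, order):
    """Legendre moments lambda_pq of a 2-D image, for p, q <= order.

    The pixel grid is mapped onto [-1, 1] x [-1, 1], the interval on
    which Legendre polynomials are orthogonal.
    """
    h, w = image.shape
    x = np.linspace(-1.0, 1.0, w)
    y = np.linspace(-1.0, 1.0, h)
    # Row p holds the degree-p Legendre polynomial sampled on the grid.
    Px = np.stack([legendre.legval(x, [0.0] * p + [1.0]) for p in range(order + 1)])
    Py = np.stack([legendre.legval(y, [0.0] * p + [1.0]) for p in range(order + 1)])
    dx, dy = 2.0 / (w - 1), 2.0 / (h - 1)
    lam = np.zeros((order + 1, order + 1))
    for p in range(order + 1):
        for q in range(order + 1):
            norm = (2 * p + 1) * (2 * q + 1) / 4.0  # textbook normalisation
            lam[p, q] = norm * dx * dy * np.sum(Py[q][:, None] * Px[p][None, :] * image)
    return lam

# e.g. a crude 64x64 'shape': an off-centre filled rectangle.
mask = np.zeros((64, 64))
mask[10:40, 20:50] = 1.0
descriptor = legendre_moments(mask, order=4).ravel()  # 25-element feature vector
```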

Relevance: 30.00%

Abstract:

The semantic web represents a current research effort to increase the capability of machines to make sense of content on the web. In this class, Peter Scheir will give a guest lecture on the basic principles underlying the semantic web vision, including RDF, OWL and other standards.
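As a small, concrete taste of the standards the lecture covers, the sketch below builds and serializes a two-triple RDF graph with the Python rdflib library; the ex: namespace and the alice resource are invented for illustration, not lecture content.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF, RDF

EX = Namespace("http://example.org/")  # hypothetical namespace

g = Graph()
g.bind("foaf", FOAF)
g.add((EX.alice, RDF.type, FOAF.Person))        # ex:alice a foaf:Person .
g.add((EX.alice, FOAF.name, Literal("Alice")))  # ex:alice foaf:name "Alice" .

print(g.serialize(format="turtle"))
```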

Relevance: 30.00%

Abstract:

Sivercultur@ is a Web 2.0 space that strengthens the artistic and cultural initiatives on offer in the city, provides tools for disseminating and circulating information, and lets users be both consumers and generators of content through the strategic use of ICTs.

Relevance: 30.00%

Abstract:

Lecture on IA and web design (parts 1 and 2 of 3). In the Web 2.0 talk I reference Aral Balkan; his talk video, from his presentation at the Norwegian developers conference, is included here.

Relevance: 30.00%

Abstract:

From its inception as a global hypertext system, the Web has evolved into a universal platform for deploying loosely coupled distributed applications. 2^W is a result of the exponentially growing Web building on itself to move from a Web of content to a Web of applications.

Relevance: 30.00%

Abstract:

Getting content from server to client can be more complicated than we have discussed so far. This lecture discusses how caching and content delivery networks help to make the Web work.
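As a small illustration of the revalidation machinery such a lecture typically covers, the sketch below issues a conditional GET with an ETag validator using the Python requests library; the URL is hypothetical, and a 304 reply depends on the server's cache configuration.

```python
import requests

url = "https://example.org/article.html"  # hypothetical resource

first = requests.get(url)
etag = first.headers.get("ETag")  # opaque validator the origin may supply

if etag:
    # Ask "has this changed?" instead of re-downloading the body.
    second = requests.get(url, headers={"If-None-Match": etag})
    if second.status_code == 304:
        print("cached copy still valid; no body transferred")
    else:
        print("resource changed; fresh body received")
```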

Relevance: 30.00%

Abstract:

Abstract 1: Social networks such as Twitter are often used for disseminating and collecting information during natural disasters, and their potential for use in disaster management has been acknowledged. However, a more nuanced understanding of the communication that takes place on social networks is required to integrate this information more effectively into disaster management processes. The type and value of the information shared should be assessed to determine the benefits and issues, with credibility and reliability as known concerns. Mapping tweets onto the modelled stages of a disaster can be a useful evaluation for determining the benefits and drawbacks of using data from social networks, such as Twitter, in disaster management. A thematic analysis of tweets' content, language and tone during the UK storms and floods of 2013/14 was conducted. Manual scripting was used to determine the official sequence of events and to classify the stages of the disaster into the phases of the Disaster Management Lifecycle, producing a timeline. Twenty-five topics discussed on Twitter emerged, and three key types of tweets, based on language and tone, were identified. The timeline represents the events of the disaster, according to the Met Office reports, classified into B. Faulkner's Disaster Management Lifecycle framework. Context is provided when the analysed tweets are observed against the timeline, illustrating a potential basis and benefit for mapping tweets into the Disaster Management Lifecycle phases. Comparing the number of tweets submitted in each month with the timeline suggests that users tweet more as an event heightens and persists; users also generally express greater emotion and urgency in their tweets. This paper concludes that thematic analysis of content on social networks, such as Twitter, can be useful in gaining additional perspectives for disaster management. It demonstrates that mapping tweets into the phases of a Disaster Management Lifecycle model can have benefits in the recovery phase, not just the response phase, potentially improving future policies and activities.

Abstract 2: The current execution of privacy policies, as a mode of communicating information to users, is unsatisfactory. Social networking sites (SNS) exemplify this issue, attracting growing concerns regarding their use of personal data and its effect on user privacy, which demonstrates the need for more informative policies. However, SNS lack the incentives required to improve policies, which is exacerbated by the difficulty of creating a policy that is both concise and compliant. Standardization addresses many of these issues and provides benefits for users and SNS, although it is only possible if policies share attributes which can be standardized. This investigation used thematic analysis and cross-document structure theory to assess the similarity of attributes between the privacy policies (as available in August 2014) of the six most frequently visited SNS globally. Using the Jaccard similarity coefficient, two types of attribute were measured: the clauses used by SNS, and the coverage of forty recommendations made by the UK Information Commissioner's Office. Analysis showed that while similarity in the clauses used was low, similarity in the recommendations covered was high, indicating that SNS use different clauses to convey similar information. The analysis also showed that the low similarity in the clauses was largely due to differences in semantics, elaboration and functionality between SNS. This paper therefore proposes that the policies of SNS already share attributes, indicating the feasibility of standardization, and makes five recommendations, based on the findings of the investigation, to begin facilitating it.
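The Jaccard similarity coefficient used in Abstract 2 has a simple closed form: the size of the intersection of two attribute sets divided by the size of their union. A toy sketch, with invented clause sets rather than the actual policy data:

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two sets (1.0 = identical, 0.0 = disjoint)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Invented example clause sets, not the studied SNS policies.
policy_a = {"data collection", "third-party sharing", "cookies"}
policy_b = {"data collection", "cookies", "retention period"}
print(f"similarity: {jaccard(policy_a, policy_b):.2f}")  # prints 0.50
```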

Relevance: 30.00%

Abstract:

This project to create a web application on the market profiles of the Canadian provinces consists of compiling information from various academic sources in order to create an electronic guide that gives Colombian exporters accurate, up-to-date information on each of the Canadian provinces.

Relevance: 30.00%

Abstract:

The aim is to enable best teaching practices through information technology so as to make learning more effective for students, and to give teachers simple, flexible access to all of its potential, while maximizing the efficiency and suitability of its implementation using information technology. An e-learning modeling theory is defined that, on the one hand, provides a global view of e-learning modeling and, on the other, fully models, from different points of view, aspects where shortcomings are detected. Another objective is to create a new adaptive hint system for intelligent tutoring using Semantic Web techniques, in which several aspects of the theory are applied. This thesis provides an e-learning modeling theory that includes a global view of what to model and how, the interrelations between different concepts and elements, an ideal vision of e-learning, a proposed life-cycle development process, and a general plan for evaluating the different aspects involved. In addition, as part of this theory, the relations between learning management system functionality and current e-learning standards have been analyzed; a new model extending UML and another based on the IMS-CP (Content Packaging) specification have been defined for modeling complete courses in learning management systems; contributions have been made to several authoring tools that can be seen as natural-language models of different aspects of e-learning, so that teachers without strong technological knowledge can use them easily; and a new theory of personalized adaptation rules has been created, whose rules are atomic, reusable, interchangeable, and interoperable. A new hint specification for problem-based learning has been defined, which gathers functionality from other state-of-the-art systems but also includes new functionality based on original ideas, with a pedagogical justification for each aspect. A mapping to XML and another representation in UML have been established. Likewise, an authoring tool has been designed that lets teachers without strong technological knowledge create hint-based exercises according to the specification. To put this hint model into practice, a hint-player module has been implemented in Python as an extension to the XTutor intelligent tutor. This player can deploy hint-based exercises that cover the cases of the newly defined specification and makes them available to students via the Web. An innovative competition tool has also been designed to harness motivation together with problem-based learning.
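The abstract mentions a mapping of the hint specification to XML but does not give the schema, so the element and attribute names below are purely hypothetical; the sketch only illustrates, with Python's standard xml.etree module, the kind of parsing a hint player performs.

```python
import xml.etree.ElementTree as ET

# Entirely invented hint document; not the thesis's actual XML mapping.
HINTS_XML = """\
<exercise id="ex1">
  <hint order="1" penalty="0">Re-read the problem statement.</hint>
  <hint order="2" penalty="5">Consider the base case first.</hint>
</exercise>
"""

root = ET.fromstring(HINTS_XML)
# Present hints in order, tracking the score penalty each one costs.
for hint in sorted(root.findall("hint"), key=lambda h: int(h.get("order"))):
    print(f"hint {hint.get('order')} (penalty {hint.get('penalty')}): {hint.text}")
```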

Relevance: 30.00%

Abstract:

When publishing information on the web, one expects it to reach all the people who could be interested in it. This is mainly achieved with general-purpose indexing and search engines such as Google, the most widely used today. In the particular case of the geographic information (GI) domain, exposing content to mainstream search engines is a complex task that needs specific actions. On many occasions it is convenient to provide a website with a specially tailored search engine. Such is the case for online dictionaries (Wikipedia, WordReference), stores (Amazon, eBay), and generally all sites holding thematic databases. Due to the proliferation of these engines, A9.com proposed a standard interface called OpenSearch, used by modern web browsers to manage custom search engines. Geographic information can also benefit from the use of specific search engines. We can distinguish two main approaches in GI retrieval efforts: on one hand, classical OGC standardization (CSW, WFS filters), which is very complex for the mainstream user; on the other, the neogeographer's approach, usually in the form of specific APIs lacking a common query interface and standard geographic formats. A draft 'geo' extension for OpenSearch has been proposed. It adds geographic filtering for queries and recommends a set of simple, standard geographic response formats, such as KML, Atom and GeoRSS. This proposal enables standardization while keeping simplicity, thus covering a wide range of use cases in both the OGC and neogeography paradigms. In this article we analyze the OpenSearch geo extension in detail, along with its use cases, demonstrating its applicability to both the SDI and the geoweb. Open-source implementations are presented as well.
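As an illustration of the simplicity the draft geo extension aims for, the sketch below builds a keyword-plus-bounding-box query URL; the bbox value follows the extension's "west,south,east,north" convention for the {geo:box?} template parameter, while the endpoint and query-string names are invented.

```python
from urllib.parse import urlencode

def build_query(terms, bbox):
    """Build a keyword + bounding-box search URL (illustrative only)."""
    params = {
        "q": terms,                        # fills {searchTerms}
        "bbox": ",".join(map(str, bbox)),  # fills {geo:box?}
    }
    return "https://example.org/opensearch?" + urlencode(params)

# Search for 'rivers' within an approximate box around Zaragoza, Spain.
print(build_query("rivers", (-1.2, 41.5, -0.7, 41.8)))
```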

Relevance: 30.00%

Abstract:

Results are presented from a new web application called OceanDIVA - Ocean Data Intercomparison and Visualization Application. This tool reads hydrographic profiles and ocean model output and presents the data on either depth levels or isotherms for viewing in Google Earth, or as probability density functions (PDFs) of regional model-data misfits. As part of the CLIVAR Global Synthesis and Observations Panel, an intercomparison of water mass properties of various ocean syntheses has been undertaken using OceanDIVA. Analysis of model-data misfits reveals significant differences between the water mass properties of the syntheses, such as the ability to capture mode water properties.
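As a toy illustration of the PDF diagnostic described above, the sketch below bins synthetic model-minus-observation misfits into an empirical probability density with NumPy; the numbers are random stand-ins, not OceanDIVA output.

```python
import numpy as np

rng = rng = np.random.default_rng(42)
observed = rng.normal(loc=10.0, scale=1.0, size=500)            # e.g. temperature (degC)
modelled = observed + rng.normal(loc=0.3, scale=0.5, size=500)  # a warm-biased model

misfit = modelled - observed
density, edges = np.histogram(misfit, bins=30, density=True)  # empirical PDF
centres = 0.5 * (edges[:-1] + edges[1:])

print(f"mean misfit: {misfit.mean():+.2f} degC")
for c, d in zip(centres[::6], density[::6]):  # coarse text preview of the PDF
    print(f"{c:+.2f}: {'#' * int(40 * d)}")
```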

Relevance: 30.00%

Abstract:

The construction industry has incurred a considerable amount of waste as a result of poor management of the logistics supply chain network, so managing logistics in the construction industry is critical. An effective logistics system ensures delivery of the right products and services to the right players at the right time while minimising costs and rewarding all sectors based on the value they add to the supply chain. This paper reports on an ongoing research study on the concept of context-aware service delivery in construction project supply chain logistics. As part of the emerging wireless technologies, an Intelligent Wireless Web (IWW) with context-aware computing capability represents the next generation of ICT applications for construction-logistics management. This intelligent system has the potential to serve and improve construction logistics through access to context-specific data, information and services. Existing mobile communication deployments in the construction industry rely on static modes of information delivery and do not take into account the worker's changing context and dynamic project conditions. The major problems in these applications are the lack of context-specificity in the distribution of information, services and other project resources, and the lack of cohesion with the existing desktop-based ICT infrastructure. The research focuses on identifying context dimensions such as user context, environmental context and project context; selecting technologies to capture context parameters, such as wireless sensors and RFID; and selecting supporting technologies such as wireless communication, the Semantic Web, Web Services, agents, etc. The process of integrating context-aware computing and web services to create an intelligent collaboration environment for managing construction logistics will take into account all the necessary critical parameters, such as storage, transportation, distribution and assembly, both off-site and on-site.

Relevance: 30.00%

Abstract:

There are still major challenges in the area of automatic indexing and retrieval of digital data. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieving such data based on semantic content rather than keywords. To enable intelligent web interactions, or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. Research has been ongoing for some years in the field of ontological engineering, with the aim of using ontologies to add knowledge to information. In this paper we describe the architecture of a system designed to automatically and intelligently index huge repositories of special-effects video clips, based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval.