832 results for Web content adaptation


Relevance: 30.00%

Abstract:

For the past five years, the Faculty of Social and Communication Sciences, through the Department of Journalism II, has organised the International Conference on Cyberjournalism and Web 2.0, an event devoted to journalism and the Internet in general and to Web 2.0 in particular. Web 2.0 is a concept in which the real protagonists are the audiences: the public is becoming the editor of information, deciding how it wants that information presented and forming communities in the process. Web 2.0 reinforces the idea of the user as a creator, and not merely a consumer, of media. People who used to be customers of information are gradually becoming publishers, and many of the applications associated with Web 2.0 aim to help them organise and publish their content. This year's conference, held on 17 and 18 November at the Bizkaia Aretoa, is entitled "Are audiences indicators of quality?". This edition will try to establish which strategies the leading news media are adopting now that audiences demand more participation and, as a consequence, User-Generated Content is increasingly being accepted. Its characteristics, tools, impact and consequences will be explored in order to understand, from a critical standpoint, the nature and scope of these new models. The aim is once again to bring together specialists in the field to analyse and debate questions centred on the practice of cyberjournalism today in the light of new business, professional and training realities. The challenges and changes brought about by convergence and multitextuality, by so-called "citizen journalism", by technological innovation and by entrepreneurial experiences in this area will be prominent topics. The conference is also intended to be an ideal occasion for updating scientific knowledge of cyberjournalism, with the participation of both national and international academics who are leading figures in this research field.

Relevance: 30.00%

Abstract:

Background: Two distinct trends are emerging with respect to how data is shared, collected, and analyzed within the bioinformatics community. First, Linked Data, exposed as SPARQL endpoints, promises to make data easier to collect and integrate by moving towards the harmonization of data syntax, descriptive vocabularies, and identifiers, as well as providing a standardized mechanism for data access. Second, Web Services, often linked together into workflows, normalize data access and create transparent, reproducible scientific methodologies that can, in principle, be re-used and customized to suit new scientific questions. Constructing queries that traverse semantically rich Linked Data requires substantial expertise, yet traditional RESTful or SOAP Web Services cannot adequately describe the content of a SPARQL endpoint. We propose that content-driven Semantic Web Services can enable facile discovery of Linked Data, independent of their location. Results: We use a well-curated Linked Dataset, OpenLifeData, and utilize its descriptive metadata to automatically configure a series of more than 22,000 Semantic Web Services that expose all of its content according to the SADI set of design principles. The OpenLifeData SADI services are discoverable via queries to the SHARE registry and easy to integrate into new or existing bioinformatics workflows and analytical pipelines. We demonstrate the utility of this system by comparing Web Service-mediated data access with traditional SPARQL, and note that this approach not only simplifies data retrieval but also provides protection against resource-intensive queries. Conclusions: We show, through a variety of clients and examples of varying complexity, that data from the myriad OpenLifeData endpoints can be recovered without any prior knowledge of the content or structure of the SPARQL endpoints. We also demonstrate that, via clients such as SHARE, the complexity of federated SPARQL queries is dramatically reduced.
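As a rough illustration of the kind of Linked Data access the abstract describes, the sketch below queries a SPARQL endpoint with the SPARQLWrapper library. The endpoint URL and the exploratory query are assumptions for illustration only; they are not taken from the actual OpenLifeData/SADI deployment.

```python
# Minimal sketch: querying a Linked Data SPARQL endpoint from Python.
# The endpoint URL and the exploratory query are illustrative assumptions,
# not the actual OpenLifeData/SADI configuration.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "http://sparql.openlifedata.org/"  # assumed endpoint URL

sparql = SPARQLWrapper(ENDPOINT)
sparql.setReturnFormat(JSON)
# Exploratory query: list the most-used predicates in the dataset, the kind
# of descriptive metadata from which SADI-style services can be generated.
sparql.setQuery("""
    SELECT ?p (COUNT(*) AS ?uses)
    WHERE { ?s ?p ?o }
    GROUP BY ?p
    ORDER BY DESC(?uses)
    LIMIT 10
""")

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["p"]["value"], row["uses"]["value"])
```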

Relevance: 30.00%

Abstract:

This project addresses the problem of ad blocking on today's web. Drawing on the views of several authors on the subject, it analyses the problems raised by the use of ad blockers and studies how these blockers work. In particular, the most popular extension in this field, AdBlock Plus, is reverse-engineered to examine how it is built and how it operates. A series of possible solutions to the problem is then proposed, and one of them is developed and implemented. The result improves AdBlock Plus in the sense that it gives the user more freedom to choose what is blocked, while giving advertisers and content providers the chance to preserve their advertising-based business model.
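To make the blocking mechanism concrete, here is a heavily simplified, hypothetical sketch of how an extension in the style of AdBlock Plus matches request URLs against filter-list rules. The rules shown and the translation to regular expressions are illustrative assumptions, not AdBlock Plus's actual implementation.

```python
# Simplified, hypothetical sketch of filter-list URL matching in the spirit
# of Adblock Plus filter lists. Real lists support many more features
# (element hiding, $ options, rule priorities, etc.).
import re

def rule_to_regex(rule: str) -> str:
    """Convert a basic blocking rule to a regular expression."""
    pattern = re.escape(rule)
    pattern = pattern.replace(r"\*", ".*")          # * is a wildcard
    pattern = pattern.replace(r"\^", r"[^\w.%-]")   # ^ matches a separator char
    if pattern.startswith(re.escape("||")):
        # ||example.com anchors at the start of a (sub)domain
        pattern = r"^https?://([^/]+\.)?" + pattern[len(re.escape("||")):]
    return pattern

BLOCK_RULES = ["||ads.example.com^", "/banner/*"]      # assumed blocking rules
ALLOW_RULES = ["@@||ads.example.com/acceptable/*"]     # assumed exception rule

def is_blocked(url: str) -> bool:
    for allow in ALLOW_RULES:
        if re.search(rule_to_regex(allow[2:]), url):
            return False  # exception (@@) rules win
    return any(re.search(rule_to_regex(r), url) for r in BLOCK_RULES)

print(is_blocked("http://ads.example.com/banner/1.gif"))       # True
print(is_blocked("http://ads.example.com/acceptable/ad.png"))  # False
```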

Relevance: 30.00%

Abstract:

This work proposes a simple, general-purpose ontology model capable of describing the most basic concepts of the knowledge domain of non-specialised Brazilian online newspapers, grounded both in practice and conceptually, and in line with the principles of the Semantic Web. Starting from a new way of classifying and organising content, the proposed ontology should be able to meet the needs shared by both parties, newspaper and reader, which are, in short, the search for and retrieval of information.
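As a purely illustrative sketch of what a lightweight, Semantic-Web-compliant newspaper ontology could look like, the fragment below declares a few classes and properties with rdflib. All class and property names, and the namespace, are hypothetical and are not taken from the ontology actually proposed in the work.

```python
# Hypothetical sketch of a minimal news ontology with rdflib.
# Class/property names and the namespace are illustrative only.
from rdflib import Graph, Namespace, RDF, RDFS, Literal

NEWS = Namespace("http://example.org/news#")  # assumed namespace
g = Graph()
g.bind("news", NEWS)

# Basic classes: an article, a newspaper section, and an author.
for cls in (NEWS.Article, NEWS.Section, NEWS.Author):
    g.add((cls, RDF.type, RDFS.Class))

# Basic properties linking articles to sections and authors.
g.add((NEWS.publishedIn, RDF.type, RDF.Property))
g.add((NEWS.publishedIn, RDFS.domain, NEWS.Article))
g.add((NEWS.publishedIn, RDFS.range, NEWS.Section))
g.add((NEWS.writtenBy, RDF.type, RDF.Property))
g.add((NEWS.writtenBy, RDFS.domain, NEWS.Article))
g.add((NEWS.writtenBy, RDFS.range, NEWS.Author))

# One example instance, so readers and agents can query it back.
g.add((NEWS.article1, RDF.type, NEWS.Article))
g.add((NEWS.article1, RDFS.label, Literal("Example headline")))
g.add((NEWS.article1, NEWS.publishedIn, NEWS.Economia))

print(g.serialize(format="turtle"))
```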

Relevance: 30.00%

Abstract:

Well into the 21st century, the use of the Internet and its advances affect not only individuals: companies must also evolve at the same pace and adapt their practices accordingly. With the arrival of Web 2.0, certain aspects of business have become obsolete and have had to adapt to the new era: the era of communication and interaction over the Internet. New business models have been created, value-chain activities have been improved, new marketing and corporate-communication strategies have emerged, and new sales channels have been built around the e-Commerce phenomenon. As for employees, companies have begun to value new competences related to the use of the Internet and Web 2.0. Some of these competences are common to many jobs, for example the use of social networks or information management; others are more specific and depend on the particular position. Finally, the emergence of Web 2.0 has forced companies to create new areas and positions, or to modify existing ones, in order to keep up with current trends. This is how the different professional profiles in Digital Strategy, Digital Marketing, Digital Content, Social Media, Big Data Analysis, e-Commerce and Mobile Marketing have arisen. These profiles are highly popular and in high demand among companies, and the number of digital positions is expected to grow even further, since these are the professions of the future.

Relevance: 30.00%

Abstract:

In the field of continuing education in health, various initiatives aim to train professionals using Information and Communication Technologies (ICTs). However, little is yet known about health professionals' use of the web as a formal learning strategy, and even less when informal learning is considered. Educational initiatives that use, and above all teach the use of, technology as a learning tool are still carried out very intuitively, by trial and error, given how quickly the technology itself has evolved. The general objective of this research is therefore to understand the profile, perceptions and social representations of web-based learning held by physicians, nurses and dentists, and the possible influence of that use on their professional routine. To meet this objective, a quali-quantitative methodology was employed through an online questionnaire containing closed and open questions, answered by 277 students of the Specialisation Course in Family Health offered by the Universidade do Estado do Rio de Janeiro (UERJ) node of the Universidade Aberta do Sistema Único de Saúde (UNA-SUS). The closed questions were analysed with descriptive statistics and non-parametric bivariate tests. The open questions were analysed in the light of social representations theory, using content analysis and free evocations. The results were presented as three conference papers and four articles submitted to high-quality academic journals. Based on the results, one concern that stands out is that the participants' use of the internet may be justified by, and restricted to, the simple consumption of information, to the detriment of the educational possibilities of cyberculture. Actions that support a more reflective practice are needed in order to reverse this possibly limited use of the potential of ICTs.

Relevance: 30.00%

Abstract:

The Architecture, Engineering, Construction and Facilities Management (AEC/FM) industry is rapidly becoming a multidisciplinary, multinational and multi-billion-dollar economy, involving large numbers of actors working concurrently at different locations and using heterogeneous software and hardware technologies. Since the beginning of the last decade, a great deal of effort has been spent within the field of construction IT to integrate data and information from most of the computer tools used to carry out engineering projects. For this purpose, a number of integration models have been developed, such as web-centric systems and construction project modeling, a useful approach for representing construction projects and integrating data from various civil engineering applications. In the modern, distributed and dynamic construction environment, it is important to retrieve and exchange information from different sources and in different data formats in order to improve the processes supported by these systems. Previous research demonstrated that a major hurdle to AEC/FM data integration in such systems is the variety of data types involved, and that a significant part of the data is stored in semi-structured or unstructured formats. New integrative approaches are therefore needed to handle non-structured data types such as images and text files. This research focuses on the integration of construction site images. These images are a significant part of construction documentation, with thousands of photographs stored in the site photo logs of large-scale projects. However, locating and identifying the image data needed for important decision-making processes is a hard and time-consuming task, and so far there have been no automated methods for associating these images with other related objects. Automated methods for the integration of construction images are therefore important for construction information management. During this research, processes for the retrieval, classification, and integration of construction images in AEC/FM model-based systems were explored. Specifically, a combination of techniques from the areas of image and video processing, computer vision, information retrieval, statistics, and content-based image and video retrieval was deployed to develop a methodology for retrieving construction site image data related to components of a project model. The method has been tested on construction site images from a variety of sources, including past and current building construction and transportation projects, and is able to automatically classify, store, integrate and retrieve image data files in inter-organizational systems so as to allow their use in project-management-related tasks.
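As one concrete flavour of the content-based retrieval techniques mentioned above, the sketch below ranks a folder of (hypothetical) construction-site photographs against a query image by colour-histogram similarity. The feature, the file layout, and the paths are illustrative assumptions, not the method developed in the research.

```python
# Illustrative content-based image retrieval: rank images by colour-histogram
# similarity to a query image. Feature and folder layout are assumptions.
from pathlib import Path
import numpy as np
from PIL import Image

def colour_histogram(path: Path, bins: int = 8) -> np.ndarray:
    """Joint RGB histogram, normalised to sum to 1."""
    rgb = np.asarray(Image.open(path).convert("RGB")).reshape(-1, 3)
    hist, _ = np.histogramdd(rgb, bins=(bins, bins, bins), range=[(0, 256)] * 3)
    hist = hist.ravel()
    return hist / hist.sum()

def similarity(h1: np.ndarray, h2: np.ndarray) -> float:
    """Histogram intersection: 1.0 means identical colour distributions."""
    return float(np.minimum(h1, h2).sum())

def rank_images(query: Path, image_dir: Path):
    """Return (score, path) pairs sorted from most to least similar."""
    q = colour_histogram(query)
    scored = [(similarity(q, colour_histogram(p)), p)
              for p in sorted(image_dir.glob("*.jpg"))]
    return sorted(scored, reverse=True)

# Example usage (paths are hypothetical):
# for score, path in rank_images(Path("query.jpg"), Path("site_photos/")):
#     print(f"{score:.3f}  {path.name}")
```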

Relevance: 30.00%

Abstract:

A 2-year investigation of the growth and food availability of silver carp and bighead carp was carried out using stable isotope and gut content analysis in a large pen in Meiliang Bay of Lake Taihu, China. Both silver carp and bighead exhibited significantly higher delta 13C in 2005 than in 2004, which can probably be attributed to two factors: (i) a difference in the isotopic composition at the base of the pelagic food web and (ii) differences in the composition and stable isotope signatures of the prey items. The significantly positive correlations between body length, body weight and stable isotope ratios indicated that the isotopic changes in silver carp and bighead resulted from the accumulation of biomass concomitant with rapid growth. Because of the drastic decrease of zooplankton in the 2005 diet, silver carp and bighead grew faster in 2004 than in 2005. Bighead carp occupied a lower trophic level than silver carp in 2005, as indicated by stable nitrogen isotope ratios, which is possibly explained by interspecific differences in prey species and food quality between silver carp and bighead.

Relevance: 30.00%

Abstract:

In this paper, accumulation and distribution of microcystins (MCs) were examined monthly, from June to November 2005, in six fish species of different trophic levels in Meiliang Bay, Lake Taihu, China. Microcystins were analyzed by liquid chromatography electrospray ionization mass spectrometry (LC-ESI-MS). Average recoveries from spiked fish samples were 67.7% for MC-RR, 85.3% for MC-YR, and 88.6% for MC-LR. The MCs (MC-RR + MC-YR + MC-LR) concentration in liver and gut content was highest in phytoplanktivorous fish, followed by omnivorous fish, and lowest in carnivorous fish, while the MCs concentration in muscle was highest in omnivorous fish, followed by phytoplanktivorous fish, and lowest in carnivorous fish. This is the first field study reporting MCs accumulation in fish gonads. The main uptake route of MC-YR in fish appears to be through the gills from dissolved MCs. The WHO limit for tolerable daily intake was exceeded only in common carp muscle.
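For context on the last point, a back-of-the-envelope check against the WHO guideline works roughly as sketched below. The concentration and consumption figures are invented for illustration; only the tolerable daily intake of 0.04 µg per kg of body weight per day is the commonly cited WHO value for MC-LR.

```python
# Illustrative estimated-daily-intake (EDI) check against the WHO provisional
# tolerable daily intake (TDI) for microcystin-LR.
# All input numbers except the TDI are hypothetical.
TDI_UG_PER_KG_BW = 0.04        # WHO provisional TDI, ug MC-LR / kg bw / day

mc_in_muscle_ug_per_g = 0.012  # assumed MC concentration in fish muscle (ug/g wet weight)
daily_fish_intake_g = 300      # assumed daily fish consumption (g)
body_weight_kg = 60            # assumed consumer body weight (kg)

edi = mc_in_muscle_ug_per_g * daily_fish_intake_g / body_weight_kg
verdict = "exceeds" if edi > TDI_UG_PER_KG_BW else "stays within"
print(f"EDI = {edi:.3f} ug/kg bw/day, which {verdict} the WHO TDI of {TDI_UG_PER_KG_BW}")
```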

Relevance: 30.00%

Abstract:

We propose the development of a World Wide Web image search engine that crawls the web collecting information about the images it finds, computes the appropriate image decompositions and indices, and stores this extracted information for searches based on image content. Indexing and searching images need not require solving the image understanding problem. Instead, the general approach should be to provide an arsenal of image decompositions and discriminants that can be precomputed for images. At search time, users can select a weighted subset of these decompositions to be used for computing image similarity measurements. While this approach avoids the search-time-dependent problem of labeling what is important in images, it still leaves several important open problems that require further research in the area of query by image content. We briefly explore some of these problems as they pertain to shape.
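The weighted-subset idea can be made concrete with a small sketch: given several precomputed feature vectors (decompositions) per image, a query-time similarity is a weighted sum of per-feature distances. The feature names, weights, and vector sizes below are illustrative assumptions, not the engine's actual decompositions.

```python
# Sketch of query-time image similarity as a weighted combination of
# precomputed feature distances. Feature names and weights are illustrative.
import numpy as np

def feature_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean distance between two precomputed feature vectors."""
    return float(np.linalg.norm(a - b))

def weighted_similarity(query_feats, candidate_feats, weights) -> float:
    """Lower score = more similar; weights select/emphasise decompositions."""
    return sum(w * feature_distance(query_feats[name], candidate_feats[name])
               for name, w in weights.items())

# Hypothetical precomputed decompositions for two images.
rng = np.random.default_rng(0)
img_a = {"colour_hist": rng.random(64), "shape_moments": rng.random(7)}
img_b = {"colour_hist": rng.random(64), "shape_moments": rng.random(7)}

# At search time the user emphasises shape over colour.
weights = {"colour_hist": 0.3, "shape_moments": 0.7}
print(weighted_similarity(img_a, img_b, weights))
```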

Relevance: 30.00%

Abstract:

With web caching and cache-related services like CDNs and edge services playing an increasingly significant role in the modern internet, the problem of the weak consistency and coherence provisions in current web protocols is becoming increasingly significant and drawing the attention of the standards community [LCD01]. Toward this end, we present definitions of consistency and coherence for web-like environments, that is, distributed client-server information systems where the semantics of interactions with resources are more general than the read/write operations found in memory hierarchies and distributed file systems. We then present a brief review of proposed mechanisms which strengthen the consistency of caches in the web, focusing upon their conceptual contributions and their weaknesses in real-world practice. These insights motivate a new mechanism, which we call "Basis Token Consistency" or BTC; when implemented at the server, this mechanism allows any client (independent of the presence and conformity of any intermediaries) to maintain a self-consistent view of the server's state. This is accomplished by annotating responses with additional per-resource application information which allows client caches to recognize the obsolescence of currently cached entities and identify responses from other caches which are already stale in light of what has already been seen. The mechanism requires no deviation from the existing client-server communication model, and does not require servers to maintain any additional per-client state. We discuss how our mechanism could be integrated into a fragment-assembling Content Management System (CMS), and present a simulation-driven performance comparison between the BTC algorithm and the use of the Time-To-Live (TTL) heuristic.
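To give a feel for the annotation idea, here is a very rough, hypothetical sketch of a client cache that drops entries built from "basis" resources it has since seen in a newer version. The token format and eviction logic are invented for illustration; this is not the BTC protocol as specified in the paper.

```python
# Very rough, hypothetical sketch of basis-token style cache invalidation:
# the server annotates each response with the versions of the underlying
# "basis" resources it was built from; the client cache then drops any cached
# entry built from a basis version older than one it has already observed.
# Illustrative only; not the actual BTC mechanism.
class TokenAwareCache:
    def __init__(self):
        self.entries = {}        # url -> (body, {basis_id: version})
        self.latest_seen = {}    # basis_id -> newest version observed so far

    def store(self, url, body, basis_tokens):
        self.entries[url] = (body, dict(basis_tokens))
        self._observe(basis_tokens)
        self._evict_stale()

    def _observe(self, basis_tokens):
        for basis, version in basis_tokens.items():
            if version > self.latest_seen.get(basis, -1):
                self.latest_seen[basis] = version

    def _evict_stale(self):
        stale = [url for url, (_, toks) in self.entries.items()
                 if any(v < self.latest_seen.get(b, -1) for b, v in toks.items())]
        for url in stale:
            del self.entries[url]

cache = TokenAwareCache()
cache.store("/page", "old page", {"article:42": 3})
# A later response (possibly served by another cache) carries a newer basis
# version, so the previously cached /page is recognised as obsolete.
cache.store("/fragment", "new fragment", {"article:42": 4})
print("/page" in cache.entries)  # False
```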

Relevance: 30.00%

Abstract:

Dynamic service aggregation techniques can exploit skewed access popularity patterns to reduce the costs of building interactive VoD systems. These schemes seek to cluster and merge users into single streams by bridging the temporal skew between them, thus improving server and network utilization. Rate adaptation and secondary content insertion are two such schemes. In this paper, we present and evaluate an optimal scheduling algorithm for inserting secondary content in this scenario. The algorithm runs in polynomial time, and is optimal with respect to the total bandwidth usage over the merging interval. We present constraints on content insertion which make the overall QoS of the delivered stream acceptable, and show how our algorithm can satisfy these constraints. We report simulation results which quantify the excellent gains due to content insertion. We discuss dynamic scenarios with user arrivals and interactions, and show that content insertion reduces the channel bandwidth requirement to almost half. We also discuss differentiated service techniques, such as N-VoD and premium no-advertisement service, and show how our algorithm can support these as well.
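The basic merging arithmetic behind such schemes can be shown with a tiny sketch: if a trailing stream lags the leading one by some temporal skew, inserting that much secondary content into the leading stream, or playing the trailing stream slightly faster, lets the two users share a single stream afterwards. The numbers below are invented, and the calculation is a simplification of the optimal scheduler presented in the paper.

```python
# Toy illustration of bridging the temporal skew between two VoD streams.
# Numbers are invented; the paper's algorithm optimises this schedule.
skew_s = 90          # trailing user started 90 s after the leading user

# Option 1: insert secondary content (ads/trailers) into the leading stream.
secondary_needed_s = skew_s   # leading stream holds primary content for 90 s
print(f"Secondary content to insert: {secondary_needed_s} s")

# Option 2: rate adaptation - play the trailing stream slightly faster.
speedup = 1.05                # 5% faster playback, assumed acceptable QoS
merge_time_s = skew_s / (speedup - 1.0)
print(f"Streams merge after {merge_time_s:.0f} s of accelerated playback "
      f"({merge_time_s / 60:.0f} min)")
```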

Relevance: 30.00%

Abstract:

Web caching aims to reduce network traffic, server load, and user-perceived retrieval delays by replicating "popular" content on proxy caches that are strategically placed within the network. While key to effective cache utilization, popularity information (e.g. relative access frequencies of objects requested through a proxy) is seldom incorporated directly in cache replacement algorithms. Rather, other properties of the request stream (e.g. temporal locality and content size), which are easier to capture in an on-line fashion, are used to indirectly infer popularity information, and hence drive cache replacement policies. Recent studies suggest that the correlation between these secondary properties and popularity is weakening due in part to the prevalence of efficient client and proxy caches (which tend to mask these correlations). This trend points to the need for proxy cache replacement algorithms that directly capture and use popularity information. In this paper, we (1) present an on-line algorithm that effectively captures and maintains an accurate popularity profile of Web objects requested through a caching proxy, (2) propose a novel cache replacement policy that uses such information to generalize the well-known GreedyDual-Size algorithm, and (3) show the superiority of our proposed algorithm by comparing it to a host of recently-proposed and widely-used algorithms using extensive trace-driven simulations and a variety of performance metrics.
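The generalisation described above is in the spirit of frequency-aware GreedyDual-Size. The sketch below shows a plain GreedyDual-Size-with-frequency eviction rule (priority = aging value + frequency × cost / size) as one concrete, simplified reading of that idea; it is not the authors' exact replacement policy.

```python
# Simplified GreedyDual-Size-with-Frequency style cache, shown as one concrete
# reading of "popularity-aware GreedyDual-Size"; not the paper's exact policy.
class GDSFCache:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.aging = 0.0                 # inflation value of the last eviction
        self.objects = {}                # key -> (size, cost, freq, priority)

    def _priority(self, cost, size, freq):
        return self.aging + freq * cost / size

    def access(self, key, size, cost=1.0):
        if key in self.objects:
            s, c, f, _ = self.objects[key]
            f += 1                        # popularity (access frequency) counts
            self.objects[key] = (s, c, f, self._priority(c, s, f))
            return "hit"
        while self.used + size > self.capacity and self.objects:
            victim = min(self.objects, key=lambda k: self.objects[k][3])
            self.aging = self.objects[victim][3]   # inflate clock on eviction
            self.used -= self.objects[victim][0]
            del self.objects[victim]
        self.objects[key] = (size, cost, 1, self._priority(cost, size, 1))
        self.used += size
        return "miss"

cache = GDSFCache(capacity_bytes=100)
for key, size in [("a", 60), ("a", 60), ("b", 30), ("a", 60)]:
    print(key, cache.access(key, size))   # miss, hit, miss, hit
```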

Relevance: 30.00%

Abstract:

Chronic sustained hypoxia (CH) induces functional weakness, atrophy, and mitochondrial remodelling in the diaphragm muscle. Animal models of CH present with changes similar to those in patients with respiratory-related disease; elucidating the molecular mechanisms driving these adaptations is therefore clinically important. We hypothesize that reactive oxygen species (ROS) are pivotal in diaphragm muscle adaptation to CH. C57BL/6J mice were exposed to CH (FiO2 = 0.1) for one, three, and six weeks. The sternohyoid (an upper airway dilator), extensor digitorum longus (EDL), and soleus were studied as reference muscles alongside the diaphragm. The diaphragm was profiled using a redox proteomics approach followed by mass spectrometry. Redox-modified metabolic enzyme activities and atrophy signalling were then assessed using spectrophotometric assays and ELISA. Diaphragm isotonic performance was assessed after six weeks of CH ± chronic antioxidant supplementation. Protein carbonyl and free thiol content in the diaphragm were increased and decreased, respectively, after six weeks of CH, indicative of protein oxidation. These changes were temporally modulated and muscle specific. Extensive remodelling of metabolic proteins occurred and the stress reached the cross-bridge. Metabolic enzyme activities in the diaphragm were, for the most part, decreased by CH, and differential muscle responses were observed. Redox-sensitive chymotrypsin-like proteasome activity of the diaphragm was increased, and atrophy signalling was observed through decreased phospho-FOXO3a and phospho-mTOR. Phospho-p38 MAPK content was increased, and this was attenuated by antioxidant treatment. Hypoxia decreased the power-generating capacity of the diaphragm, and this was restored by N-acetyl-cysteine (NAC) but not by tempol. Redox remodelling is pivotal for diaphragm adaptation to chronic sustained hypoxia. Muscle changes depend on the duration of the hypoxic stimulus, the activity profile of the muscle, and the molecular composition of the muscle. The working respiratory muscles and slow oxidative fibres are particularly susceptible. NAC (an antioxidant) may be useful as an adjunct therapy in respiratory-related diseases characterised by hypoxic stress.

Relevance: 30.00%

Abstract:

Kurzel (2004) points out that researchers in e-learning and educational technologists, in a quest to provide improved Learning Environments (LE) for students, are focusing on personalising the experience through a Learning Management System (LMS) that attempts to tailor the LE to the individual (see, amongst others, Eklund & Brusilovsky, 1998; Kurzel, Slay, & Hagenus, 2003; Martinez, 2000; Sampson, Karagiannidis, & Kinshuk, 2002; Voigt & Swatman, 2003). According to Kurzel (2004), this tailoring can affect the content and how it is accessed, the media forms used, the method of instruction employed, and the learning styles supported. This project aims to move personalisation forward to the next generation by tackling the issue of personalised e-learning platforms as prerequisites for building and generating individualised learning solutions. The proposed development is an e-learning platform with personalisation built in. This personalisation is set at different levels within the system, starting with the information that the user explicitly enters and going down to information inferred by the system's processing engine. This paper discusses some of our early work and ideas.
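As a purely hypothetical sketch of the "personalisation at different levels" idea, the fragment below combines explicitly declared preferences with preferences inferred from interaction history when selecting learning content. All resource names, fields, and rules are invented for illustration and are not part of the platform described above.

```python
# Hypothetical sketch: merge explicit learner preferences with inferred ones
# and use the result to pick a learning resource. Names/rules are invented.
from collections import Counter

RESOURCES = [
    {"title": "Intro video", "media": "video", "level": "beginner"},
    {"title": "Lecture notes", "media": "text", "level": "beginner"},
    {"title": "Interactive lab", "media": "interactive", "level": "advanced"},
]

def inferred_preferences(history):
    """Infer a media-type preference from what the learner actually opened."""
    counts = Counter(item["media"] for item in history)
    return {"media": counts.most_common(1)[0][0]} if counts else {}

def recommend(explicit_prefs, history):
    prefs = {**inferred_preferences(history), **explicit_prefs}  # explicit wins
    def score(res):
        return sum(res.get(k) == v for k, v in prefs.items())
    return max(RESOURCES, key=score)

history = [{"media": "video"}, {"media": "video"}, {"media": "text"}]
print(recommend({"level": "beginner"}, history)["title"])   # "Intro video"
```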