900 results for applicazione web, semantic web, semantic publishing, angularJS, user experience, usabilità
Abstract:
Media outlets have recently added many new tools to their websites in order to broaden the dialogue with their users, a feature that has come to be called interactivity. The objective of this research is to describe the interactive resources of Chilean media websites. The analysis was conducted on 20 sites using a scheme of six dimensions that identifies the interactive forms in use today. The findings indicate that Chilean digital media are expanding the possibilities of dialogue with users on social media, especially Twitter and Facebook, but that media-user interaction is monological, that is to say, it runs from the media to the user, with very little feedback.
Abstract:
Thanks to the growth, expansion and popularization of the World Wide Web, its technological development is of increasing importance to society. The symbiosis between these two environments has given social factors greater influence over the platform's innovations and fostered a much more practical approach. Our objective in this article is to describe, characterize and analyze the emergence and diffusion of the new hypertext standard that governs the Web: HTML5. At the same time, we explore this process in the light of several theories that bring together technology and society. We pay special attention to the users of the World Wide Web and to their everyday use of social media. We suggest that the development of web standards is influenced by the day-to-day use of this new class of technologies and applications.
Abstract:
A search query, being a very concise grounding of user intent, could potentially have many possible interpretations. Search engines hedge their bets by diversifying top results to cover multiple such possibilities so that the user is likely to be satisfied, whatever her intended interpretation may be. Diversified Query Expansion is the problem of diversifying query expansion suggestions, so that the user can specialize the query to better suit her intent even before perusing search results. We propose a method, Select-Link-Rank (SLR), that exploits semantic information from Wikipedia to generate diversified query expansions. SLR performs collective processing of terms and Wikipedia entities in an integrated framework, simultaneously diversifying query expansions and entity recommendations. SLR starts by selecting informative terms from the search results of the initial query, links them to Wikipedia entities, performs diversity-conscious entity scoring, and transfers that scoring to the term space to arrive at query expansion suggestions. Through an extensive empirical analysis and user study, we show that our method outperforms state-of-the-art diversified query expansion and diversified entity recommendation techniques.
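A minimal sketch of the select-link-rank stages named in this abstract, assuming a toy term-to-entity table and a crude topical-redundancy penalty; it is illustrative only, not the authors' implementation.

```python
# Illustrative sketch of the Select-Link-Rank (SLR) stages described above.
# The term-to-entity table and scoring weights are hypothetical stand-ins.
from collections import Counter

def select_terms(result_snippets, k=10):
    """Select informative terms from the initial query's result snippets
    (here: simple frequency; the paper uses a more principled selection)."""
    counts = Counter(w.lower() for s in result_snippets for w in s.split())
    return [t for t, _ in counts.most_common(k)]

def link_to_entities(terms, term_to_entity):
    """Link terms to Wikipedia entities via a (hypothetical) lookup table."""
    return {t: term_to_entity[t] for t in terms if t in term_to_entity}

def diversity_scores(linked, entity_topics):
    """Score entities, penalizing topical redundancy so expansions diversify."""
    seen_topics, scores = set(), {}
    for term, entity in linked.items():
        topic = entity_topics.get(entity)
        scores[term] = 0.5 if topic in seen_topics else 1.0
        seen_topics.add(topic)
    return scores

snippets = ["jaguar big cat habitat", "jaguar car dealership", "jaguar cat range"]
linked = link_to_entities(select_terms(snippets),
                          {"cat": "Jaguar_(animal)", "car": "Jaguar_Cars"})
expansions = sorted(diversity_scores(linked, {"Jaguar_(animal)": "nature",
                                              "Jaguar_Cars": "auto"}).items(),
                    key=lambda kv: -kv[1])
print(expansions)  # diversified expansion terms with scores
```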
Abstract:
The World Wide Web Consortium, W3C, is known for standards like HTML and CSS, but there's a lot more to it than that: mobile, automotive, publishing, graphics, TV and more. Then there are horizontal issues like privacy, security, accessibility and internationalisation. Many of these assume that there is an underlying data infrastructure to power applications. In this session, W3C's Data Activity Lead, Phil Archer, will describe the overall vision for better use of the Web as a platform for sharing data and how that translates into recent, current and possible future work. What's the difference between using the Web as a data platform and as a glorified USB stick? Why does it matter? And what makes a standard a standard anyway? Speaker biography: Phil Archer is Data Activity Lead at W3C, the industry standards body for the World Wide Web, coordinating W3C's work in the Semantic Web and related technologies. He is most closely involved in the Data on the Web Best Practices, Permissions and Obligations Expression, and Spatial Data on the Web Working Groups. His key themes are interoperability through common terminology and URI persistence. As well as his work at the W3C, his career has encompassed broadcasting, teaching, linked data publishing, copy writing and, perhaps incongruously, countryside conservation. The common thread throughout has been a knack for communication, particularly communicating complex technical ideas to a more general audience.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Overweight problems have been on the rise in recent decades, particularly among young Quebecers. This increase is linked to eating habits that differ markedly from nutritional recommendations. In addition, the provincial government has introduced major changes to the Programme de formation de l'école québécoise in order to encourage the adoption of healthy lifestyle habits. To counter these problems of excess weight and poor eating habits, and to build on the school reform, the Web version of the Nutriathlon en équipe program was developed. The program aims to lead each participant to improve the quality of his or her diet by increasing and diversifying their consumption of vegetables, fruits and dairy products. The objectives of the present study are (1) to evaluate the impact of the program on the consumption of vegetables and fruits (VF) and dairy products (DP) by high school students, and (2) to evaluate the factors influencing the program's success among these young people. The results showed that, during the program and immediately afterwards, the intervention group reported a significant increase in VF and DP consumption compared with the control group. However, no medium-term effect could be observed. As factors facilitating the success of the Nutriathlon en équipe, the students mentioned the use of technology to record portions, the formation of teams, the involvement of teachers and family, and the creation of strategies to help complete the program. The students also mentioned barriers to its success, such as a lack of diligence in entering their data outside class hours, malfunctioning user codes, and the platform's incompatibility with certain devices such as tablets.
Abstract:
The business system known as Pyramid does not currently provide its users with a reasonable case management system for support issues. The system in place requires the customer to contact its provider via telephone to register new cases. In addition, the current system does not include any way for users to view their open cases without contacting the provider. A solution to this issue is to migrate the case management system from telephone contact to a web-based platform, where customers can more easily access their current cases and also create new cases directly through the website. This new system would reduce the time required to manually manage each individual case, for both customer and provider, resulting in an overall cost reduction for both parties. The result is a system divided into two sections: the first is an API created in Pyramid that acts as a web service, and the second is a website to which customers can connect. The website allows users to get an overview of their current cases and to create new cases directly through the site. All the information used by the website is obtained through the web service inside Pyramid. Analyzing the final design of the system, the developers were able to identify both positive and negative aspects of the final design, including whether the chosen platform was the optimal choice and what could be included if the system were developed further. The development process and the method used during development are also analyzed and discussed, along with the positive and negative aspects that were encountered. In addition, the cause and effect of working with a development team smaller than the suggested size are analyzed. Lastly, actions that could have been taken to prevent certain issues from occurring are discussed.
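A minimal sketch, using only Python's standard library, of the kind of case web service the report describes; the /cases endpoint, field names and in-memory store are hypothetical, not the actual Pyramid API.

```python
# Minimal sketch of a case-management web service like the one described:
# the website would call endpoints such as these instead of phoning support.
# Paths and fields are hypothetical, not the actual Pyramid API.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

CASES = [{"id": 1, "customer": "acme", "status": "open", "title": "Login fails"}]

class CaseHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # GET /cases -> list the caller's current cases
        if self.path == "/cases":
            body = json.dumps(CASES).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def do_POST(self):
        # POST /cases -> register a new case directly from the website
        if self.path == "/cases":
            length = int(self.headers["Content-Length"])
            case = json.loads(self.rfile.read(length))
            case["id"] = len(CASES) + 1
            CASES.append(case)
            self.send_response(201)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), CaseHandler).serve_forever()
```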
Abstract:
The use of ICT occupies an increasingly important place in our schools, marked above all by the evolution of technology and by the use of many Web 2.0 tools in educational contexts. This is especially noticeable in the subject of Visual and Technological Education (EVT), which is eminently practical in character and allows many digital tools to be explored, both to address the subject's contents and to create graphic and plastic products. With the emergence of Web 2.0 and the availability of thousands of new digital tools to Internet users, there is ever-growing interest in adopting methodologies and strategies that use these media to support more effective and motivating learning for students, articulating EVT's traditional media with the new digital media. In this context, the present study is the result of action research carried out within the Doctoral Program in Multimedia in Education at the University of Aveiro, in which the integration of Web, Web 2.0 and free software tools was implemented in the educational context of EVT, where both the traditional techniques most common in the subject and digital tools, supported by free software (and other free-to-use tools), the Web and Web 2.0, could be used and articulated to support the teaching and learning of the subject's various contents and areas of exploration. This study, designed in three cycles, initially involved the constitution of an extended community of practice of teachers, with six training classes bringing together a total of 112 teachers who intended to integrate digital tools into EVT. Beyond the survey, analysis, selection and cataloguing of the 430 digital tools identified, 371 support manuals for their use were produced, and these resources were made available on the EVTdigital site. In a second cycle, following the evaluation carried out, the EVTux distribution was created to simplify access to and use of digital tools in the context of EVT. Finally, the third cycle followed from the elimination of EVT from the curriculum of the second cycle of basic education and its replacement by two new subjects; a content analysis of the new curricular goals was performed and the application As ferramentas digitais do Mundo Visual was produced, designed to contextualize and index the digital tools selected for the new subject of Visual Education. The results of this study point clearly to the possibility of integrating digital tools into Visual and Technological Education (or, at present, Visual Education) to address its contents and areas of exploration, to the ease with which communities of practice can be formed (as was the case here) to collaborate in cataloguing these tools in the specific context of the subject, and to the need felt by teachers for information and training that keeps them up to date on integrating ICT into the curriculum. The study's limitations are also presented, above all the negative impact that the elimination of the subject had on teachers' motivation and their consequent participation in some phases of the work, as well as the difficulty of managing such a large and diverse team of collaborating teachers. Finally, suggestions for future studies are presented.
Abstract:
Edge-labeled graphs have proliferated rapidly over the last decade due to the increased popularity of social networks and the Semantic Web. In social networks, relationships between people are represented by edges, and each edge is labeled with a semantic annotation. Hence, a huge single graph can express many different relationships between entities. The Semantic Web represents each single fragment of knowledge as a triple (subject, predicate, object), which is conceptually identical to an edge from subject to object labeled with the predicate. A set of triples constitutes an edge-labeled graph on which knowledge inference is performed. Subgraph matching has been extensively used as a query language for patterns in the context of edge-labeled graphs. For example, in social networks, users can specify a subgraph matching query to find all people that have certain neighborhood relationships. Heavily used fragments of the SPARQL query language for the Semantic Web, and the graph queries of other graph DBMSs, can also be viewed as subgraph matching over large graphs. Though subgraph matching has been extensively studied as a query paradigm in the Semantic Web and in social networks, a user can get a large number of answers in response to a query. These answers can be shown to the user in accordance with an importance ranking. In this thesis proposal, we present four different scoring models along with scalable algorithms that find the top-k answers via a suite of intelligent pruning techniques. The suggested models cover a practically important subset of the SPARQL query language augmented with some additional useful features. The first model, called Substitution Importance Query (SIQ), identifies the top-k answers whose scores are calculated from the matched vertices' properties in each answer, in accordance with a user-specified notion of importance. The second model, called Vertex Importance Query (VIQ), identifies important vertices in accordance with a user-defined scoring method that builds on various subgraphs articulated by the user. Approximate Importance Query (AIQ), our third model, allows partial and inexact matchings and returns the top-k of them under user-specified approximation terms and scoring functions. In the fourth model, called Probabilistic Importance Query (PIQ), a query consists of several sub-blocks: one mandatory block that must be mapped and other blocks that can be opportunistically mapped. A probability is calculated from various aspects of the answers, such as the number of mapped blocks and the vertices' properties in each block, and the top-k most probable answers are returned. An important distinguishing feature of our work is that we allow the user a great amount of freedom in specifying: (i) what pattern and approximation he considers important, (ii) how to score answers, irrespective of whether they are vertices or substitutions, and (iii) how to combine and aggregate scores generated by multiple patterns and/or multiple substitutions. Because so much power is given to the user, indexing is more challenging than in situations where additional restrictions are imposed on the queries the user can ask. The proposed algorithms for the first model can also be used for answering SPARQL queries with ORDER BY and LIMIT, and the method for the second model also works for SPARQL queries with GROUP BY, ORDER BY and LIMIT. We test our algorithms on multiple real-world graph databases, showing that they are far more efficient than popular triple stores.
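A toy sketch of the kind of top-k scored subgraph matching the first model (SIQ) describes: match a pattern over an edge-labeled graph stored as triples, score each answer from the matched vertices' properties, and keep the k best. The graph, properties, and scoring function are invented, and none of the thesis's pruning techniques are shown.

```python
# Illustrative top-k answering over an edge-labeled graph stored as triples.
# The scoring function stands in for the user-specified importance of the
# SIQ model; it is a toy, not the thesis's pruning algorithms.
import heapq

triples = [("alice", "follows", "bob"), ("alice", "follows", "carol"),
           ("dan", "follows", "bob")]
vertex_props = {"alice": {"pagerank": 0.4}, "bob": {"pagerank": 0.9},
                "carol": {"pagerank": 0.2}, "dan": {"pagerank": 0.3}}

def match(pattern):
    """Match a single-edge pattern (?x, label, ?y) against the triples."""
    _, label, _ = pattern
    return [(s, o) for s, p, o in triples if p == label]

def top_k(pattern, score, k=2):
    """Score each substitution by vertex properties and keep the k best."""
    answers = ((score(s, o), (s, o)) for s, o in match(pattern))
    return heapq.nlargest(k, answers)

# Importance: sum of the PageRank of the matched vertices (user-defined).
print(top_k(("?x", "follows", "?y"),
            lambda s, o: vertex_props[s]["pagerank"] + vertex_props[o]["pagerank"]))
```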
Abstract:
A collaboration between dot.rural at the University of Aberdeen and the iSchool at Northumbria University, POWkist is a pilot study exploring potential uses of currently available linked datasets within the cultural heritage domain. Many privately held family history collections (shoebox archives) remain vulnerable unless a sustainable, affordable and accessible model of citizen-archivist digital preservation can be offered. Citizen-historians have used the web as a platform to preserve cultural heritage; however, with no accessible or sustainable model, these digital footprints have been ad hoc and rarely connected to broader historical research. Similarly, current approaches to connecting material on the web by exploiting linked datasets do not take into account the data characteristics of the cultural heritage domain. Funded by Semantic Media, the POWkist project is investigating how best to capture, curate, connect and present the contents of citizen-historians' shoebox archives in an accessible and sustainable online collection. Using the Curios platform, an open-source digital archive, we have digitised a collection relating to a prisoner of war during WWII (1939-1945). Following a series of user-group workshops, POWkist is now connecting these 'made digital' items with the broader web using a semantic technology model and identifying appropriate linked datasets of relevant content, such as DBPedia (a linked dataset derived from Wikipedia) and Ordnance Survey Open Data. We are analysing the characteristics of cultural heritage linked datasets so that these materials are better visualised, contextualised and presented in an attractive and comprehensive user interface. Our paper will consider the issues we have identified and the solutions we are developing, and will include a demonstration of our work in progress.
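As a hedged illustration of connecting archive items to a linked dataset such as DBPedia, here is a minimal query against DBpedia's public SPARQL endpoint using only the standard library; the chosen resource and query are examples, not POWkist's actual pipeline.

```python
# Sketch of connecting an archive item to linked data: ask DBpedia's public
# SPARQL endpoint for the abstract of a WWII-related resource. Illustrative
# only; the POWkist pipeline and its queries are not reproduced here.
import json
import urllib.parse
import urllib.request

query = """
SELECT ?abstract WHERE {
  <http://dbpedia.org/resource/Prisoner_of_war> dbo:abstract ?abstract .
  FILTER (lang(?abstract) = "en")
} LIMIT 1
"""
url = ("https://dbpedia.org/sparql?format=json&query=" +
       urllib.parse.quote(query))
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)
print(data["results"]["bindings"][0]["abstract"]["value"][:200])
```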
Abstract:
This article presents the analysis of a Semantic Inference Algorithm used in an Audiovisual Content Recommender System in the context of Digital Television. The results obtained show that the inclusion of different semantic properties and their combinations directly influences the reduction of the mean absolute error obtained when predicting the rating a user gives to a given item. It was also determined that the Actor property has a greater impact than the other properties analyzed.
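For illustration, a toy computation of the mean-absolute-error reduction described, with invented ratings; the "with Actor" predictor here is a stand-in, not the analyzed algorithm.

```python
# Toy illustration of the evaluation described: compare the mean absolute
# error (MAE) of rating predictions with and without a semantic property
# such as Actor. Data and predictions are invented for illustration.
def mae(predictions, actuals):
    return sum(abs(p - a) for p, a in zip(predictions, actuals)) / len(actuals)

actual = [4.0, 2.0, 5.0, 3.0]
baseline = [3.0, 3.0, 3.0, 3.0]            # predictor without semantic properties
with_actor = [3.8, 2.4, 4.6, 3.1]          # adds an Actor-similarity term

print(f"MAE baseline:   {mae(baseline, actual):.3f}")   # 1.000
print(f"MAE with Actor: {mae(with_actor, actual):.3f}") # 0.275
```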
Abstract:
A group of four applications. Top 20 Pedestrian Crash Locations: this application displays the top 20 pedestrian crash locations in both a map view and a detailed information view. FDOT Crash Reporting Tool: this application simplifies the use and sharing of CAR data; it can load raw data from CAR and display it in a web map interface. FDOT Online Document Portal: this application lets FDOT project managers share and manage documents through a user-friendly, GIS-enabled web interface. GIS Data Collection for Pedestrian Safety Tool: the FIU GIS Center was responsible for data collection and processing for the Pedestrian Safety Tool project; the outcome of this task is presented through a simple web-GIS application designed to host GIS data by project.
Abstract:
With the exponential growth in the usage of web-based map services, web GIS applications have become more and more popular. Spatial data indexing, search, analysis, visualization, and the resource management of such services are increasingly important for delivering the user-desired Quality of Service (QoS). First, spatial indexing is typically time-consuming and is not available to end users. To address this, we introduce TerraFly sksOpen, an open-sourced online indexing and querying system for big geospatial data. Integrated with the TerraFly geospatial database [1-9], sksOpen is an efficient indexing and query engine for processing Top-k Spatial Boolean Queries. Further, we provide ergonomic visualization of query results on interactive maps to facilitate the user's data analysis. Second, due to the highly complex and dynamic nature of GIS systems, it is quite challenging for end users to quickly understand and analyze spatial data, and to efficiently share their own data and analysis results with others. Built on the TerraFly geospatial database, TerraFly GeoCloud is an extra layer running on top of the TerraFly map that efficiently supports many different visualization functions and spatial data analysis models. Furthermore, users can create unique URLs to visualize and share analysis results. TerraFly GeoCloud also provides the MapQL technology for customizing map visualization using SQL-like statements [10]. Third, map systems often serve dynamic web workloads and involve multiple CPU- and I/O-intensive tiers, which makes it challenging to meet the response-time targets of map requests while using resources efficiently. Virtualization facilitates the deployment of web map services and improves their resource utilization through encapsulation and consolidation. Autonomic resource management allows resources to be automatically provisioned to a map service and its internal tiers on demand. v-TerraFly is a set of techniques to predict the demand of map workloads online and optimize resource allocations, considering both response time and data freshness as the QoS target. The proposed v-TerraFly system is prototyped on TerraFly, a production web map service, and evaluated using real TerraFly workloads. The results show that v-TerraFly can predict workload demands accurately (18.91% more accurate) and allocate resources efficiently to meet the QoS target, improving QoS by 26.19% and saving 20.83% in resource usage compared to traditional peak-load-based resource allocation.
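A toy version of the Top-k Spatial Boolean Query that sksOpen processes: filter objects by required keywords, then rank by distance to the query point. The data is invented, and sksOpen's actual index structures and pruning are not reproduced.

```python
# Toy Top-k Spatial Boolean Query: keep objects containing all required
# keywords, then return the k nearest to the query point. sksOpen's actual
# index structures and pruning are far more sophisticated.
import heapq
import math

places = [
    {"name": "Cafe A", "xy": (2.0, 2.0), "tags": {"cafe", "wifi"}},
    {"name": "Cafe B", "xy": (0.5, 0.5), "tags": {"cafe"}},
    {"name": "Cafe C", "xy": (1.0, 1.0), "tags": {"cafe", "wifi"}},
]

def top_k_spatial_boolean(query_xy, required, k=2):
    # Boolean part: every required keyword must appear in the object's tags.
    qualifying = (p for p in places if required <= p["tags"])
    # Spatial part: rank the qualifying objects by Euclidean distance.
    return heapq.nsmallest(k, qualifying,
                           key=lambda p: math.dist(query_xy, p["xy"]))

for p in top_k_spatial_boolean((0.0, 0.0), {"cafe", "wifi"}):
    print(p["name"])  # Cafe C, then Cafe A (Cafe B lacks "wifi")
```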
Abstract:
Conventional web search engines are centralised in that a single entity crawls and indexes the documents selected for future retrieval, as well as the relevance models used to determine which documents are relevant to a given user query. As a result, these search engines suffer from several technical drawbacks, such as handling scale, timeliness and reliability, in addition to ethical concerns such as commercial manipulation and information censorship. Alleviating the need to rely entirely on a single entity, Peer-to-Peer (P2P) Information Retrieval (IR) has been proposed as a solution, as it distributes the functional components of a web search engine, from crawling and indexing documents to query processing, across the network of users (or peers) who use the search engine. This strategy for constructing an IR system poses several efficiency and effectiveness challenges which have been identified in past work. Accordingly, this thesis makes several contributions towards advancing the state of the art in P2P-IR effectiveness by improving the query processing and relevance scoring aspects of P2P web search. Federated search systems are a form of distributed information retrieval model that routes the user's information need, formulated as a query, to distributed resources and merges the retrieved result lists into a final list. P2P-IR networks are one form of federated search, routing queries and merging results among participating peers. The query is propagated through disseminated nodes to hit the peers that are most likely to contain relevant documents, and the retrieved result lists are then merged at different points along the path from the relevant peers back to the query initiator (or customer). However, query routing is considered one of the major challenges and a critical part of P2P-IR networks, as relevant peers might be lost through low-quality peer selection during query routing, inevitably leading to less effective retrieval results. This motivates this thesis to study and propose query routing techniques that improve retrieval quality in such networks. Cluster-based semi-structured P2P-IR networks exploit the cluster hypothesis to organise peers into semantically similar clusters, each managed by super-peers. In this thesis, I construct three semi-structured P2P-IR models and examine their retrieval effectiveness. I also leverage the cluster centroids at the super-peer level, as content representations gathered from cooperative peers, to propose a query routing approach called Inverted PeerCluster Index (IPI), which mimics the conventional inverted index of a centralised corpus in organising the statistics of peers' terms. The results show competitive retrieval quality in comparison to baseline approaches. Furthermore, I study the applicability of conventional Information Retrieval models as peer selection approaches, where each peer can be considered a big document of documents. The experimental evaluation shows competitive and significant results, and indicates that document retrieval methods are very effective for peer selection, reinforcing the analogy between documents and peers. Additionally, Learning to Rank (LtR) algorithms are exploited to build a learned classifier for peer ranking at the super-peer level. The experiments show significant results against state-of-the-art resource selection methods and competitive results against corresponding classification-based approaches.
Finally, I propose reputation-based query routing approaches that exploit the idea, familiar from social community networks, of providing feedback on a specific item and using it for future decision-making. The system monitors users' behaviour when they click or download documents from the final ranked list, treats this as implicit feedback, and mines the given information to build a reputation-based data structure. The data structure is used to score peers and then rank them for query routing. I conduct a set of experiments covering various scenarios, including noisy feedback information (i.e., positive feedback on non-relevant documents), to examine the robustness of the reputation-based approaches. The empirical evaluation shows significant results on almost all measurement metrics, with an approximate improvement of more than 56% over baseline approaches. Thus, based on the results, if one were to choose a single technique, reputation-based approaches are clearly the natural choice, and they can also be deployed on any P2P network.
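A minimal sketch of centroid-based query routing of the kind studied above, assuming toy centroid vectors: the query is routed to the super-peers whose cluster centroids score highest by cosine similarity. This is a simplification for illustration, not the IPI or reputation-based algorithms themselves.

```python
# Sketch of centroid-based query routing in a clustered P2P-IR network:
# route the query to the super-peers whose cluster centroids score highest.
# A simplification of the approaches studied, not the IPI algorithm itself.
import math

centroids = {  # super-peer -> term weights of its cluster centroid (toy data)
    "superpeer-1": {"football": 0.9, "league": 0.7},
    "superpeer-2": {"genome": 0.8, "protein": 0.9},
    "superpeer-3": {"football": 0.4, "injury": 0.6},
}

def cosine(q, c):
    dot = sum(q[t] * c.get(t, 0.0) for t in q)
    nq = math.sqrt(sum(v * v for v in q.values()))
    nc = math.sqrt(sum(v * v for v in c.values()))
    return dot / (nq * nc) if nq and nc else 0.0

def route(query_terms, n=2):
    """Select the n super-peers most similar to the query vector."""
    q = {t: 1.0 for t in query_terms}
    ranked = sorted(centroids, key=lambda sp: cosine(q, centroids[sp]),
                    reverse=True)
    return ranked[:n]

print(route(["football", "league"]))  # ['superpeer-1', 'superpeer-3']
```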
Abstract:
Technological evolution has driven an evolution in medicine, through computer systems designed for the storage, capture and availability of medical information. Medical reports are, most of the time, stored as unstructured free text and written with a personal vocabulary, which may lead to misinterpretation. Through Semantic Web languages, it is possible to use ontologies as a way of structuring and standardizing the information in medical reports by adding semantic annotations to it. The information contained in the reports can thus be published on the Web, allowing machines to process it automatically. However, the process of creating ontologies is quite complex, as there is a risk of creating an ontology that does not cover the whole intended domain. This work focuses on the creation of an ontology and its population, using NLP and Machine Learning techniques to extract information from medical reports. An application was developed that allows the user to convert reports from digital format to OWL format.
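A minimal sketch of the report-to-OWL conversion step, assuming the rdflib library and an invented ontology namespace; the real application's ontology and extraction pipeline are not reproduced here.

```python
# Sketch of ontology population as described: information extracted from a
# medical report becomes individuals/assertions, serialized as OWL (RDF/XML).
# Class and property names are invented for illustration; requires rdflib.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/medical#")  # hypothetical ontology IRI

g = Graph()
g.bind("ex", EX)

# Suppose NLP extraction found a finding in report #42's free text:
report, finding = EX.report42, EX.finding42_1
g.add((report, RDF.type, EX.MedicalReport))
g.add((finding, RDF.type, EX.Finding))
g.add((finding, EX.describedIn, report))
g.add((finding, EX.bodyPart, Literal("left lung")))

g.serialize(destination="report42.owl", format="xml")  # OWL as RDF/XML
```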