754 results for content centric
Abstract:
The process of developing software that takes advantage of multiple processors is commonly referred to as parallel programming. For various reasons, this process is much harder than the sequential case. For decades, parallel programming was a problem for a small niche only: engineers parallelizing mostly numerical applications in High Performance Computing. This has changed with the advent of multi-core processors in mainstream computer architectures. Parallel programming is now becoming a problem for a much larger group of developers. The main objective of this thesis was to find ways to make parallel programming easier for them. Different aims were identified in order to reach this objective: research the state of the art of parallel programming today, improve the education of software developers on the topic, and provide programmers with powerful abstractions to make their work easier. To reach these aims, several key steps were taken. To start with, a survey was conducted among parallel programmers to find out about the state of the art. More than 250 people participated, yielding results about the parallel programming systems and languages in use, as well as about common problems with these systems. Furthermore, a study was conducted in university classes on parallel programming. It resulted in a list of frequently made mistakes that were analyzed and used to create a programmers' checklist to help avoid them in the future. For programmers' education, an online resource called the Parawiki was set up to collect experiences and knowledge in the field of parallel programming. Another key step in this direction was the creation of the Thinking Parallel weblog, where more than 50,000 readers to date have read essays on the topic. For the third aim (powerful abstractions), it was decided to concentrate on one parallel programming system: OpenMP. Its ease of use and high level of abstraction were the most important reasons for this decision. Two different research directions were pursued. The first resulted in a parallel library called AthenaMP. It contains so-called generic components, derived from design patterns for parallel programming. These include functionality to enhance the locks provided by OpenMP, to perform operations on large amounts of data (data-parallel programming), and to enable the implementation of irregular algorithms using task pools. AthenaMP itself serves a triple role: its components are well documented and can be used directly in programs, developers can study its source code and learn from it, and compiler writers can use it as a testing ground for their OpenMP compilers. The second research direction was targeted at changing the OpenMP specification to make the system more powerful. The main contributions here were a proposal to enable thread cancellation and a proposal to avoid busy waiting. Both were implemented in a research compiler, shown to be useful in example applications, and proposed to the OpenMP Language Committee.
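As a hedged illustration of the kind of generic lock component the abstract describes, the sketch below wraps OpenMP's low-level lock primitives in a C++ RAII guard so the lock is always released when the scope ends. AthenaMP's actual API is not shown here; the class name and structure are assumptions for illustration only.

#include <omp.h>
#include <cstdio>

// Hypothetical RAII guard around OpenMP's lock primitives; AthenaMP's
// real components are not reproduced here.
class ScopedOmpLock {
    omp_lock_t& lock_;
public:
    explicit ScopedOmpLock(omp_lock_t& l) : lock_(l) { omp_set_lock(&lock_); }
    ~ScopedOmpLock() { omp_unset_lock(&lock_); }
    ScopedOmpLock(const ScopedOmpLock&) = delete;
    ScopedOmpLock& operator=(const ScopedOmpLock&) = delete;
};

int main() {
    omp_lock_t lock;
    omp_init_lock(&lock);
    int counter = 0;

    #pragma omp parallel for
    for (int i = 0; i < 1000; ++i) {
        ScopedOmpLock guard(lock);  // released automatically at end of iteration
        ++counter;
    }

    omp_destroy_lock(&lock);
    std::printf("counter = %d\n", counter);  // always 1000
    return 0;
}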
Abstract:
Recently, research projects such as PADLR and SWAP have developed tools like Edutella or Bibster, which are targeted at establishing peer-to-peer knowledge management (P2PKM) systems. In such a system, it is necessary to provide brief semantic descriptions of peers, so that routing algorithms or matchmaking processes can make decisions about which communities peers should belong to, or to which peers a given query should be forwarded. This paper proposes the use of graph clustering techniques on knowledge bases for that purpose. Using this clustering, we show that our strategy requires up to 58% fewer queries than the baselines to yield full recall in a bibliographic P2PKM scenario.
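The abstract does not give the paper's actual clustering algorithm; as a rough sketch of the idea, the fragment below derives a compact description of a peer's knowledge base by clustering a topic graph, using simple connected components via union-find as a naive stand-in for a real graph clustering technique. All names and edge data are illustrative assumptions.

#include <cstdio>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Naive stand-in for graph clustering: connected components via union-find.
struct UnionFind {
    std::map<std::string, std::string> parent;
    std::string find(const std::string& x) {
        if (!parent.count(x)) parent[x] = x;
        if (parent[x] != x) parent[x] = find(parent[x]);  // path compression
        return parent[x];
    }
    void unite(const std::string& a, const std::string& b) {
        parent[find(a)] = find(b);
    }
};

int main() {
    // Invented edges between topics in one peer's knowledge base.
    std::vector<std::pair<std::string, std::string>> edges = {
        {"p2p", "routing"}, {"routing", "networks"}, {"ontology", "rdf"}
    };
    UnionFind uf;
    for (const auto& e : edges) uf.unite(e.first, e.second);

    // Cluster labels would form the peer's brief semantic description.
    for (const auto& kv : uf.parent)
        std::printf("%s -> cluster %s\n", kv.first.c_str(), uf.find(kv.first).c_str());
    return 0;
}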
Abstract:
Dr. Maria N Ivanova; Professor Dr. Christoph Scherrer
Abstract:
Plain text: ASCII, Unicode, UTF-8. Content formats: XML-based formats (RSS, MathML, SVG, Office) and PDF. Text-based data formats: CSV, JSON.
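For illustration (this example is not from the original outline), the same record can be expressed in both text-based data formats listed above. CSV encodes it as a header row plus value rows:

id,name,score
1,alice,0.9

JSON encodes it as structured objects:

[{"id": 1, "name": "alice", "score": 0.9}]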
Abstract:
There is a wealth of open educational content in audio and video formats available via iTunes U, one of the services iTunes offers especially for education. Details of how to get started are provided, along with an informative video to help you; developers can also find details of how to get started with sharing content.
Abstract:
This is the first part of a two-part video from my talk in May 2008 on open-source content creation.
Abstract:
This is the second part of a two-part video from my talk in May 2008 on open-source content creation. Here I am talking about the Making of Doljer.
Abstract:
This document outlines the material covered by the main UK exam boards' A-level chemistry specifications. It covers the A-level taught up to and including June 2009 (i.e. relevant to undergraduates arriving at university in October 2009).
Abstract:
This document is a review of the content of the A-level Chemistry specifications from the main UK exam boards (Scottish Highers not included - sorry!). These A-level specifications were first taught from September 2008. Students entering university in 2010 will have studied the new A-levels, and this document is intended to help academics identify what students will have covered. The document also contains a summary of discussions between teachers and academics at our annual Post-16 teachers' day in June 2010 regarding the nature of the 2010 intake and their capabilities in chemistry. Please inform us of any errors or typos that you spot and we'll update the document. LAST UPDATE at 13:15 on Aug 27th 2010.
Abstract:
The use of electronic documents is constantly growing, and an ad-hoc eCertificate that manages access to private information has become a necessity. This paper presents a protocol for the management of electronic identities (eIDs), meant as a substitute for paper-based IDs, in a mobile environment with a user-centric approach. Mobile devices have been chosen because they provide mobility, personal use and considerable computational capability. The inherent user-centricity also allows the user to personally manage the ID information and to display only what is required. The chosen path to develop the protocol is to migrate the existing eCert technologies implemented by the Learning Societies Laboratory in Southampton. By comparing this protocol with the analysis of the eID problem domain, a new solution has been derived which is compatible with both systems without loss of features.
Abstract:
A Table of Contents can be tweaked so that it picks up content from only part of a file (such as an Appendix). This video shows you how to make such a change to a Table of Contents that is based upon Heading Styles. For best viewing, download the video.
Abstract:
Description of student-generated content which you can choose to submit.
Abstract:
Getting content from server to client can be more complicated than we have discussed so far. This lecture discusses how caching and content delivery networks help to make the Web work.
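To make the caching idea concrete, here is a minimal sketch (an illustrative assumption, not the lecture's material) of a TTL-based cache keyed by URL, of the kind a browser, proxy, or CDN edge node keeps so that repeat requests need not travel to the origin server. Real HTTP caches also honour Cache-Control headers, validators such as ETag and Last-Modified, and eviction policies.

#include <chrono>
#include <optional>
#include <string>
#include <unordered_map>

class TtlCache {
    struct Entry {
        std::string body;
        std::chrono::steady_clock::time_point expires;
    };
    std::unordered_map<std::string, Entry> entries_;
    std::chrono::seconds ttl_;
public:
    explicit TtlCache(std::chrono::seconds ttl) : ttl_(ttl) {}

    // Miss or stale entry: caller must fetch from the origin server.
    std::optional<std::string> get(const std::string& url) {
        auto it = entries_.find(url);
        if (it == entries_.end() ||
            std::chrono::steady_clock::now() > it->second.expires)
            return std::nullopt;
        return it->second.body;  // hit: served without touching the origin
    }

    void put(const std::string& url, std::string body) {
        entries_[url] = {std::move(body),
                         std::chrono::steady_clock::now() + ttl_};
    }
};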
Abstract:
Abstract 1: Social networks such as Twitter are often used for disseminating and collecting information during natural disasters, and the potential for their use in disaster management has been acknowledged. However, a more nuanced understanding of the communications that take place on social networks is required to integrate this information more effectively into disaster management processes. The type and value of information shared should be assessed, determining the benefits and issues, with credibility and reliability as known concerns. Mapping the tweets in relation to the modelled stages of a disaster can be a useful evaluation for determining the benefits and drawbacks of using data from social networks, such as Twitter, in disaster management. A thematic analysis of tweets' content, language and tone during the UK Storms and Floods 2013/14 was conducted. Manual scripting was used to determine the official sequence of events, and to classify the stages of the disaster into the phases of the Disaster Management Lifecycle, producing a timeline. Twenty-five topics discussed on Twitter emerged, and three key types of tweets, based on language and tone, were identified. The timeline represents the events of the disaster, according to the Met Office reports, classed into B. Faulkner's Disaster Management Lifecycle framework. Context is provided when observing the analysed tweets against the timeline, illustrating a potential basis and benefit for mapping tweets into the Disaster Management Lifecycle phases. Comparing the number of tweets submitted in each month with the timeline suggests that users tweet more as an event heightens and persists; furthermore, users generally express greater emotion and urgency in their tweets. This paper concludes that the thematic analysis of content on social networks, such as Twitter, can be useful in gaining additional perspectives for disaster management. It demonstrates that mapping tweets into the phases of a Disaster Management Lifecycle model can have benefits in the recovery phase, not just the response phase, to potentially improve future policies and activities.

Abstract 2: The current execution of privacy policies, as a mode of communicating information to users, is unsatisfactory. Social networking sites (SNS) exemplify this issue, attracting growing concerns regarding their use of personal data and its effect on user privacy. This demonstrates the need for more informative policies. However, SNS lack the incentives required to improve policies, which is exacerbated by the difficulty of creating a policy that is both concise and compliant. Standardization addresses many of these issues, providing benefits for users and SNS, although it is only possible if policies share attributes which can be standardized. This investigation used thematic analysis and cross-document structure theory to assess the similarity of attributes between the privacy policies (as available in August 2014) of the six most frequently visited SNS globally. Using the Jaccard similarity coefficient, two types of attribute were measured: the clauses used by SNS and the coverage of forty recommendations made by the UK Information Commissioner's Office. Analysis showed that whilst similarity in the clauses used was low, similarity in the recommendations covered was high, indicating that SNS use different clauses to convey similar information. The analysis also showed that low similarity in the clauses was largely due to differences in semantics, elaboration and functionality between SNS. This paper therefore proposes that the policies of SNS already share attributes, indicating the feasibility of standardization, and five recommendations are made to begin facilitating this, based on the findings of the investigation.
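The Jaccard similarity coefficient used in the second abstract is J(A, B) = |A intersect B| / |A union B|. As a short sketch (the clause sets below are invented for illustration, not taken from the study), it can be computed over two sites' sets of policy clauses like this:

#include <algorithm>
#include <cstdio>
#include <iterator>
#include <set>
#include <string>
#include <vector>

// J(A, B) = |A ∩ B| / |A ∪ B|; by convention, two empty sets give 1.0.
double jaccard(const std::set<std::string>& a, const std::set<std::string>& b) {
    std::vector<std::string> inter, uni;
    std::set_intersection(a.begin(), a.end(), b.begin(), b.end(),
                          std::back_inserter(inter));
    std::set_union(a.begin(), a.end(), b.begin(), b.end(),
                   std::back_inserter(uni));
    return uni.empty() ? 1.0
                       : static_cast<double>(inter.size()) / uni.size();
}

int main() {
    // Hypothetical clause sets for two social networking sites.
    std::set<std::string> sns1 = {"data collection", "third parties", "cookies"};
    std::set<std::string> sns2 = {"data collection", "cookies", "advertising"};
    std::printf("J = %.2f\n", jaccard(sns1, sns2));  // 2 shared / 4 total = 0.50
    return 0;
}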