9 results for Metadata

in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast


Relevance:

20.00%

Abstract:

Following earlier work demonstrating the utility of Orc as a means of specifying and reasoning about grid applications, we propose enhancing such specifications with metadata that extend an Orc specification with implementation-oriented information. We argue that such specifications provide a useful refinement step, allowing reasoning about implementation-related issues ahead of actual implementation or even prototyping. As examples, we demonstrate how such extended specifications can be used for investigating security-related issues and for evaluating the cost of handling grid resource faults. The approach emphasises a semi-formal style of reasoning that makes maximum use of programmer domain knowledge and experience.
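
By way of illustration only (not drawn from the paper, and using made-up site names and figures), the Python sketch below shows how implementation-oriented metadata might be attached to the named resources of a grid specification and then used for the kind of pre-implementation reasoning the abstract describes, such as estimating fault-handling cost or spotting a crossed security boundary.

# Hypothetical illustration only: the paper extends Orc specifications with
# metadata; here a plain Python dictionary stands in for such annotations,
# attaching implementation-oriented attributes to named grid sites and using
# them for a back-of-the-envelope fault-handling cost estimate.
site_metadata = {
    "DataStore":   {"security_domain": "internal", "fault_rate": 0.01, "retry_cost_s": 2.0},
    "ComputeFarm": {"security_domain": "external", "fault_rate": 0.05, "retry_cost_s": 10.0},
}

def expected_fault_overhead(sites, calls_per_site=100):
    """Estimate extra seconds spent retrying failed grid calls."""
    return sum(
        calls_per_site * meta["fault_rate"] * meta["retry_cost_s"]
        for meta in sites.values()
    )

def crosses_security_boundary(sites):
    """Flag whether the specification mixes security domains."""
    return len({meta["security_domain"] for meta in sites.values()}) > 1

if __name__ == "__main__":
    print("Expected fault-handling overhead (s):", expected_fault_overhead(site_metadata))
    print("Crosses security boundary:", crosses_security_boundary(site_metadata))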

Relevance:

20.00%

Abstract:

This book contains the Exif, XMP, and IPTC metadata extracted from the 100 digital surrogates featured in Display At Your Own Risk, an online exhibition experiment. In some cases, the metadata is extensive, almost overwhelming; in others, little to no metadata was embedded in the digital surrogate's file at all. Preparing this book to accompany the Display At Your Own Risk exhibition made us realise that metadata can be beautiful. We hope you find beauty here too.
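
As a minimal sketch of how embedded Exif metadata can be read from an image file, using the Pillow library rather than whatever tooling produced the book and with a hypothetical file name, the following Python snippet prints whatever Exif tags a digital surrogate carries (XMP and IPTC blocks require separate tooling).

# A minimal sketch of reading embedded Exif metadata from an image file with
# Pillow; the file name below is hypothetical.
from PIL import Image
from PIL.ExifTags import TAGS

def read_exif(path):
    """Return a dict of human-readable Exif tag names to values, if any."""
    with Image.open(path) as img:
        exif = img.getexif()
        return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

if __name__ == "__main__":
    for name, value in read_exif("surrogate_001.jpg").items():
        print(f"{name}: {value}")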

Relevance:

10.00%

Abstract:

We present a multimodal detection and tracking algorithm for sensors composed of a camera mounted between two microphones. Target localization is performed on color-based change detection in the video modality and on time difference of arrival (TDOA) estimation between the two microphones in the audio modality. The TDOA is computed by multiband generalized cross correlation (GCC) analysis. The estimated directions of arrival are then postprocessed using a Riccati Kalman filter. The visual and audio estimates are finally integrated, at the likelihood level, into a particle filter (PF) that uses a zero-order motion model, and a weighted probabilistic data association (WPDA) scheme. We demonstrate that the Kalman filtering (KF) improves the accuracy of the audio source localization and that the WPDA helps to enhance the tracking performance of sensor fusion in reverberant scenarios. The combination of multiband GCC, KF, and WPDA within the particle filtering framework improves the performance of the algorithm in noisy scenarios. We also show how the proposed audiovisual tracker summarizes the observed scene by generating metadata that can be transmitted to other network nodes instead of transmitting the raw images and can be used for very low bit rate communication. Moreover, the generated metadata can also be used to detect and monitor events of interest.
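
The snippet below is a minimal GCC-PHAT sketch of TDOA estimation between two microphone signals, intended only to illustrate the idea; the paper's multiband GCC analysis, Kalman post-processing and particle filter are not reproduced, and the test signal is synthetic.

# A minimal GCC-PHAT sketch for estimating the time difference of arrival
# (TDOA) of one microphone signal relative to another.
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None, interp=16):
    """Return the estimated delay (seconds) of `sig` relative to `ref`."""
    n = sig.size + ref.size
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-15                      # PHAT weighting
    cc = np.fft.irfft(R, n=interp * n)          # interpolated cross-correlation
    max_shift = interp * n // 2
    if max_tau is not None:
        max_shift = min(int(interp * fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / float(interp * fs)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fs = 16000
    ref = rng.standard_normal(fs)               # 1 s of broadband noise
    sig = np.roll(ref, 8)                       # delayed copy, 8 samples = 0.5 ms
    print("Estimated TDOA (s):", gcc_phat(sig, ref, fs))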

Relevance:

10.00%

Abstract:

The LIPARM schema links the parliamentary record together for the first time by creating a unified metadata scheme for all of its key elements. People, bills, acts, items of business, debates, divisions and sessions will all be described by the scheme and linked together across resources that are currently dispersed and isolated. It will become possible to trace a given MP’s entire voting record or to find every speech they made, and to follow the passage of every bill or act and every contribution to the debates that accompany it. Both the historical and the contemporary record of parliamentary proceedings will become accessible in this way for the first time.
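
Purely as a hypothetical illustration of the kind of linkage such a unified scheme enables (this is not the LIPARM schema itself), the toy Python data model below links people, divisions and votes by shared identifiers so that a member's voting record can be assembled from otherwise separate records.

# Toy data model, not the LIPARM schema: shared identifiers link members,
# divisions and votes so a voting record can be traced across resources.
from dataclasses import dataclass

@dataclass
class Member:
    member_id: str
    name: str

@dataclass
class Division:
    division_id: str
    session: str
    bill: str

@dataclass
class Vote:
    member_id: str        # links to Member
    division_id: str      # links to Division
    position: str         # "aye" or "no"

members = [Member("m1", "Example MP")]
divisions = [Division("d1", "1886", "Example Bill"), Division("d2", "1887", "Another Bill")]
votes = [Vote("m1", "d1", "aye"), Vote("m1", "d2", "no")]

def voting_record(member_id, votes, divisions):
    """Collect every division a member voted in, with their position."""
    by_id = {d.division_id: d for d in divisions}
    return [(by_id[v.division_id].bill, v.position) for v in votes if v.member_id == member_id]

print(voting_record("m1", votes, divisions))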

Relevance:

10.00%

Abstract:

The principal feature in the evolution of the internet has been its ever-growing reach, encompassing old and young, rich and poor. The internet’s ever-encroaching presence has carried it from our desktops to our pockets and into our glasses. This is illustrated in the Internet Society Questionnaire on Multistakeholder Governance, which found that the main factors driving change in the Internet governance landscape were more users coming online from more countries and the growing influence of the internet over daily life. The omnipresence of the internet is self-perpetuating; its usefulness grows with every new user and every new piece of data uploaded. The advent of social media, and the creation of a virtual presence for each of us even when we are not physically present or ‘logged on’, means we are fast approaching the point where we are all connected, to everyone else, all the time. We have moved far beyond the point where governments can claim to represent our views, which evolve constantly rather than being measured in electoral cycles.
This shift, which has seen citizens become creators of content rather than merely consumers of it, has undermined the centralist view of democracy and created an environment of wiki democracy or crowd-sourced democracy. This lies at the heart of what is generally known as Web 2.0 and is widely considered a positive, democratising force. However, we argue, there are worrying elements here too. Government does not always deliver on the promise of the networked society as it involves citizens and others in the process of government. A number of key internet companies have also emerged as powerful intermediaries, harnessing the efforts of the many and re-using and re-selling the products and data of content providers in the Web 2.0 environment. A discourse about openness and transparency has been offered as a democratising rationale, but much of this masks an uneven relationship in which the value of online activity flows not to the creators of content but to those who own the channels of communication and the metadata that they produce.
In this context the state is just one stakeholder in the mix of influencers and opinion formers shaping our behaviours, and indeed our ideas of what is public. The question of what it means to create or own something, and how all these new relationships are to be ordered and governed, is subject to fundamental change. While government can often appear slow, unwieldy and even irrelevant in much of this context, there remains a need for some form of political control to deal with the challenges that technology creates but cannot by itself resolve. For the internet to continue to evolve successfully, both technically and socially, it is critical that the multistakeholder nature of internet governance be understood and acknowledged, and perhaps to an extent re-balanced. Stakeholders can no longer be classified under the broad headings of government, private sector and civil society, with their roles seen as some sort of benign and open co-production. Each user of the internet has a stake in its efficacy, and each, by their presence and participation, contributes to the experience, positive or negative, of other users as well as to the commercial success or otherwise of various online service providers. However, stakeholders have neither an equal role nor an equal share. The unequal relationship between the providers of content and those who simply package up and transmit that content, while harvesting the valuable data thus produced, needs to be addressed. Arguably this suggests a role for government that moves beyond simply celebrating and facilitating the ongoing technological revolution. This paper reviews the shifting landscape of stakeholders and their contribution to the efficacy of the internet. It critically evaluates the primacy of the individual as the key stakeholder and their supposed growing empowerment within the ever-expanding sea of data, and it considers the role of individuals in wider governance. Governments in a number of jurisdictions have sought to engage, consult or empower citizens through technology, but in general these attempts have had little appeal; citizens have been too busy engaging, consulting and empowering each other to pay much attention to what their governments are up to. George Orwell’s vision of the future has not come to pass; in fact, the internet has ensured the opposite. There is no Big Brother, but we are all looking over each other’s shoulders all the time, while a number of big corporations capture and sell all this collective endeavour back to us.

Relevance:

10.00%

Abstract:

Long-term precipitation series are critical for understanding emerging changes to the hydrological cycle. To this end we construct a homogenized Island of Ireland Precipitation (IIP) network comprising 25 stations and a composite series covering the period 1850–2010, providing the second-longest regional precipitation archive in the British-Irish Isles. We expand the existing catalogue of long-term precipitation records for the island by recovering archived data for an additional eight stations. Following bridging and updating of stations, the HOMogenisation softwarE in R (HOMER) package is used to detect breaks through pairwise and joint detection. A total of 25 breakpoints are detected across 14 stations, and the majority (20) are corroborated by metadata. Assessment of variability and change in the homogenized and extended precipitation records reveals positive (winter) and negative (summer) trends. Trends in records covering the typical period of digitization (1941 onwards) are not always representative of longer records. Furthermore, trends in post-homogenization series change magnitude, and even direction, at some stations. While cautionary flags are raised for some series, confidence in the derived network is high given the attention paid to metadata, the coherence of behaviour across the network and the consistency of findings with other long-term climatic series such as England and Wales precipitation. As far as we are aware, this work represents the first application of HOMER to a long-term precipitation network and bodes well for its use in other regions. It is expected that the homogenized IIP network will find wider utility in benchmarking and supporting climate services across the Island of Ireland, a sentinel location in the North Atlantic.
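
HOMER itself is an R package and is not reproduced here; the short numpy sketch below only illustrates the pairwise idea behind relative homogenization, flagging a candidate breakpoint where the cumulative sum of a standardized candidate-minus-neighbour difference series departs furthest from zero, using invented station data.

# Not HOMER: a minimal sketch of pairwise break detection on a
# candidate-minus-neighbour difference series via a cumulative sum.
import numpy as np

def candidate_break(candidate, neighbour, years):
    """Return (year, CUSUM magnitude) of the most likely break in `candidate`."""
    diff = np.asarray(candidate) - np.asarray(neighbour)
    z = (diff - diff.mean()) / diff.std()
    cusum = np.cumsum(z)
    k = int(np.argmax(np.abs(cusum)))
    return years[k], float(abs(cusum[k]))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    years = np.arange(1850, 2011)
    neighbour = 1000 + 50 * rng.standard_normal(years.size)    # mm/year, invented
    candidate = neighbour + 20 * rng.standard_normal(years.size)
    candidate[years >= 1941] += 80                              # artificial inhomogeneity
    print(candidate_break(candidate, neighbour, years))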

Relevance:

10.00%

Abstract:

Discussion forums have evolved into a dependable source of knowledge to solve common problems. However, only a minority of the posts in discussion forums are solution posts. Identifying solution posts from discussion forums, hence, is an important research problem. In this paper, we present a technique for unsupervised solution post identification leveraging a so far unexplored textual feature, that of lexical correlations between problems and solutions. We use translation models and language models to exploit lexical correlations and solution post character respectively. Our technique is designed to not rely much on structural features such as post metadata since such features are often not uniformly available across forums. Our clustering-based iterative solution identification approach based on the EM-formulation performs favorably in an empirical evaluation, beating the only unsupervised solution identification technique from literature by a very large margin. We also show that our unsupervised technique is competitive against methods that require supervision, outperforming one such technique comfortably.
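
The toy Python example below is not the paper's EM formulation; it merely illustrates how a simple "solution" language model and a crude problem-reply lexical-correlation term could be combined to rank candidate replies, using invented thread text.

# Toy ranking of candidate replies: a Laplace-smoothed unigram "solution"
# language model combined with a problem-reply lexical-overlap score.
import math
from collections import Counter

def unigram_lm(texts):
    counts = Counter(w for t in texts for w in t.lower().split())
    total = sum(counts.values())
    vocab = len(counts)
    # Laplace-smoothed log-probability of a word under the "solution" model
    return lambda w: math.log((counts[w] + 1) / (total + vocab + 1))

def score(problem, reply, log_p, lam=0.7):
    words = reply.lower().split()
    lm = sum(log_p(w) for w in words) / max(len(words), 1)
    overlap = len(set(words) & set(problem.lower().split())) / max(len(set(words)), 1)
    return lam * lm + (1 - lam) * overlap

solution_examples = ["reinstall the driver and reboot", "set the PATH variable then restart the shell"]
log_p = unigram_lm(solution_examples)
problem = "my shell cannot find the compiler after install"
replies = ["same problem here, any luck?", "add the compiler directory to PATH and restart the shell"]
print(max(replies, key=lambda r: score(problem, r, log_p)))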

Relevance:

10.00%

Abstract:

The newly updated inventory of palaeoecological research in Latin America offers an important overview of sites available for multi-proxy and multi-site purposes. From the literature supporting this inventory, we compiled all available age model metadata to create a chronological database of 5116 control points (e.g. 14C, tephra, fission track, OSL, 210Pb) from 1097 pollen records. Based on this literature review, we present a summary of chronological dating and reporting in the Neotropics, and discuss difficulties and recommendations for chronology reporting. Furthermore, for 234 pollen records in northwest South America, a classification system for age uncertainties is implemented based on chronologies generated with updated calibration curves. With these outcomes, age models are produced for sites without an existing chronology, alternative age models are provided for researchers interested in comparing the effects of different calibration curves and age–depth modelling software, and the importance of uncertainty assessments of chronologies is highlighted. Sample resolution and temporal uncertainty of ages are discussed for different time windows, focusing on events relevant to research on centennial- to millennial-scale climate variability. All age models and the developed R scripts are publicly available through figshare, including a manual for using the scripts.
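
The following sketch is not the authors' R scripts; it shows, with invented depths and ages, the simplest kind of age-depth model: linear interpolation of calibrated ages between dated control points to assign an age to each sampled depth.

# A minimal age-depth sketch: linear interpolation of calibrated ages
# between dated control points; all depths and ages are invented.
import numpy as np

# Dated control points: depth (cm) and calibrated age (cal yr BP)
control_depth = np.array([0.0, 55.0, 120.0, 210.0])
control_age = np.array([-60.0, 1200.0, 3500.0, 8200.0])

# Depths of pollen samples needing an age estimate
sample_depth = np.array([10.0, 80.0, 150.0, 200.0])

# Linear interpolation between control points (no extrapolation beyond them)
sample_age = np.interp(sample_depth, control_depth, control_age)

for d, a in zip(sample_depth, sample_age):
    print(f"{d:6.1f} cm -> {a:7.1f} cal yr BP")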