992 results for Metadata


Relevance:

10.00%

Publisher:

Abstract:

Event-based systems are seen as good candidates for supporting distributed applications in dynamic and ubiquitous environments because they support decoupled and asynchronous many-to-many information dissemination. Event systems are widely used, because asynchronous messaging provides a flexible alternative to RPC (Remote Procedure Call). They are typically implemented using an overlay network of routers. A content-based router forwards event messages based on filters that are installed by subscribers and other routers. The filters are organized into a routing table in order to forward incoming events to proper subscribers and neighbouring routers. This thesis addresses the optimization of content-based routing tables organized using the covering relation and presents novel data structures and configurations for improving local and distributed operation. Data structures are needed for organizing filters into a routing table that supports efficient matching and runtime operation. We present novel results on dynamic filter merging and the integration of filter merging with content-based routing tables. In addition, the thesis examines the cost of client mobility using different protocols and routing topologies. We also present a new matching technique called temporal subspace matching. The technique combines two new features. The first feature, temporal operation, supports notifications, or content profiles, that persist in time. The second feature, subspace matching, allows more expressive semantics, because notifications may contain intervals and be defined as subspaces of the content space. We also present an application of temporal subspace matching pertaining to metadata-based continuous collection and object tracking.
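
A minimal sketch may make the covering relation concrete. The following Python fragment is an illustration only, not the thesis's data structures; it assumes filters are conjunctions of per-attribute interval constraints, and a filter F1 covers F2 when every notification matched by F2 is also matched by F1, so only F1 needs to be forwarded upstream.

    # Illustration of the covering relation used to organize routing tables.
    # A filter maps attribute name -> (low, high) interval constraint;
    # attributes not mentioned are unconstrained.
    def covers(f1, f2):
        """True if f1 covers f2: every event matching f2 also matches f1."""
        for attr, (lo1, hi1) in f1.items():
            if attr not in f2:
                return False              # f2 is looser than f1 on this attribute
            lo2, hi2 = f2[attr]
            if not (lo1 <= lo2 and hi2 <= hi1):
                return False              # f2's interval is not contained in f1's
        return True

    # A broad subscription covers a narrower one, so a router only needs to
    # forward the broad filter to its neighbours.
    broad = {"temperature": (0, 100)}
    narrow = {"temperature": (20, 30), "room": (1, 5)}
    assert covers(broad, narrow) and not covers(narrow, broad)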

Relevance:

10.00%

Publisher:

Abstract:

This thesis examines whether the rules of evidence, which were developed around paper documents over centuries, are adequate for the authentication of electronic evidence. The history of documentary evidence is examined, and the nature of electronic evidence is explored, particularly recent types of electronic evidence such as social media and 'the Cloud'. The old rules are then critically applied to the varied types of electronic evidence to determine whether or not they are indeed adequate.

Relevance:

10.00%

Publisher:

Abstract:

The vastly increased popularity of the Internet as an effective publication and distribution channel for digital works has created serious challenges to enforcing intellectual property rights. Works are widely disseminated on the Internet, with and without permission. This thesis examines the current problems with licence management and copy protection and outlines a new method and system to solve these problems. The WARP system (Works, Authors, Royalties, and Payments) is based on global registration and transfer monitoring of digital works, and on the accounting and collection of usage fees, funded by an Internet levy, payable to the authors and right holders of the works. The detection and counting of downloads is implemented with origrams: short, original excerpts picked from the contents of the digital work. The origrams are used to create digests, digital fingerprints that identify the piece of work transmitted over the Internet without the need to embed ID tags or any other easily removable metadata in the file.
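
The abstract does not specify how origrams are chosen, so the following Python fragment is only a generic, hypothetical fingerprinting sketch: it samples short excerpts directly from the content and hashes them into a digest, illustrating how a work can be identified without embedded ID tags.

    import hashlib

    def fingerprint(text, excerpt_len=32, step=64):
        """Hypothetical sketch: sample short excerpts ('origram'-style) from the
        content and hash them into a digest; the real origram selection method
        is defined in the thesis, not here."""
        excerpts = [text[i:i + excerpt_len]
                    for i in range(0, max(len(text) - excerpt_len, 1), step)]
        digest = hashlib.sha256()
        for excerpt in excerpts:
            digest.update(excerpt.encode("utf-8"))
        return digest.hexdigest()

    # Identical copies of a work produce identical digests, so transfers of the
    # work can be detected and counted by matching digests.
    print(fingerprint("An example digital work ... " * 20))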

Relevance:

10.00%

Publisher:

Abstract:

Current smartphones have a storage capacity of several gigabytes, and more and more information is stored on mobile devices. To meet the challenge of information organization, we turn to desktop search. Users often possess multiple devices and synchronize (subsets of) information between them, which makes file synchronization increasingly important. This thesis presents Dessy, a desktop search and synchronization framework for mobile devices. Dessy uses desktop search techniques, such as indexing, query and index term stemming, and search relevance ranking. Dessy finds files by their content, metadata, and context information. For example, PDF files may be found by their author, subject, title, or text, and the EXIF data of JPEG files may be used in finding them. User-defined tags can be added to files to organize and retrieve them later. Retrieved files are ranked according to their relevance to the search query. The Dessy prototype uses the BM25 ranking function, widely used in information retrieval. Dessy provides an interface for locating files for both users and applications. Dessy is closely integrated with the Syxaw file synchronizer, which provides efficient file and metadata synchronization, optimizing network usage. Dessy supports synchronization of search results, individual files, and directory trees. It allows finding and synchronizing files that reside on remote computers or on the Internet. Dessy is designed to solve the problem of efficient mobile desktop search and synchronization, also supporting remote and Internet search. Remote searches may be carried out offline using a downloaded index, or while connected to the remote machine over a weak network. To secure user data, transmissions between the Dessy client and server are encrypted using symmetric encryption, and the symmetric keys are exchanged with RSA key exchange. Dessy emphasizes extensibility: even the cryptography can be extended, users may tag their files with context tags and control custom file metadata, and adding new indexed file types, metadata fields, ranking methods, and index types is easy. Finding files is done with virtual directories, which are views into the user's files, browseable by regular file managers. On mobile devices, the Dessy GUI provides easy access to the search and synchronization system. This thesis includes results of Dessy synchronization and search experiments, including power usage measurements. Finally, Dessy has been designed with mobility and device constraints in mind: it requires only MIDP 2.0 Mobile Java with FileConnection support on mobile devices, and Java 1.5 on desktop machines.
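
Since the abstract names BM25, a compact sketch of the standard Okapi BM25 formula may be helpful; this is not Dessy's implementation, and the example documents and the k1 and b values are illustrative defaults.

    import math

    def bm25(query_terms, doc_terms, doc_freq, n_docs, avg_len, k1=1.2, b=0.75):
        """Standard Okapi BM25 score of one document for a query.
        doc_freq maps a term to the number of documents containing it."""
        score = 0.0
        for term in query_terms:
            tf = doc_terms.count(term)
            if tf == 0:
                continue
            df = doc_freq.get(term, 0)
            idf = math.log(1 + (n_docs - df + 0.5) / (df + 0.5))
            norm = tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc_terms) / avg_len))
            score += idf * norm
        return score

    # Rank two indexed files for the query "mobile search".
    docs = {"notes.txt": ["mobile", "search", "sync"], "todo.txt": ["desktop", "search"]}
    df = {"mobile": 1, "search": 2, "sync": 1, "desktop": 1}
    avg = sum(len(t) for t in docs.values()) / len(docs)
    query = ["mobile", "search"]
    ranking = sorted(docs, key=lambda d: bm25(query, docs[d], df, len(docs), avg), reverse=True)
    print(ranking)  # ['notes.txt', 'todo.txt']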

Relevance:

10.00%

Publisher:

Abstract:

It is shown that a method based on the principle of analytic continuation can be used to solve a set of inhomogeneous infinite simultaneous equations encountered in the analysis of surface acoustic wave propagation along the periodically perturbed surface of a piezoelectric medium.

Relevance:

10.00%

Publisher:

Abstract:

This article contributes to the discussion by analysing how users of the leading online 3D printing design repository Thingiverse manage their intellectual property (IP). 3D printing represents a fruitful case study for exploring the relationship between IP norms and practitioner culture. Although additive manufacturing technology has existed for decades, 3D printing is on the cusp of a breakout into the technological mainstream: hardware prices are falling; designs are circulating widely; consumer-friendly platforms are multiplying; and technological literacy is rising. Analysing metadata from more than 68,000 Thingiverse design files collected from the site, we examine the licensing choices made by users and explore how these choices shape sharing practices on the site. We also consider how these choices and practices connect with wider attitudes towards sharing and intellectual property in 3D printing communities. A particular focus of the article is how Thingiverse structures its regulatory framework to avoid IP liability, and the extent to which this may have a bearing on users’ conduct. The paper has four sections. First, we offer a description of Thingiverse and how it operates in the 3D printing ecosystem, noting the legal issues that have arisen regarding Thingiverse’s Terms of Use and its allocation of intellectual property rights; the different types of Thingiverse licences are detailed and explained. Second, the empirical metadata we have collected from Thingiverse is presented, including the methods used to obtain this information. Third, we present findings from this data on licence choice and the public availability of user designs. Fourth, we look at the implications of these findings and state our conclusions regarding the particular kind of sharing ethic present in Thingiverse; we also consider the “closed” aspects of this community and what this means for current debates about “open” innovation.
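
As an illustration of the kind of metadata analysis described above, the Python sketch below tallies licence choices across a dump of design metadata; the file layout and the "license" field name are assumptions for the example, not Thingiverse's actual export format.

    import json
    from collections import Counter

    def tally_licences(path):
        """Count licence choices in a metadata dump, assuming (hypothetically)
        one JSON object per line with a 'license' field."""
        counts = Counter()
        with open(path, encoding="utf-8") as fh:
            for line in fh:
                record = json.loads(line)
                counts[record.get("license", "unspecified")] += 1
        return counts

    # e.g. tally_licences("thingiverse_metadata.jsonl") might yield
    # Counter({"CC BY": ..., "CC BY-NC": ..., "All Rights Reserved": ...})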

Relevance:

10.00%

Publisher:

Abstract:

This paper introduces the META-NORD project, which develops the Nordic and Baltic part of the European open language resource infrastructure. META-NORD works on assembling, linking across languages, and making widely available the basic language resources used by developers, professionals and researchers to build specific products and applications. The goals of the project, its overall approach, and its specific focus lines on wordnets, terminology resources and treebanks are described. Moreover, the results achieved in the first five months of the project, i.e. language whitepapers, the metadata specification and IPR, are presented.

Relevance:

10.00%

Publisher:

Abstract:

Many research institutions and universities across the world are facilitating open access (OA) to their intellectual output through their respective OA institutional repositories (IRs) or through centralized subject-based repositories. The Registry of Open Access Repositories (ROAR) lists more than 2850 such repositories across the world. Awareness of the benefits of OA to scholarly literature and of OA publishing is picking up in India, too. As per the ROAR statistics, to date there are more than 90 OA repositories in the country. India is doing particularly well in publishing open-access journals (OAJs). As per the Directory of Open Access Journals (DOAJ), to date India, with 390 OAJs, is ranked 5th in the world in terms of the number of OAJs published. Much of the research done in India is reported in journals published from India. These journals have limited readership and many of them are not indexed by Web of Science, Scopus or other leading international abstracting and indexing databases. Consequently, research done in the country remains hidden not only from fellow countrymen but also from the international community. This situation can easily be overcome if all researchers facilitate OA to their publications. One of the easiest ways to facilitate OA to scientific literature is through institutional repositories. If every research institution and university in India sets up an open-access IR and ensures that copies of the final accepted versions of all research publications are uploaded into the IRs, then the research done in India will gain far better visibility. The federation of metadata from all the distributed, interoperable OA repositories in the country will serve as a window to the research done across the country. Federation of metadata from distributed OAI-compliant repositories can be easily achieved by setting up harvesting software such as the PKP Harvester. In this paper, we share our experience in setting up a prototype metadata harvesting service using the PKP harvesting software for the OAI-compliant repositories in India.
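
The harvesting step rests on the OAI-PMH protocol. A minimal Python sketch, independent of the PKP Harvester and using a hypothetical repository URL, is shown below to make the ListRecords flow concrete.

    import requests
    import xml.etree.ElementTree as ET

    OAI = "{http://www.openarchives.org/OAI/2.0/}"
    DC = "{http://purl.org/dc/elements/1.1/}"

    def harvest_titles(base_url):
        """Yield Dublin Core titles from an OAI-PMH compliant repository,
        following resumption tokens until the list is exhausted."""
        params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
        while True:
            response = requests.get(base_url, params=params, timeout=30)
            root = ET.fromstring(response.content)
            for record in root.iter(OAI + "record"):
                title = record.find(".//" + DC + "title")
                if title is not None:
                    yield title.text
            token = root.find(".//" + OAI + "resumptionToken")
            if token is None or not (token.text or "").strip():
                break
            params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}

    # for title in harvest_titles("https://repository.example.ac.in/oai"):  # hypothetical URL
    #     print(title)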

Relevance:

10.00%

Publisher:

Abstract:

Maintaining metadata consistency is a critical issue in designing a filesystem. Although satisfactory solutions are available for filesystems residing on magnetic disks, these solutions may not give adequate performance for filesystems residing on flash devices. Prabhakaran et al. have designed a metadata consistency mechanism specifically for flash chips, called Transactional Flash [1]. It uses a cyclic commit mechanism to provide transactional abstractions. Although a significant improvement over conventional journaling techniques, this mechanism has certain drawbacks, such as a complex protocol and the need to read the whole flash during recovery, which slows down the recovery process. In this paper we propose the addition of a thin journaling layer on top of Transactional Flash to simplify the protocol and speed up the recovery process. The simplified protocol, named Quick Recovery Cyclic Commit (QRCC), uses a journal stored on NOR flash for recovery. Our evaluations on an actual raw flash card show that the journal writes add negligible penalty compared to the original Transactional Flash's write performance, while the journal facilitates quick recovery in case of failures.
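
The QRCC protocol itself is not reproduced here; the Python sketch below is only a toy illustration of the underlying idea that a thin journal bounds the recovery scan, with in-memory stand-ins for the NOR journal and the flash metadata pages.

    # Toy model only (not QRCC): metadata writes go to 'flash_pages', and one
    # small commit record per transaction goes to a thin journal on NOR flash,
    # so recovery reads the journal instead of scanning the whole flash.
    journal = []       # stand-in for the NOR-flash journal
    flash_pages = {}   # stand-in for metadata pages on the raw flash

    def commit(txn_id, pages):
        for addr, data in pages.items():
            flash_pages[addr] = (txn_id, data)   # write the pages first
        journal.append(("COMMIT", txn_id))       # then append one commit record

    def recover():
        """Rebuild the set of committed transactions from the journal alone."""
        return {txn for tag, txn in journal if tag == "COMMIT"}

    commit(1, {0x10: b"inode-a"})
    commit(2, {0x20: b"inode-b"})
    print(recover())   # {1, 2}, found without scanning flash_pages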

Relevance:

10.00%

Publisher:

Abstract:

The tonic is a fundamental concept in Indian art music. It is the base pitch, which an artist chooses in order to construct the melodies during a rāg(a) rendition, and all accompanying instruments are tuned using the tonic pitch. Consequently, tonic identification is a fundamental task for most computational analyses of Indian art music, such as intonation analysis, melodic motif analysis and rāga recognition. In this paper we review existing approaches for tonic identification in Indian art music and evaluate them on six diverse datasets for a thorough comparison and analysis. We study the performance of each method in different contexts such as the presence/absence of additional metadata, the quality of audio data, the duration of audio data, music tradition (Hindustani/Carnatic) and the gender of the singer (male/female). We show that the approaches that combine multi-pitch analysis with machine learning provide the best performance in most cases (90% identification accuracy on average), and are robust across the aforementioned contexts compared to the approaches based on expert knowledge. In addition, we also show that the performance of the latter can be improved when additional metadata is available to further constrain the problem. Finally, we present a detailed error analysis of each method, providing further insights into the advantages and limitations of the methods.
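
None of the reviewed methods is reproduced here, but a naive baseline may make the task concrete: fold a pitch track into a single octave and take the most populated bin as the tonic candidate. The sketch below assumes a monophonic pitch track in Hz and ignores the multi-pitch, machine learning and expert-knowledge components the paper evaluates.

    import numpy as np

    def naive_tonic(pitch_hz, ref_hz=55.0, bins_per_octave=120):
        """Naive tonic estimate: histogram of pitch values folded into one
        octave (in cents above ref_hz); the peak bin gives the tonic pitch
        class, returned as a frequency near ref_hz."""
        pitch_hz = np.asarray(pitch_hz, dtype=float)
        pitch_hz = pitch_hz[pitch_hz > 0]                      # drop unvoiced frames
        cents = (1200.0 * np.log2(pitch_hz / ref_hz)) % 1200.0
        hist, edges = np.histogram(cents, bins=bins_per_octave, range=(0.0, 1200.0))
        return ref_hz * 2 ** (edges[np.argmax(hist)] / 1200.0)

    # Frames hovering around D (146.8 Hz), its fifth and its octave fold to
    # nearby pitch classes, so the estimate lands near D in the low reference octave.
    print(naive_tonic([146.8, 147.0, 220.0, 146.5, 293.6]))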

Relevance:

10.00%

Publisher:

Abstract:

The problem of scaling up data integration, such that new sources can be quickly utilized as they are discovered, remains elusive: global schemas for integrated data are difficult to develop and expand, and schema and record matching techniques are limited by the fact that data and metadata are often under-specified and must be disambiguated by data experts. One promising approach is to avoid using a global schema, and instead to develop keyword search-based data integration, where the system lazily discovers associations enabling it to join together matches to keywords, and returns ranked results. The user is expected to understand the data domain and provide feedback about answers' quality. The system generalizes such feedback to learn how to correctly integrate data. A major open challenge is that under this model, the user only sees and offers feedback on a few "top" results: this result set must be carefully selected to include answers of high relevance and answers that are highly informative when feedback is given on them. Existing systems merely focus on predicting relevance by composing the scores of various schema and record matching algorithms. In this paper, we show how to predict the uncertainty associated with a query result's score, as well as how informative feedback on a given result would be. We build upon these foundations to develop an active learning approach to keyword search-based data integration, and we validate the effectiveness of our solution over real data from several very different domains.
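
The paper's actual estimators for score uncertainty and feedback informativeness are its contribution and are not reproduced here; the Python sketch below only illustrates the selection step, mixing predicted relevance with an uncertainty bonus when choosing which results to surface for feedback, using made-up scores.

    def select_for_feedback(results, k=3, explore_weight=0.5):
        """Choose the k results to show the user, trading off predicted
        relevance against how uncertain (and thus informative to label)
        each result is. Both scores are assumed to come from upstream
        schema/record-matching components."""
        ranked = sorted(results,
                        key=lambda r: r["relevance"] + explore_weight * r["uncertainty"],
                        reverse=True)
        return ranked[:k]

    candidates = [
        {"answer": "join(papers, authors)", "relevance": 0.9, "uncertainty": 0.1},
        {"answer": "join(papers, venues)", "relevance": 0.6, "uncertainty": 0.8},
        {"answer": "join(papers, grants)", "relevance": 0.4, "uncertainty": 0.2},
    ]
    for result in select_for_feedback(candidates, k=2):
        print(result["answer"])   # the uncertain-but-plausible join is surfaced too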

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we present the Bi-Modal Cache, a flexible stacked DRAM cache organization which simultaneously achieves several objectives: (i) improved cache hit ratio, (ii) moving the tag storage overhead to DRAM, (iii) lower cache hit latency than tags-in-SRAM, and (iv) reduction in off-chip bandwidth wastage. The Bi-Modal Cache addresses the miss rate versus off-chip bandwidth dilemma by organizing the data in a bi-modal fashion: blocks with high spatial locality are organized as large blocks and those with little spatial locality as small blocks. By adaptively selecting the right granularity of storage for individual blocks at run-time, the proposed DRAM cache organization is able to make judicious use of the available DRAM cache capacity as well as reduce the off-chip memory bandwidth consumption. The Bi-Modal Cache improves cache hit latency despite moving the metadata to DRAM by means of a small SRAM-based Way Locator. Further, by leveraging the tremendous internal bandwidth and capacity that stacked DRAM organizations provide, the Bi-Modal Cache enables efficient concurrent accesses to tags and data to reduce hit time. Through detailed simulations, we demonstrate that the Bi-Modal Cache achieves an overall performance improvement (in terms of Average Normalized Turnaround Time (ANTT)) of 10.8%, 13.8% and 14.0% in 4-core, 8-core and 16-core workloads, respectively.
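
The paper's allocation policy, tag layout and Way Locator are not reproduced here; the Python fragment below is only a toy illustration of the general idea of picking a block granularity from observed spatial locality, with assumed block sizes and threshold.

    # Toy illustration only (assumed sizes and threshold, not the paper's policy):
    # a footprint bitmap records which 64B chunks of a 512B region were touched,
    # and the fraction touched decides the allocation granularity.
    SMALL_BLOCK = 64     # bytes, assumed small-block granularity
    LARGE_BLOCK = 512    # bytes, assumed large-block granularity

    def choose_granularity(footprint, threshold=0.5):
        touched = sum(footprint) / len(footprint)
        return LARGE_BLOCK if touched >= threshold else SMALL_BLOCK

    print(choose_granularity([1, 1, 1, 1, 1, 0, 1, 1]))   # 512: high spatial locality
    print(choose_granularity([1, 0, 0, 0, 0, 0, 0, 0]))   # 64: avoid wasting bandwidth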

Relevance:

10.00%

Publisher:

Abstract:

In the domain of education there is a large quantity and diversity of multimedia material that can be used in teaching and that constitutes an important contribution to the teaching-learning process. Much of this material is accessible through different learning object repositories, where each object has descriptive metadata. These metadata make it possible to retrieve objects that satisfy not only the topic of the query but also the user profile, taking into account the user's characteristics and preferences. This work presents a proposal for a learning object recommender system that helps a user find suitable learning objects.
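
A minimal Python sketch of such a recommender is shown below; the metadata field names ("keywords", "language", "format") loosely follow IEEE LOM and are assumptions for the example, not the fields of the proposed system.

    def recommend(objects, topic, profile, k=5):
        """Rank learning objects by topic match plus fit to the user profile.
        Field names are illustrative assumptions, loosely inspired by IEEE LOM."""
        def score(obj):
            s = 1.0 if topic.lower() in [kw.lower() for kw in obj.get("keywords", [])] else 0.0
            if obj.get("language") == profile.get("language"):
                s += 0.5
            if obj.get("format") in profile.get("preferred_formats", []):
                s += 0.5
            return s
        return sorted(objects, key=score, reverse=True)[:k]

    objects = [
        {"title": "Sorting explained (video)", "keywords": ["sorting"], "language": "es", "format": "video"},
        {"title": "Sorting notes (PDF)", "keywords": ["sorting"], "language": "en", "format": "pdf"},
    ]
    profile = {"language": "es", "preferred_formats": ["video"]}
    print([o["title"] for o in recommend(objects, "sorting", profile)])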

Relevance:

10.00%

Publisher:

Abstract:

It presents a proposal for a knowledge map model, as an informational tool for competency management, applied to the Legislative Chamber of the Federal District (Câmara Legislativa do Distrito Federal, CLDF) to assist the legislative governance process. The concepts of managerial public administration, competency, competency in organizations, and selected models, methods and techniques of competency management are addressed and discussed. It also presents the mapping of the areas of competency existing in the CLDF, and the modelling and classification of competencies by area. The proposed model involved the construction of a data model; an institutional taxonomy; an information architecture, with the design of an institutional metadata standard and of repositories for the taxonomy and the metadata; and the definition of the organizational units responsible for managing the content and operating the system, together with their duties and responsibilities. Finally, it recommends applying the model and extending the study to public institutions, particularly the institutions of the municipal, state and federal legislative branches.

Relevance:

10.00%

Publisher:

Abstract:

Presented at: IX Congreso Internacional de Rehabilitación del Patrimonio Arquitectónico y Edificación (Seville, Spain, 9-12 July 2008)