347 results for Debugging in computer science.
Abstract:
The Link the Wiki track at INEX 2008 offered two tasks: file-to-file link discovery and anchor-to-BEP link discovery. In the former, 6,600 topics were used; in the latter, 50. Manual assessment of the anchor-to-BEP runs was performed using a tool developed for the purpose. Runs were evaluated using standard precision and recall measures such as MAP and precision/recall graphs. 10 groups participated, and the approaches they took are discussed. Final evaluation results for all runs are presented.
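As a rough illustration of the evaluation measure mentioned above (not the track's actual assessment tooling), the following is a minimal sketch of average precision for one topic and MAP over a set of topics; the run representation and names are assumptions.

```python
def average_precision(ranked_links, relevant):
    """Average precision for one topic: mean of the precision values
    at the rank of each relevant link, divided by the number of
    relevant links."""
    hits, precisions = 0, []
    for rank, link in enumerate(ranked_links, start=1):
        if link in relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    """MAP: mean of per-topic average precision.
    `runs` is a list of (ranked_links, relevant_set) pairs."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

# Toy example: one topic with three relevant links, two of them retrieved.
print(mean_average_precision([(["a", "b", "c", "d"], {"a", "c", "x"})]))
```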
Abstract:
We derive an explicit method of computing the composition step in Cantor’s algorithm for group operations on Jacobians of hyperelliptic curves. Our technique is inspired by the geometric description of the group law and applies to hyperelliptic curves of arbitrary genus. While Cantor’s general composition involves arithmetic in the polynomial ring F_q[x], the algorithm we propose solves a linear system over the base field which can be written down directly from the Mumford coordinates of the group elements. We apply this method to give more efficient formulas for group operations in both affine and projective coordinates for cryptographic systems based on Jacobians of genus 2 hyperelliptic curves in general form.
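For background (standard material, not taken from the paper itself): reduced divisors on a hyperelliptic curve are usually handled in Mumford coordinates, which is the representation the composition step operates on. A minimal LaTeX restatement of that representation and of the linear-algebra idea described in the abstract:

```latex
% Standard Mumford representation of a reduced divisor class D on a
% genus-g hyperelliptic curve y^2 + h(x)y = f(x) over F_q:
\[
  D = \bigl(u(x),\, v(x)\bigr), \qquad
  u \text{ monic},\quad \deg v < \deg u \le g,\quad
  u \mid v^{2} + h\,v - f .
\]
% For genus 2, the geometric group law interpolates a function y = \ell(x)
% through the points of the two input divisors; the abstract's method
% obtains the coefficients of that interpolating function by solving a
% linear system over F_q whose entries are read off from
% (u_1, v_1) and (u_2, v_2), instead of working in the ring F_q[x].
```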
Abstract:
This paper presents an extended granule mining based methodology to effectively describe the relationships between granules not only by traditional support and confidence, but by diversity and condition diversity as well. Diversity measures how diverse a granule's associations with other granules are, and it provides a novel kind of knowledge in databases. We also provide an algorithm to implement the proposed methodology. Experiments conducted to characterize a real network traffic data collection show that the proposed concepts and algorithm are promising.
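The abstract does not spell out the diversity measure; purely as one hypothetical reading, a granule's diversity could be taken as the fraction of other granules it is associated with. The toy sketch below illustrates only that reading, not the paper's definition.

```python
def diversity(granule, associations, all_granules):
    """Hypothetical diversity measure: the fraction of the other granules
    that `granule` is associated with (here an association is simply a
    pair recorded in `associations`). Not the paper's actual definition."""
    others = [g for g in all_granules if g != granule]
    if not others:
        return 0.0
    linked = sum(1 for g in others if (granule, g) in associations)
    return linked / len(others)

# Toy example: three granules, g1 associated with both of the others.
assocs = {("g1", "g2"), ("g1", "g3")}
print(diversity("g1", assocs, ["g1", "g2", "g3"]))  # 1.0
```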
Abstract:
Most existing requirements engineering approaches focus on the modelling and specification of IT artefacts, ignoring the environment where the application is deployed. Although some requirements engineering approaches consider the stakeholders' goals, they still focus on the specification of IT artefacts. However, IT artefacts are embedded in a dynamic organisational environment, and their design and specification cannot be separated from the environment's constant evolution. Therefore, during the initial stages of a requirements engineering process it is advantageous to consider the integration of IT design with organisational design. We have proposed the ADMITO (Analysis, Design and Management of IT and Organisations) approach to represent the dynamic relations between social and material entities, where the latter are divided into technological and organisational entities. In this paper we show how, by using ADMITO in a concrete case, the Queensland Health Payroll (QHP) case, it is possible to have an integrated representation of IT and organisational design supporting organisational change and IT requirements specification.
Abstract:
Fusion techniques have received considerable attention for achieving lower error rates with biometrics. A fused classifier architecture based on sequential integration of multi-instance and multi-sample fusion schemes allows a controlled trade-off between false alarms and false rejects. Expressions for each type of error for the fused system have previously been derived for the case of statistically independent classifier decisions. It is shown in this paper that the performance of this architecture can be improved by modelling the correlation between classifier decisions. Correlation modelling also enables better tuning of the fusion model parameters 'N', the number of classifiers, and 'M', the number of attempts/samples, and facilitates the determination of error bounds for false rejects and false accepts for each specific user. The error trade-off performance of the architecture is evaluated using HMM-based speaker verification on utterances of individual digits. Results show that performance is improved in the case of favourably correlated decisions. The architecture investigated here is directly applicable to speaker verification from spoken digit strings such as credit card numbers in telephone or voice over internet protocol based applications. It is also applicable to other biometric modalities such as fingerprints and handwriting samples.
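To make the error trade-off concrete, here is a minimal sketch of false-accept and false-reject rates under the independence assumption mentioned above, for a hypothetical fusion rule: N classifiers combined with an AND rule, followed by up to M attempts combined with an OR rule. These specific rules, names, and parameters are illustrative assumptions, not the paper's architecture, and the correlation modelling the paper proposes is not reflected here.

```python
def fused_error_rates(far, frr, n_classifiers, m_attempts):
    """Illustrative error rates assuming statistically independent decisions.
    AND-fusion over N classifiers: accept only if all N accept.
    OR-fusion over M attempts: accept if any single attempt is accepted."""
    # Per-attempt rates after AND-fusion of N independent classifiers.
    far_attempt = far ** n_classifiers
    frr_attempt = 1.0 - (1.0 - frr) ** n_classifiers
    # Overall rates after allowing up to M independent attempts.
    far_total = 1.0 - (1.0 - far_attempt) ** m_attempts
    frr_total = frr_attempt ** m_attempts
    return far_total, frr_total

# Toy numbers: 2% FAR and 5% FRR per classifier, 3 classifiers, 2 attempts.
print(fused_error_rates(far=0.02, frr=0.05, n_classifiers=3, m_attempts=2))
```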
Abstract:
The development of text classification techniques has been largely promoted in the past decade due to the increasing availability and widespread use of digital documents. Usually, the performance of text classification relies on the quality of categories and the accuracy of classifiers learned from samples. When training samples are unavailable or categories are unqualified, text classification performance is degraded. In this paper, we propose an unsupervised multi-label text classification method that classifies documents using a large set of categories stored in a world ontology. The approach has been promisingly evaluated by comparison with typical text classification methods, using a real-world document collection and ground truth encoded by human experts.
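As a rough illustration of training-free, ontology-based multi-label classification (not the paper's actual algorithm), one can score each ontology category by term overlap with the document and keep the best-scoring categories. The ontology structure and scoring below are assumptions.

```python
from collections import Counter

def classify_with_ontology(doc_terms, ontology, top_k=3):
    """Toy unsupervised multi-label classifier: score each category by the
    overlap between the document's terms and the category's label terms,
    then return up to top_k categories with a non-zero score."""
    doc = Counter(doc_terms)
    scores = {
        category: sum(doc[t] for t in label_terms)
        for category, label_terms in ontology.items()
    }
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [c for c, s in ranked[:top_k] if s > 0]

# Toy ontology with two categories described by a few label terms each.
ontology = {"Security": {"encryption", "key"}, "Databases": {"query", "index"}}
print(classify_with_ontology(["encryption", "key", "query"], ontology, top_k=2))
```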
Abstract:
Programming is a subject that many beginning students find difficult. This paper describes a knowledge base designed for the purpose of analyzing programs written in the PHP web development language. The aim is to use this knowledge base in an Intelligent Tutoring System that will provide effective feedback to students. The main focus of this research is that a programming exercise can have many correct solutions. This paper presents an overview of how the proposed knowledge base can be utilized to accept different solutions to a given exercise.
Abstract:
Programming is a subject that many beginning students find difficult. This paper describes a knowledge base designed for the purpose of analyzing programs written in the PHP web development language. The aim is to use this knowledge base in an Intelligent Tutoring System. The main emphasis is on accepting alternative solutions to a given problem.
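As a toy illustration of accepting alternative solutions (not the knowledge base design these papers describe), one can normalise student programs, for example by stripping comments and whitespace and renaming variables consistently, and then compare them against several stored reference solutions. Everything below, including the helper names, is an assumption for illustration.

```python
import re

def normalise(php_source):
    """Very rough normalisation of a PHP snippet: drop comments and
    whitespace and rename variables in order of first appearance, so that
    trivially different but equivalent programs compare equal."""
    src = re.sub(r"//[^\n]*|/\*.*?\*/", "", php_source, flags=re.S)
    tokens = re.findall(r"\$\w+|\w+|\S", src)
    names, out = {}, []
    for tok in tokens:
        if tok.startswith("$"):
            names.setdefault(tok, f"$v{len(names)}")
            tok = names[tok]
        out.append(tok)
    return " ".join(out)

def matches_any(student_code, reference_solutions):
    """Accept the submission if it normalises to any stored solution."""
    norm = normalise(student_code)
    return any(norm == normalise(ref) for ref in reference_solutions)

refs = ["$sum = $a + $b; echo $sum;"]
print(matches_any("$total = $a + $b;  echo $total; // answer", refs))  # True
```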
Abstract:
Most approaches to business process compliance are restricted to the analysis of the structure of processes. It has been argued that full regulatory compliance requires information not only on the structure of processes but also on what the tasks in a process do. To this end, Governatori and Sadiq [2007] proposed to extend business processes with semantic annotations. We propose a methodology to automatically extract one kind of such annotations; in particular, the annotations related to the data schemas and templates linked to the various tasks in a business process.
Abstract:
The quality of discovered features in relevance feedback (RF) is the key issue for effective search queries. Most existing feedback methods do not carefully address the issue of selecting features for noise reduction. As a result, extracted noisy features can easily harm retrieval effectiveness. In this paper, we propose a novel feature extraction method for query formulation. This method first extracts term association patterns in RF as knowledge for feature extraction. Negative RF is then used to improve the quality of the discovered knowledge. A novel information filtering (IF) model is developed to evaluate the proposed method. Experimental results on the Reuters Corpus Volume 1 and TREC topics confirm that the proposed model achieves encouraging performance compared to state-of-the-art IF models.
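As a hypothetical illustration of the idea (not the paper's actual pattern-mining method), one can extract co-occurring term pairs from positive feedback documents and then discard pairs that also appear in negative feedback documents as a crude noise filter. The representation and thresholds below are assumptions.

```python
from collections import Counter
from itertools import combinations

def term_pairs(doc_terms):
    """Unordered term pairs co-occurring in a single document."""
    return set(combinations(sorted(set(doc_terms)), 2))

def select_features(positive_docs, negative_docs, min_support=2):
    """Toy feature selection: keep term pairs that are frequent in positive
    feedback and drop those that also appear in negative feedback."""
    pos = Counter(p for d in positive_docs for p in term_pairs(d))
    neg = {p for d in negative_docs for p in term_pairs(d)}
    return {p for p, c in pos.items() if c >= min_support and p not in neg}

pos_docs = [["oil", "price", "market"], ["oil", "price", "rise"]]
neg_docs = [["olive", "oil", "recipe"]]
print(select_features(pos_docs, neg_docs))  # {('oil', 'price')}
```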
Abstract:
There are many use cases in business process management that require the comparison of behavioral models. For instance, verifying equivalence is the basis for assessing whether a technical workflow correctly implements a business process, or whether a process realization conforms to a reference process. This paper proposes an equivalence relation for models that describe behaviors based on the concurrency semantics of net theory and for which an alignment relation has been defined. This equivalence, called isotactics, preserves the level of concurrency of aligned operations. Furthermore, we elaborate on the conditions under which an alignment relation can be classified as an abstraction. Finally, we show that alignment relations induced by structural refinements of behavioral models are indeed behavioral abstractions.
Abstract:
Timed-release cryptography addresses the problem of “sending messages into the future”: information is encrypted so that it can only be decrypted after a certain amount of time, either (a) with the help of a trusted third party time server, or (b) after a party performs the required number of sequential operations. We generalise the latter case to what we call effort-release public key encryption (ER-PKE), where only the party holding the private key corresponding to the public key can decrypt, and only after performing a certain amount of computation which may or may not be parallelisable. Effort-release PKE generalises both the sequential-operation-based timed-release encryption of Rivest, Shamir, and Wagner, and also the encapsulated key escrow techniques of Bellare and Goldwasser. We give a generic construction for ER-PKE based on the use of moderately hard computational problems called puzzles. Our approach extends the KEM/DEM framework for public key encryption by introducing a difficulty notion for KEMs which results in effort-release PKE. When the puzzle used in our generic construction is non-parallelisable, we recover timed-release cryptography, with the addition that only the designated receiver (in the public key setting) can decrypt.
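For background on case (b), the sequential-operation approach of Rivest, Shamir, and Wagner referenced above relies on repeated squaring modulo an RSA modulus: the puzzle creator, knowing the factorisation, can prepare the puzzle cheaply, while the solver must perform t squarings in sequence. The sketch below shows that mechanism only, with toy parameters and a simplified additive key-masking step; it is not the ER-PKE construction of this paper.

```python
def create_puzzle(p, q, t, key):
    """Puzzle creator knows p and q, so it can reduce the exponent 2^t
    modulo phi(n) and mask `key` with a^(2^t) mod n in a few modular
    exponentiations, independent of t."""
    n, phi = p * q, (p - 1) * (q - 1)
    a = 2                               # public base (toy choice)
    b = pow(a, pow(2, t, phi), n)       # fast: exponent reduced mod phi(n)
    return n, a, t, (key + b) % n       # masked key

def solve_puzzle(n, a, t, masked_key):
    """Solver performs t sequential squarings, believed non-parallelisable."""
    b = a % n
    for _ in range(t):
        b = b * b % n
    return (masked_key - b) % n

# Tiny toy primes, nowhere near a secure modulus.
n, a, t, c = create_puzzle(p=10007, q=10009, t=10_000, key=42)
print(solve_puzzle(n, a, t, c))  # 42
```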
Abstract:
We introduce the concept of Revocable Predicate Encryption (RPE), which extends the current predicate encryption setting with revocation support: private keys can be used to decrypt an RPE ciphertext only if they match the decryption policy (defined via attributes encoded into the ciphertext and predicates associated with private keys) and were not revoked by the time the ciphertext was created. We formalize the notion of attribute hiding in the presence of revocation and propose an RPE scheme, called AH-RPE, which achieves attribute hiding under the Decision Linear assumption in the standard model. We then present a stronger privacy notion, termed full hiding, which further addresses the privacy of revoked users. We propose another RPE scheme, called FH-RPE, that adopts the Subset Cover Framework and offers full hiding under the Decision Linear assumption in the standard model. The scheme offers very flexible privacy-preserving access control to encrypted data and can be used in sender-local revocation scenarios.