302 results for ICGS (Electronic computer system)
Abstract:
Private title insurance has been the subject of much debate by law reform bodies and academics. This article adds a new dimension to the discussion by analysing its role against a recent scenario in which a nun was betrayed by the actions of her brother, and compensation payable from the assurance fund, after much challenge by the registrar, exceeded $4 million. We ask whether the slow burning of title insurance into the psyche of Australian home purchasers will see state-based assurance funds looking to minimise their role in the Torrens system. We also query how the rather more immediate establishment of electronic conveyancing will alter the balance between the assurance fund, private title insurance and the increasing responsibilities on stakeholders involved in conveyancing.
Abstract:
We describe an investigation into how Massey University’s Pollen Classifynder can accelerate the understanding of pollen and its role in nature. The Classifynder is an imaging microscopy system that can locate, image and classify slide-based pollen samples. Given the laboriousness of purely manual image acquisition and identification, it is vital to exploit assistive technologies like the Classifynder to enable acquisition and analysis of pollen samples. It is also vital that we understand the strengths and limitations of automated systems so that they can be used (and improved) to complement the strengths and weaknesses of human analysts to the greatest extent possible. This article reviews some of our experiences with the Classifynder system and our exploration of alternative classifier models to enhance both accuracy and interpretability. Our experiments in the pollen analysis problem domain have been based on samples from the Australian National University’s pollen reference collection (2,890 grains, 15 species) and images bundled with the Classifynder system (400 grains, 4 species). These samples have been represented using the Classifynder image feature set. We additionally work through a real-world case study where we assess the ability of the system to determine the pollen make-up of samples of New Zealand honey. In addition to the Classifynder’s native neural network classifier, we have evaluated linear discriminant, support vector machine, decision tree and random forest classifiers on these data, with encouraging results. Our hope is that our findings will help enhance the performance of future releases of the Classifynder and other systems for accelerating the acquisition and analysis of pollen samples.
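The kind of classifier comparison described above can be sketched in miniature. The snippet below is an illustration only: it uses synthetic Gaussian feature vectors as stand-ins for the Classifynder’s image features (which are not reproduced here) and compares two simple stdlib-only classifiers, nearest-centroid and 1-nearest-neighbour, on a hold-out split.

```python
import math
import random

random.seed(0)

def make_class(mean, n=60, spread=1.0):
    # Synthetic stand-ins for Classifynder image features (hypothetical).
    return [[random.gauss(m, spread) for m in mean] for _ in range(n)]

# Two well-separated "species" in a 4-dimensional feature space.
data = [(v, 0) for v in make_class([0, 0, 0, 0])] + \
       [(v, 1) for v in make_class([3, 3, 3, 3])]
random.shuffle(data)
train, test = data[:90], data[90:]

def nearest_centroid(train, x):
    # Classify by distance to each class's mean feature vector.
    groups = {}
    for feats, y in train:
        groups.setdefault(y, []).append(feats)
    centroid = lambda vs: [sum(c) / len(c) for c in zip(*vs)]
    return min(groups, key=lambda y: math.dist(x, centroid(groups[y])))

def one_nn(train, x):
    # Classify by the label of the single closest training sample.
    return min(train, key=lambda fy: math.dist(x, fy[0]))[1]

def accuracy(clf):
    return sum(clf(train, x) == y for x, y in test) / len(test)

acc_nc, acc_1nn = accuracy(nearest_centroid), accuracy(one_nn)
```

The same harness shape (fixed train/test split, one `accuracy` call per model) is how one would slot in linear discriminant, SVM or random forest implementations for a fair side-by-side comparison.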
Abstract:
This chapter examines the ways in which notions of ‘a good citizen’ and ‘civic virtue’ have been conceptualized in the new Civics and Citizenship Curriculum for students in Years 3–10 in Australia. It argues that whilst Civics and Citizenship Education (CCE) has, over time and in various ways, been recognized as a significant aspect of Australian education, only recently has attention been given to the relational and multidimensional conceptions of citizenship. Considerations of ‘morality’, ‘a good citizen’ and ‘civic virtue’ offer possibilities to engage with multidimensional notions of citizenship, which acknowledge that citizenship perspectives can be affected by personal, social, spatial and temporal situations (Cogan & Derricott, 2000). In the current statement on national goals for schooling in Australia, which informed the development of CCE, the Melbourne Declaration (MCEETYA, 2008) called for young Australians to be educated to “act with moral and ethical integrity” and be “committed to national values of democracy, equity and justice, and participate in Australia’s civic life” (MCEETYA, 2008, pp. 8–9). The chapter claims that this maximal emphasis (McLaughlin, 1992), based on active, values-based and interpretive approaches to democratic citizenship which encourage debate and participation in civil society, was evident in the new Civics and Citizenship Curriculum. However, it contends that the recommendations of the recent Review of the Australian Curriculum: Final Report (Australian Government, 2014a, 2014b) will now limit CCE’s potential to deliver the sort of active and informed citizenship heralded by the Melbourne Declaration. This is because the Review advocates a content-focused minimal (McLaughlin, 1992) emphasis on civic knowledge, with diminished attention to citizenship participation and processes. In doing so, the Review foregrounds conceptions of the ‘good citizen’ in the more limited terms of responsibility, obligations and compliance with the status quo.
Abstract:
We treat the security of group key exchange (GKE) in the universal composability (UC) framework. Analyzing GKE protocols in the UC framework naturally addresses attacks by malicious insiders. We define an ideal functionality for GKE that captures contributiveness in addition to other desired security goals. We show that an efficient two-round protocol securely realizes the proposed functionality in the random oracle model. As a result, we obtain the most efficient UC-secure contributory GKE protocol known.
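For readers unfamiliar with the two-round shape of such protocols, the classic Burmester–Desmedt group key exchange is the standard example. The toy sketch below illustrates that shape only; it is not the paper’s UC-secure contributory protocol, uses a toy-sized prime, and omits authentication entirely.

```python
import random

# Toy Burmester-Desmedt two-round group key exchange (unauthenticated,
# illustrative only -- NOT the UC-secure protocol from the abstract).
p = 2**61 - 1   # small Mersenne prime; real deployments use vetted groups
g = 3
n = 5           # number of parties
random.seed(1)

x = [random.randrange(2, p - 1) for _ in range(n)]  # secret exponents
z = [pow(g, xi, p) for xi in x]                     # round 1: broadcast z_i = g^x_i

# Round 2: each party broadcasts X_i = (z_{i+1} / z_{i-1})^{x_i} mod p.
# pow(a, -1, p) computes a modular inverse (Python 3.8+).
X = [pow(z[(i + 1) % n] * pow(z[(i - 1) % n], -1, p) % p, x[i], p)
     for i in range(n)]

def session_key(i):
    # K_i = z_{i-1}^{n*x_i} * X_i^{n-1} * X_{i+1}^{n-2} * ... * X_{i+n-2}^1
    k = pow(z[(i - 1) % n], n * x[i], p)
    for j in range(n - 1):
        k = k * pow(X[(i + j) % n], n - 1 - j, p) % p
    return k

keys = [session_key(i) for i in range(n)]
# All parties derive the same key g^(x_1 x_2 + x_2 x_3 + ... + x_n x_1).
```

The telescoping exponents are why every party lands on the same value; insider security and contributiveness, the concerns of the abstract above, require considerably more machinery than this sketch shows.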
Abstract:
A key exchange protocol allows a set of parties to agree upon a secret session key over a public network. Two-party key exchange (2PKE) protocols have been rigorously analyzed under various models considering different adversarial actions. However, the analysis of group key exchange (GKE) protocols has not been as extensive as that of 2PKE protocols. Particularly, the security attribute of key compromise impersonation (KCI) resilience has so far been ignored for the case of GKE protocols. We first model the security of GKE protocols addressing KCI attacks by both outsider and insider adversaries. We then show that a few existing protocols are not secure even against outsider KCI attacks. The attacks on these protocols demonstrate the necessity of considering KCI resilience for GKE protocols. Finally, we give a new proof of security for an existing GKE protocol under the revised model assuming random oracles.
Abstract:
The security of strong designated verifier (SDV) signature schemes has thus far been analyzed only in a two-user setting. We observe that security in a two-user setting does not necessarily imply the same in a multi-user setting for SDV signatures. Moreover, we show that existing security notions do not adequately model the security of SDV signatures even in a two-user setting. We then propose revised notions of security in a multi-user setting and show that no existing scheme satisfies these notions. A new SDV signature scheme is then presented and proven secure under the revised notions in the standard model. For the purpose of constructing the SDV signature scheme, we propose a one-pass key establishment protocol in the standard model, which is of independent interest in itself.
Abstract:
Providing support for reversible transformations as a basis for round-trip engineering is a significant challenge in model transformation research. While there are a number of current approaches, they require the underlying transformation to exhibit an injective behaviour when reversing changes. This, however, does not serve all practical transformations well. In this paper, we present a novel approach to round-trip engineering that does not place restrictions on the nature of the underlying transformation. Based on abductive logic programming, it allows us to compute a set of legitimate source changes that equate to a given change to the target model. Encouraging results are derived from an initial prototype that supports most concepts of the Tefkat transformation language.
Abstract:
Digital forensics relates to the investigation of a crime or other suspect behaviour using digital evidence. Previous work has dealt with the forensic reconstruction of computer-based activity on single hosts, but with the additional complexity involved with a distributed environment, a Web services-centric approach is required. A framework for this type of forensic examination needs to allow for the reconstruction of transactions spanning multiple hosts, platforms and applications. A tool implementing such an approach could be used by an investigator to identify scenarios of Web services being misused, exploited, or otherwise compromised. This information could be used to redesign Web services in order to mitigate identified risks. This paper explores the requirements of a framework for performing effective forensic examinations in a Web services environment. This framework will be necessary in order to develop forensic tools and techniques for use in service oriented architectures.
Abstract:
The Queensland Department of Public Works (QDPW) and the Queensland Department of Main Roads (QDMR) have identified a need for industry e-contracting guidelines in the short to medium term. Each of these organisations conducts tenders and contracts for over $600 million annually. This report considers the security and legal issues relating to the shift from a paper-based tendering system to an electronic tendering system. The research objectives derived from the industry partners include:
• a review of current standards and e-tendering systems;
• a summary of legal requirements impacting upon e-tendering;
• an analysis of the threats and requirements for any e-tendering system;
• the identification of outstanding issues;
• an evaluation of possible e-tendering architectures;
• recommendations for e-tendering systems.
Abstract:
Forensic analysis requires the acquisition and management of many different types of evidence, including individual disk drives, RAID sets, network packets, memory images, and extracted files. Often the same evidence is reviewed by several different tools or examiners in different locations. We propose a backwards-compatible redesign of the Advanced Forensic Format, an open, extensible file format for storing and sharing evidence, arbitrary case-related information and analysis results among different tools. The new specification, termed AFF4, is designed to be simple to implement, built upon the well-supported ZIP file format specification. Furthermore, the AFF4 implementation remains backward compatible with existing AFF files.
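The design choice of building on ZIP can be illustrated with a minimal stand-alone sketch. The member names and metadata schema below are hypothetical, not the AFF4 specification: the point is only that evidence streams and case information can live together in a container that any ZIP-aware tool can open.

```python
import io
import json
import zipfile

# Stand-in for a segment of acquired evidence (e.g. part of a disk image).
evidence = bytes(range(256)) * 4
# Hypothetical case metadata; AFF4 defines its own information model.
metadata = {"case": "2009-001", "examiner": "jdoe", "tool": "imager-0.1"}

# Write evidence and metadata into one ZIP-based container.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("streams/disk0/00000000", evidence)
    zf.writestr("information.json", json.dumps(metadata))

# Any ZIP-aware tool can read the container back without a bespoke parser.
with zipfile.ZipFile(buf) as zf:
    recovered = zf.read("streams/disk0/00000000")
    info = json.loads(zf.read("information.json"))
```

Reusing an established container format is what makes the "simple to implement" claim plausible: every mainstream language already ships a ZIP reader.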
Abstract:
Current regulatory requirements on data privacy make it increasingly important for enterprises to be able to verify and audit their compliance with their privacy policies. Traditionally, a privacy policy is written in a natural language. Such policies inherit the potential ambiguity, inconsistency and misinterpretation of natural text. Hence, formal languages are emerging to allow a precise specification of enforceable privacy policies that can be verified. The EP3P language is one such formal language. An EP3P privacy policy of an enterprise consists of many rules. Given the semantics of the language, there may exist some rules in the ruleset which can never be used; these rules are referred to as redundant rules. Redundancies adversely affect privacy policies in several ways. Firstly, redundant rules reduce the efficiency of operations on privacy policies. Secondly, they may misdirect the policy auditor when determining the outcome of a policy. Therefore, in order to address these deficiencies it is important to identify and resolve redundancies. This thesis introduces the concept of a minimal privacy policy: a policy that is free of redundancy. The essential component for maintaining the minimality of privacy policies is determining the effects of the rules on each other. Hence, redundancy detection and resolution frameworks are proposed. Pair-wise redundancy detection is the central concept in these frameworks; it suggests a pair-wise comparison of the rules in order to detect redundancies. In addition, the thesis introduces a policy management tool that assists policy auditors in performing several operations on an EP3P privacy policy while maintaining its minimality. Formal results comparing alternative notions of redundancy, and how they would affect the tool, are also presented.
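Pair-wise redundancy detection can be sketched with a deliberately simplified rule model. The tuple encoding and wildcard semantics below are illustrative assumptions, not EP3P: a rule is flagged redundant when some other rule with the same ruling covers every request it covers.

```python
# Toy privacy-policy rules: (purpose, data, recipient, ruling), where "*"
# matches any value. This model is a simplification, not EP3P semantics.
rules = [
    ("marketing", "email",   "third-party", "deny"),
    ("*",         "email",   "third-party", "deny"),   # subsumes the rule above
    ("billing",   "address", "internal",    "allow"),
]

def subsumes(a, b):
    """True if rule `a` covers every request `b` covers, with the same ruling."""
    *cond_a, ruling_a = a
    *cond_b, ruling_b = b
    return ruling_a == ruling_b and all(
        fa == "*" or fa == fb for fa, fb in zip(cond_a, cond_b))

def redundant(rules):
    """Pair-wise comparison: a rule is redundant if another rule subsumes it."""
    return [r for i, r in enumerate(rules)
            if any(j != i and subsumes(s, r) for j, s in enumerate(rules))]

# The minimal policy keeps only the non-redundant rules.
minimal = [r for r in rules if r not in redundant(rules)]
```

A real framework must also account for rule ordering and interaction effects; the pair-wise scan above only shows the basic comparison step.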
Abstract:
Monitoring unused or dark IP addresses offers opportunities to extract useful information about both on-going and new attack patterns. In recent years, different techniques have been used to analyze such traffic, including sequential analysis, where a change in traffic behavior, for example a change in mean, is used as an indication of malicious activity. Change points themselves say little about the detected change; further data processing is necessary to extract useful information and to identify the exact cause of the change, which is difficult given the size and nature of the observed traffic. In this paper, we address the problem of analyzing a large volume of such traffic by correlating change points identified in different traffic parameters. The significance of the proposed technique is two-fold. Firstly, it automatically extracts information related to change points by correlating change points detected across multiple traffic parameters. Secondly, it validates a detected change point through the simultaneous presence of another change point in a different parameter. Using a real network trace collected from unused IP addresses, we demonstrate that the proposed technique enables us not only to validate the change point but also to extract useful information about its causes.
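The correlate-and-validate idea can be illustrated with a small sketch. The mean-split change point detector and the synthetic step-change series below are illustrative stand-ins, not the paper’s method or data: a change point in one traffic parameter is confirmed when another parameter changes within a small window.

```python
def change_point(series, threshold=2.0):
    """Return the split index with the largest mean shift above threshold."""
    best_t, best_gap = None, threshold
    for t in range(1, len(series)):
        left = sum(series[:t]) / t
        right = sum(series[t:]) / (len(series) - t)
        if abs(right - left) > best_gap:
            best_t, best_gap = t, abs(right - left)
    return best_t

# Synthetic darknet-style traffic parameters with a step change
# (stand-ins for real per-interval counts).
packets = [10] * 50 + [40] * 50   # packets per interval
sources = [5] * 52 + [20] * 48    # distinct source addresses per interval

cp_pkts = change_point(packets)
cp_srcs = change_point(sources)

# Cross-parameter validation: a change point is confirmed when another
# parameter also changes within a small window around it.
confirmed = (cp_pkts is not None and cp_srcs is not None
             and abs(cp_pkts - cp_srcs) <= 5)
```

A lone change point in one parameter might be noise; the near-simultaneous shift in a second parameter is what lets the analyst both validate it and reason about its cause.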
Abstract:
SITDRM is a privacy protection system that protects private data through the enforcement of MPEG REL licenses provided by consumers. Direct issuing of licenses by consumers has several usability problems, which are discussed in this paper. Further, we describe how SITDRM incorporates the P3P language to provide a consumer-centered privacy protection system.
Abstract:
Tzeng et al. proposed a new threshold multi-proxy multi-signature scheme with threshold verification. In their scheme, a subset of original signers authorises a designated proxy group to sign on behalf of the original group. A message m has to be signed by a subset of proxy signers who can represent the proxy group. Then, the proxy signature is sent to the verifier group. A subset of verifiers in the verifier group can likewise represent the group to authenticate the proxy signature. Subsequently, two improved schemes were proposed to eliminate the security weaknesses of Tzeng et al.’s scheme. In this paper, we point out security weaknesses in all three schemes and propose a novel threshold multi-proxy multi-signature scheme with threshold verification.
Abstract:
A strong designated verifier signature scheme makes it possible for a signer to convince a designated verifier that she has signed a message in such a way that the designated verifier cannot transfer the signature to a third party, and no third party can even verify the validity of a designated verifier signature. We show that anyone who intercepts one signature can verify subsequent signatures in the Zhang–Mao ID-based designated verifier signature scheme and the Lal–Verma ID-based designated verifier proxy signature scheme. We propose a new and efficient ID-based designated verifier signature scheme that is strong and unforgeable. As a direct corollary, we also obtain a new and efficient ID-based designated verifier proxy signature scheme.