344 results for Computer System Management
Abstract:
This chapter examines the ways in which notions of 'a good citizen' and 'civic virtue' have been conceptualized in the new Civics and Citizenship Curriculum for students in Years 3–10 in Australia. It argues that whilst Civics and Citizenship Education (CCE) has, over time and in various ways, been recognized as a significant aspect of Australian education, only recently has attention been given to relational and multidimensional conceptions of citizenship. Considerations of 'morality', 'a good citizen' and 'civic virtue' offer possibilities to engage with multidimensional notions of citizenship, which acknowledge that citizenship perspectives can be affected by personal, social, spatial and temporal situations (Cogan & Derricott, 2000). The current statement on national goals for schooling in Australia, which informed the development of CCE, the Melbourne Declaration (MCEETYA, 2008), called for young Australians to be educated to "act with moral and ethical integrity" and be "committed to national values of democracy, equity and justice, and participate in Australia's civic life" (MCEETYA, 2008, pp. 8–9). The chapter claims that this maximal emphasis (McLaughlin, 1992), based on active, values-based and interpretive approaches to democratic citizenship which encourage debate and participation in civil society, was evident in the new Civics and Citizenship Curriculum. However, it contends that the recommendations of the recent Review of the Australian Curriculum: Final report (Australian Government, 2014a, 2014b) will now limit CCE's potential to deliver the sort of active and informed citizenship heralded by the Melbourne Declaration. This is because the Review advocates a content-focused, minimal (McLaughlin, 1992) emphasis on civic knowledge, with diminished attention to citizenship participation and processes. In doing so, the Review foregrounds conceptions of the 'good citizen' in more limited terms of responsibility, obligations and compliance with the status quo.
Abstract:
Business Process Management (BPM) as a research field integrates different perspectives from the disciplines of computer science, management science and information systems research. Its evolution has been shaped by the corresponding conference series, the International Conference on Business Process Management (BPM conference). As in other academic disciplines, there is an ongoing debate about the identity, quality and maturity of the BPM field. In this paper, we review and summarize the major findings of a larger study that will be published in the Business & Information Systems Engineering journal in 2016. In that study, we investigate the identity and progress of the BPM conference research community through an analysis of the BPM conference proceedings. Based on our findings from this analysis, we formulate recommendations to further develop the conference community in terms of methodological advancement, quality, impact and progression.
Abstract:
A Delay Tolerant Network (DTN) is a dynamic, fragmented, and ephemeral network formed by a large number of highly mobile, autonomous nodes, which requires distributed and self-organised approaches to trust management. Revocation and replacement of security credentials under adversarial influence, while preserving trust in the entity, is still an open problem; existing methods are mostly limited to the detection and removal of malicious nodes. This paper makes use of the mobility property to provide a distributed, self-organising, and scalable revocation and replacement scheme. The proposed scheme utilises the Leverage of Common Friends (LCF) trust system concepts to revoke compromised security credentials and replace them with new ones, whilst preserving the level of trust already established in the entity. The security and performance of the proposed scheme are evaluated using an experimental data set, in comparison with other schemes based around the LCF concept. Our extensive experimental results show that the proposed scheme distributes replacement credentials up to 35% faster and spreads spoofed credentials of strong collaborating adversaries up to 50% slower, without any significant increase in communication and storage overheads, when compared to other LCF-based schemes.
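The abstract does not give the scheme's details, so the following is only a minimal, hypothetical sketch of the Leverage-of-Common-Friends idea as described above: a node adopts a replacement credential for a peer only when enough mutual friends endorse it, so that trust already placed in the peer carries over to the new credential. All names and the threshold are illustrative assumptions, not the authors' protocol.

```python
# Hypothetical sketch of an LCF-style acceptance rule for replacement
# credentials. Illustrative only; the paper's actual scheme is more elaborate.

class Node:
    def __init__(self, name, friends):
        self.name = name
        self.friends = set(friends)   # peers this node already trusts
        self.credentials = {}         # peer name -> currently accepted credential

    def accept_replacement(self, peer, new_credential, endorsers, threshold=2):
        """Adopt `new_credential` for `peer` only if at least `threshold`
        common friends (our own friends among the endorsers) vouch for it."""
        common = self.friends & set(endorsers)
        if len(common) >= threshold:
            self.credentials[peer] = new_credential   # trust is preserved
            return True
        return False                                  # keep the old credential

# Node A adopts B's new key because common friends C and D both endorse it.
a = Node("A", friends={"B", "C", "D"})
print(a.accept_replacement("B", "key-v2", endorsers={"C", "D", "E"}))  # True
```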
Abstract:
We treat the security of group key exchange (GKE) in the universal composability (UC) framework. Analyzing GKE protocols in the UC framework naturally addresses attacks by malicious insiders. We define an ideal functionality for GKE that captures contributiveness in addition to other desired security goals. We show that an efficient two-round protocol securely realizes the proposed functionality in the random oracle model. As a result, we obtain the most efficient UC-secure contributory GKE protocol known.
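The paper's two-round protocol is not given in the abstract. For background, the classic Burmester–Desmedt construction is the textbook example of a two-round contributory GKE, sketched below with toy parameters; it is unauthenticated and insecure as written, and it is not the UC-secure protocol the abstract refers to.

```python
import random

# Toy Burmester-Desmedt two-round contributory group key exchange.
# Illustrative only: tiny prime, no authentication, no UC security.
p = 1019          # small prime, far too small for real use
g = 2             # toy generator choice
n = 4             # number of parties, arranged in a cycle

x = [random.randrange(1, p - 1) for _ in range(n)]   # secret exponents
# Round 1: every party i broadcasts z_i = g^{x_i} mod p
z = [pow(g, xi, p) for xi in x]
# Round 2: every party i broadcasts X_i = (z_{i+1} / z_{i-1})^{x_i} mod p
X = [pow(z[(i + 1) % n] * pow(z[(i - 1) % n], -1, p) % p, x[i], p)
     for i in range(n)]

def session_key(i):
    """Key as computed by party i: z_{i-1}^{n*x_i} * prod_j X_{i+j}^{n-1-j}.
    Every party contributes, and all parties derive the same value."""
    k = pow(z[(i - 1) % n], n * x[i], p)
    for j in range(n - 1):
        k = k * pow(X[(i + j) % n], n - 1 - j, p) % p
    return k

keys = [session_key(i) for i in range(n)]
assert len(set(keys)) == 1   # all parties agree on the group key
print(keys[0])
```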
Abstract:
A key exchange protocol allows a set of parties to agree upon a secret session key over a public network. Two-party key exchange (2PKE) protocols have been rigorously analyzed under various models considering different adversarial actions. However, the analysis of group key exchange (GKE) protocols has not been as extensive as that of 2PKE protocols. Particularly, the security attribute of key compromise impersonation (KCI) resilience has so far been ignored for the case of GKE protocols. We first model the security of GKE protocols addressing KCI attacks by both outsider and insider adversaries. We then show that a few existing protocols are not secure even against outsider KCI attacks. The attacks on these protocols demonstrate the necessity of considering KCI resilience for GKE protocols. Finally, we give a new proof of security for an existing GKE protocol under the revised model assuming random oracles.
Abstract:
The security of strong designated verifier (SDV) signature schemes has thus far been analyzed only in a two-user setting. We observe that security in a two-user setting does not necessarily imply the same in a multi-user setting for SDV signatures. Moreover, we show that existing security notions do not adequately model the security of SDV signatures even in a two-user setting. We then propose revised notions of security in a multi-user setting and show that no existing scheme satisfies these notions. A new SDV signature scheme is then presented and proven secure under the revised notions in the standard model. For the purpose of constructing the SDV signature scheme, we propose a one-pass key establishment protocol in the standard model, which is of independent interest in itself.
Abstract:
Providing support for reversible transformations as a basis for round-trip engineering is a significant challenge in model transformation research. While there are a number of current approaches, they require the underlying transformation to exhibit injective behaviour when reversing changes. This, however, does not serve all practical transformations well. In this paper, we present a novel approach to round-trip engineering that does not place restrictions on the nature of the underlying transformation. Based on abductive logic programming, it allows us to compute a set of legitimate source changes that equate to a given change to the target model. Encouraging results are derived from an initial prototype that supports most concepts of the Tefkat transformation language.
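As a loose illustration of the idea (not the paper's abductive-logic-programming machinery), reversal can be thought of as abducing source edits whose forward transformation reproduces the changed target. The brute-force sketch below, with a made-up transformation and edit vocabulary, enumerates candidate source changes that explain a target change.

```python
# Hypothetical sketch: "abduce" source-model edits that explain a target
# change, by searching over simple candidate edits and checking each one
# against the forward transformation.

def transform(source):
    """Toy forward transformation: class names -> sorted table names."""
    return sorted(name.lower() for name in source)

def candidate_edits(source, vocabulary):
    """Enumerate simple source edits: add or remove a single class."""
    for name in vocabulary:
        if name not in source:
            yield ("add", name, source | {name})
    for name in source:
        yield ("remove", name, source - {name})

def abduce(source, changed_target, vocabulary):
    """Return all single edits whose forward transformation yields the target."""
    return [(op, name) for op, name, s2 in candidate_edits(source, vocabulary)
            if transform(s2) == changed_target]

source = {"Customer", "Order"}
changed_target = ["customer", "invoice", "order"]   # a table appeared downstream
print(abduce(source, changed_target, vocabulary={"Customer", "Order", "Invoice"}))
# [('add', 'Invoice')] -- the source change that explains the target change
```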
Abstract:
Digital forensics relates to the investigation of a crime or other suspect behaviour using digital evidence. Previous work has dealt with the forensic reconstruction of computer-based activity on single hosts, but given the additional complexity involved in a distributed environment, a Web services-centric approach is required. A framework for this type of forensic examination needs to allow for the reconstruction of transactions spanning multiple hosts, platforms and applications. A tool implementing such an approach could be used by an investigator to identify scenarios of Web services being misused, exploited, or otherwise compromised. This information could be used to redesign Web services in order to mitigate identified risks. This paper explores the requirements of a framework for performing effective forensic examinations in a Web services environment. Such a framework will be necessary in order to develop forensic tools and techniques for use in service oriented architectures.
Abstract:
Security-critical communications devices must be evaluated to the highest possible standards before they can be deployed. This process includes tracing potential information flow through the device's electronic circuitry, for each of the device's operating modes. Increasingly, however, security functionality is being entrusted to embedded software running on microprocessors within such devices, so new strategies are needed for integrating information flow analyses of embedded program code with hardware analyses. Here we show how standard compiler principles can augment high-integrity security evaluations to allow seamless tracing of information flow through both the hardware and software of embedded systems. This is done by unifying input/output statements in embedded program execution paths with the hardware pins they access, and by associating significant software states with corresponding operating modes of the surrounding electronic circuitry.
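A minimal, hypothetical sketch of the kind of unification described above: each I/O statement in an embedded program is bound to a hardware pin, and a simple dataflow walk reports which output pins can be influenced by which input pins. The statement format and pin names are illustrative assumptions, not the evaluation method itself.

```python
# Hypothetical sketch: trace information flow from input pins to output pins
# through a straight-line embedded program. Statements are either
# ("read", var, pin), ("assign", var, [source_vars]) or ("write", pin, var).

program = [
    ("read",   "key",    "PIN_3"),           # read key material from a pin
    ("read",   "plain",  "PIN_4"),           # read plaintext input from a pin
    ("assign", "cipher", ["key", "plain"]),  # cipher depends on both inputs
    ("write",  "PIN_7",  "cipher"),          # transmit result on an output pin
]

def pin_flows(program):
    """Track, for each variable, the input pins that influence it, and
    report which input pins reach each output pin via a write."""
    influences = {}    # var -> set of input pins
    flows = []         # (set of input pins, output pin)
    for stmt in program:
        if stmt[0] == "read":
            _, var, pin = stmt
            influences[var] = {pin}
        elif stmt[0] == "assign":
            _, var, sources = stmt
            influences[var] = set().union(*(influences[s] for s in sources))
        elif stmt[0] == "write":
            _, pin, var = stmt
            flows.append((influences.get(var, set()), pin))
    return flows

for inputs, out_pin in pin_flows(program):
    print(f"{sorted(inputs)} -> {out_pin}")   # ['PIN_3', 'PIN_4'] -> PIN_7
```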
Abstract:
This appendix describes the Order Fulfillment process followed by a fictitious company named Genko Oil. The process is freely inspired by the VICS (Voluntary Inter-industry Commerce Solutions) reference model and provides a demonstration of YAWL's capabilities in modelling complex control-flow, data and resourcing requirements.
Abstract:
This chapter describes how the YAWL meta-model was extended to support the definition of variation points. These variation points can be used to describe different variants of a YAWL process model in a unified, configurable model. The model can then be configured to suit the needs of specific settings, e.g. for a new organization or project.
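As a rough illustration of the idea (not the actual YAWL meta-model extension), a variation point can be thought of as a marked element in the unified model that each configuration resolves, for instance by keeping or hiding a task. The model layout and decision labels below are invented for the example.

```python
# Hypothetical sketch of a configurable process model: tasks marked as
# variation points may be kept or hidden per configuration, yielding a
# specific process variant from one unified model.

unified_model = [
    ("receive_order", None),         # core task, not configurable
    ("credit_check",  "variation"),  # variation point
    ("ship_goods",    None),
    ("send_invoice",  "variation"),  # variation point
]

def configure(model, decisions):
    """Resolve variation points: 'keep' retains the task, 'hide' drops it."""
    configured = []
    for task, kind in model:
        if kind == "variation" and decisions.get(task) == "hide":
            continue                 # task removed in this variant
        configured.append(task)
    return configured

# A small organization that skips credit checks but keeps invoicing:
print(configure(unified_model, {"credit_check": "hide", "send_invoice": "keep"}))
# ['receive_order', 'ship_goods', 'send_invoice']
```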
Abstract:
Monitoring unused or dark IP addresses offers opportunities to extract useful information about both on-going and new attack patterns. In recent years, different techniques have been used to analyze such traffic, including sequential analysis, where a change in traffic behaviour, for example a change in mean, is used as an indication of malicious activity. Change points themselves, however, say little about the detected change; further data processing is necessary to extract useful information and to identify the exact cause of the change, which is difficult given the size and nature of the observed traffic. In this paper, we address the problem of analyzing a large volume of such traffic by correlating change points identified in different traffic parameters. The significance of the proposed technique is two-fold. Firstly, it automatically extracts information related to change points by correlating the change points detected across multiple traffic parameters. Secondly, it validates a detected change point through the simultaneous presence of another change point in a different parameter. Using a real network trace collected from unused IP addresses, we demonstrate that the proposed technique enables us not only to validate change points but also to extract useful information about their causes.
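A minimal sketch of the correlation step described above, under assumptions not in the abstract (a naive mean-shift detector and a fixed correlation window): change points found independently in two traffic parameters are matched when they occur close in time, which both cross-validates them and hints at a common cause.

```python
# Hypothetical sketch: detect change points in two traffic parameters with a
# naive mean-shift test, then correlate those occurring close in time.
# Detector, window and threshold are illustrative; the paper's method differs.

def change_points(series, window=5, threshold=15.0):
    """Flag index i when the mean of the next `window` samples departs from
    the mean of the previous `window` samples by more than `threshold`.
    A single shift typically flags a small cluster of adjacent indices."""
    points = []
    for i in range(window, len(series) - window):
        before = sum(series[i - window:i]) / window
        after = sum(series[i:i + window]) / window
        if abs(after - before) > threshold:
            points.append(i)
    return points

def correlate(points_a, points_b, max_gap=3):
    """Pair change points from two parameters within `max_gap` samples;
    a pair validates both points and links them to a common cause."""
    return [(a, b) for a in points_a for b in points_b if abs(a - b) <= max_gap]

packets_per_min = [10] * 20 + [40] * 20   # e.g. packet counts to dark IPs
unique_sources  = [3] * 21 + [25] * 19    # e.g. distinct source addresses
pairs = correlate(change_points(packets_per_min), change_points(unique_sources))
print(pairs)   # simultaneous shifts, e.g. the onset of a scanning campaign
```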
Abstract:
Two-stroke outboard boat engines using total-loss lubrication deposit a significant proportion of their lubricant and fuel directly into the water. The purpose of this work is to document the velocity and concentration field characteristics of a submerged swirling water jet emanating from a propeller, in order to provide information on its fundamental characteristics. Measurements of the velocity and concentration fields were performed in a turbulent jet generated by a model boat propeller (0.02 m diameter) operating at 1500 rpm and 3000 rpm. The measurements were carried out in the Zone of Established Flow, up to 50 propeller diameters downstream of the propeller. Both the mean axial velocity profile and the mean concentration profile showed self-similarity. Further, the standard deviation growth curve was linear. The effects of propeller speed and dye release location were also investigated.
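For context (standard round-jet theory from the textbook literature, not results of this study): self-similarity means the mean profiles collapse onto a single curve when scaled by their centreline values and a jet width that grows linearly downstream, consistent with the linear standard-deviation growth reported above.

```latex
% Standard self-similar scaling for a round turbulent jet (textbook form,
% shown for context only). The mean axial velocity collapses onto a Gaussian
% similarity profile when scaled by the centreline value \bar{u}_c(x):
\[
  \frac{\bar{u}(x,r)}{\bar{u}_c(x)} \;=\;
  \exp\!\left(-\frac{r^{2}}{2\,\sigma(x)^{2}}\right),
  \qquad
  \sigma(x) \;=\; k\,(x - x_{0}),
\]
% where r is the radial coordinate, x_0 a virtual origin, and k a spreading
% constant; the mean concentration profile takes the same Gaussian form.
```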
Abstract:
Tzeng et al. proposed a new threshold multi-proxy multi-signature scheme with threshold verification. In their scheme, a subset of original signers authenticates a designated proxy group to sign on behalf of the original group. A message m has to be signed by a subset of proxy signers who can represent the proxy group. Then, the proxy signature is sent to the verifier group. A subset of verifiers in the verifier group can likewise represent that group to authenticate the proxy signature. Subsequently, two improved schemes were proposed to eliminate the security leak of Tzeng et al.'s scheme. In this paper, we point out security leaks in all three schemes and further propose a novel threshold multi-proxy multi-signature scheme with threshold verification.
Abstract:
A strong designated verifier signature scheme makes it possible for a signer to convince a designated verifier that she has signed a message, in such a way that the designated verifier cannot transfer the signature to a third party, and no third party can even verify the validity of a designated verifier signature. We show that anyone who intercepts one signature can verify subsequent signatures in the Zhang-Mao ID-based designated verifier signature scheme and the Lal-Verma ID-based designated verifier proxy signature scheme. We then propose a new and efficient ID-based designated verifier signature scheme that is both strong and unforgeable. As a direct corollary, we also obtain a new and efficient ID-based designated verifier proxy signature scheme.