882 results for Queensland Criminal Code
Abstract:
This thesis explores how governance networks prioritise and engage with their stakeholders, by studying three exemplars of “Regional Road Group” governance networks in Queensland, Australia. In the context of managing regionally significant road works programs, stakeholder prioritisation is a complex activity which is unlikely to influence interactions with stakeholders outside of the network. However, stakeholder priority is more likely to influence stakeholder interactions within the networks themselves. Both stakeholder prioritisation and engagement are strongly influenced by the way the networks are managed, in particular by network operating rules and continuing access to resources.
Abstract:
Purpose - The paper aims to improve consumer awareness of the complexities of community living. It does this by clarifying how living in a managed community differs from a ‘traditional’ neighbourhood, and by identifying matters that can become disputes. Design/methodology/approach - The paper builds on research by other authors into strata scheme disputes by examining recent Queensland cases. Findings - Many disputes appear to result from a lack of understanding of the complexities of community living. Matters that should be easily resolved are therefore escalated to formal disputes. Research limitations/implications - The paper considers law and cases from Queensland. The types of matters considered, however, are relevant for any managed community, and the research is therefore relevant for all jurisdictions. It will be of particular interest to jurisdictions looking to boost living density by increasing the development of managed communities. Practical implications - The research will assist consumer transactions by providing guidance on the matters to be considered prior to moving into a managed community. More informed decision making by prospective residents will reduce the likelihood of disputes arising. Originality/value - The paper is an up-to-date consideration of the issues arising from community living. It highlights the benefits of increased consumer awareness of the complexities of community living and the potential for consumer education to reduce the number of disputes.
Abstract:
The coal industry in Queensland operates in a very complex regulatory environment, with a matrix of Federal and State laws covering the environment, health and safety, taxation and royalties, tenure, and development approvals. In 2012 the Queensland Government recognised the validity of certain industry concerns and passed two Acts: the Environmental Protection (Greentape Reduction) Amendment Act 2012 (the Greentape Act) and the Mines Legislation (Streamlining) Amendment Act 2012 (the Streamlining Act). Other changes are foreshadowed in relation to overlapping tenure and in the development of common resources legislation. Accordingly, a great deal of regulatory activity and change has occurred or is on the horizon. This article focuses on these regulatory changes and foreshadows other areas requiring consideration. It commences with a consideration of the changes that have already occurred, examines those regulatory amendments that are on the drawing board, and concludes with suggestions for further interventions and amendments that have the potential to enhance the efficiency and effectiveness of the legislative framework in which coal mining is conducted.
Abstract:
This paper examines patterns of political activity and campaigning on Twitter in the context of the 2012 election in the Australian state of Queensland. Social media have been a visible component of political campaigning in Australia at least since the 2007 federal election, with Twitter, in particular, rising to greater prominence in the 2010 federal election. At state level, however, they have so far remained comparatively less important. In this paper, we track uses of Twitter in the Queensland campaign from its unofficial start in February through to the election day of 24 March 2012. We examine both the overall patterns of activity around the hashtag #qldvotes and specific interactions between politicians and other users, tracked by following some 80 Twitter accounts of sitting members of parliament and alternative candidates. This analysis provides new insights into the different approaches to social media campaigning embraced by specific candidates and party organisations, as well as an indication of the relative importance, at present, of social media activities for state-level election campaigns.
Abstract:
Using historical narrative and extensive archival research, this thesis portrays the story of the twentieth-century Queensland Rural Schools. The initiative started at Nambour Primary School in 1917 and extended over the next four decades to encompass thirty primary schools that functioned as centralised institutions training children in agricultural science, domestic science and manual trades. The Rural Schools formed the foundation of a systemised approach to agricultural education intended to facilitate the State’s closer settlement ideology. Their purpose was to mitigate urbanisation, circumvent foreign incursion and increase Queensland’s productivity by turning boys into farmers, or the tradesmen required to support them, and girls into the homemakers that these farmers needed as wives and mothers for the next generation. Effectively, Queensland took rural boys and girls and created a new yeomanry to aid the State’s development.
Abstract:
Many software applications extend their functionality by dynamically loading executable components into their allocated address space. Such components, exemplified by browser plugins and other software add-ons, not only enable reusability, but also promote programming simplicity, as they reside in the same address space as their host application, supporting easy sharing of complex data structures and pointers. However, such components are also often of unknown provenance and quality and may be riddled with accidental bugs or, in some cases, deliberately malicious code. Statistics show that such component failures account for a high percentage of software crashes and vulnerabilities. Enabling isolation of such fine-grained components is therefore necessary to increase the stability, security and resilience of computer programs. This thesis addresses this issue by showing how host applications can create isolation domains for individual components, while preserving the benefits of a single address space, via a new architecture for software isolation called LibVM. Towards this end, we define a specification which outlines the functional requirements for LibVM, identify the conditions under which these functional requirements can be met, define an abstract Application Programming Interface (API) that encompasses the general problem of isolating shared libraries, thus separating policy from mechanism, and prove its practicality with two concrete implementations based on hardware virtualization and system call interpositioning, respectively. The results demonstrate that hardware isolation minimises the difficulties encountered with software based approaches, while also reducing the size of the trusted computing base, thus increasing confidence in the solution’s correctness. This thesis concludes that, not only is it feasible to create such isolation domains for individual components, but that it should also be a fundamental operating system supported abstraction, which would lead to more stable and secure applications.
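The abstract API referred to above is not reproduced in this abstract, so the following Python sketch is purely illustrative: it shows what a policy/mechanism-separated interface for loading and calling an untrusted shared component inside an isolation domain might look like. All names and signatures (IsolationDomain, create_domain, and their parameters) are hypothetical and are not the LibVM API.

```python
# Hypothetical sketch of an abstract isolation API for dynamically loaded
# components, in the spirit of the policy/mechanism separation described
# above. All names are invented for illustration and are NOT the LibVM API.
from abc import ABC, abstractmethod
from typing import Any, Sequence

class IsolationDomain(ABC):
    """One isolation domain per untrusted component (plugin, codec, ...)."""

    @abstractmethod
    def load(self, library_path: str) -> None:
        """Load the component into the domain rather than directly into the
        host's address space."""

    @abstractmethod
    def call(self, symbol: str, *args: Any) -> Any:
        """Invoke an exported function; arguments are marshalled across the
        isolation boundary so raw host pointers are never exposed."""

    @abstractmethod
    def close(self) -> None:
        """Tear down the domain, reclaiming its memory and handles."""

def create_domain(mechanism: str,
                  shared_regions: Sequence[tuple[int, int]] = (),
                  allowed_syscalls: Sequence[str] = ()) -> IsolationDomain:
    """Policy (which memory regions are shared, which system calls are
    permitted) is expressed here; the mechanism backend (e.g. hardware
    virtualization or system call interposition, the two implementations
    described in the thesis) would be selected independently."""
    raise NotImplementedError("mechanism backends are out of scope for this sketch")
```

The point of the sketch is only the shape of the interface: the host expresses *what* may be shared and invoked (policy), while the choice of enforcement mechanism remains a separate, swappable decision.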
Abstract:
Authenticated Encryption (AE) is the cryptographic process of providing simultaneous confidentiality and integrity protection to messages. This approach is more efficient than a two-step process which provides confidentiality by encrypting the message and, in a separate pass, provides integrity protection by generating a Message Authentication Code (MAC). AE using symmetric ciphers can be provided either by stream ciphers with built-in authentication mechanisms or by block ciphers using appropriate modes of operation. However, stream ciphers have the potential for higher performance and a smaller footprint in hardware and/or software than block ciphers. This property makes stream ciphers suitable for resource-constrained environments, where storage and computational power are limited. There have been several recent stream cipher proposals that claim to provide AE. These ciphers can be analysed using existing techniques that consider confidentiality or integrity separately; however, there is currently no framework for the analysis of AE stream ciphers that considers these two properties simultaneously. This thesis introduces a novel framework for the analysis of AE using stream cipher algorithms. It analyses the mechanisms for providing confidentiality and for providing integrity in AE algorithms using stream ciphers, with a greater emphasis on the analysis of the integrity mechanisms, as there is little in the public literature on this in the context of authenticated encryption. The thesis has four main contributions, as follows.

The first contribution is the design of a framework that can be used to classify AE stream ciphers based on three characteristics. The first classification applies Bellare and Namprempre's work on the order in which the encryption and authentication processes take place. The second classification is based on the method used for accumulating the input message (either directly or indirectly) into the internal states of the cipher to generate a MAC. The third classification is based on whether the sequence used to provide encryption and authentication is generated using a single key and initial vector, or two keys and two initial vectors.

The second contribution is the application of an existing algebraic method to analyse the confidentiality algorithms of two AE stream ciphers, namely SSS and ZUC. The algebraic method is based on considering the nonlinear filter (NLF) of these ciphers as a combiner with memory. This method enables us to construct equations for the NLF that relate the inputs, outputs and memory of the combiner to the output keystream. We show that both of these ciphers are secure against this type of algebraic attack. We conclude that using a key-dependent S-box twice in the NLF, and using two different S-boxes in the NLF of ZUC, prevents this type of algebraic attack.

The third contribution is a new general matrix-based model for MAC generation where the input message is injected directly into the internal state. This model describes the accumulation process when the input message is injected directly into the internal state of a nonlinear filter generator. We show that three recently proposed AE stream ciphers can be considered as instances of this model, namely SSS, NLSv2 and SOBER-128. Our model is more general than previous investigations into direct injection. Possible forgery attacks against this model are investigated. It is shown that using a nonlinear filter in the accumulation process of the input message, when either the input message or the initial state of the register is unknown, prevents forgery attacks based on collisions.

The last contribution is a new general matrix-based model for MAC generation where the input message is injected indirectly into the internal state. This model uses the input message as a controller to accumulate a keystream sequence into an accumulation register. We show that three current AE stream ciphers can be considered as instances of this model, namely ZUC, Grain-128a and Sfinks. We establish the conditions under which the model is susceptible to forgery and side-channel attacks.
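As a concrete illustration of the single-pass goal described above (not of the specific ciphers analysed in the thesis, such as SSS, ZUC, NLSv2, SOBER-128, Grain-128a or Sfinks), the following Python sketch contrasts one-pass authenticated encryption with a stream cipher (ChaCha20-Poly1305, via the third-party cryptography package) against the two-step encrypt-then-MAC alternative.

```python
# Illustrative only: contrasts one-pass authenticated encryption with a
# stream cipher against a separate encrypt-then-MAC construction.
# Requires the third-party 'cryptography' package.
import hmac, hashlib, os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

message = b"meter reading: 42"
aad = b"header"            # associated data: authenticated but not encrypted

# One-pass AE with a stream cipher: confidentiality and integrity together.
ae_key = ChaCha20Poly1305.generate_key()
nonce = os.urandom(12)
aead = ChaCha20Poly1305(ae_key)
ciphertext = aead.encrypt(nonce, message, aad)          # ciphertext || tag
assert aead.decrypt(nonce, ciphertext, aad) == message  # raises if tampered

# Two-step alternative: encrypt, then MAC the ciphertext in a separate pass.
enc_key, mac_key = os.urandom(32), os.urandom(32)
nonce2 = os.urandom(16)
encryptor = Cipher(algorithms.ChaCha20(enc_key, nonce2), mode=None).encryptor()
ct = encryptor.update(message)
tag = hmac.new(mac_key, nonce2 + ct, hashlib.sha256).digest()
```

The second half corresponds to the Encrypt-then-MAC ordering in Bellare and Namprempre's classification; the AE construction performs the equivalent work in a single keyed pass.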
Abstract:
Current literature warns organisations about a global ageing phenomenon. Workplace ageing is causing a diminishing labour pool, which has consequences for a sustainable workforce in the future. This phenomenon continues to impact on local government councils in Australia. Australia has one of the world’s most rapidly ageing populations, and there is evidence that Australian local government councils are already experiencing an unsustainable workforce. Consequently, this research program investigated the role of older workers in the Queensland local government workplace, and how they can be enabled to extend their working lives towards transitional employment and a sustainable workforce in the future. Transitional Employment (TE) is intended as a strategy for enabling individuals to have greater control over their employment options and their employability during the period leading to their final exit from the workforce. There was no evidence of corporate support for older workers in Queensland local government councils other than tokenistic government campaigns encouraging organisations to "better value their older workers" (Queensland Government, 2007d, p. 6). TE is investigated as a possible intervention for older workers in the future. The international and national literature review reflected a range of matters impacting on current older workers in the workforce, and barriers preventing them from accessing services towards extending their employment beyond the traditional retirement age (60 years) as defined by the Australian Government: the age at which individuals can access their superannuation. Learning and development services were identified as one of those barriers. There was little evidence of investment in, or consistent approaches to, supporting older workers by organisations. Learning and development services appeared at best to be ad hoc and reactive to corporate productivity and outputs, with little recognition of the ageing phenomenon (OECD, 2006, p. 23) and looming skills and labour shortages (ALGA, 2006, p. 19). Themes from the literature review led to three key research questions: 1. What are the current local government workforce issues impacting on skills and labour retention? 2. What are the perceptions about the current workplace environment? 3. What are the expectations about learning and development towards extending the employability of older workers within the local government sector? The research questions were explored through three qualitative empirical studies, using some numerical data for reporting and comparative analysis. Empirical Study One investigated common themes for accessing transitional employment and comprised two phases. The literature review and Study One data analysis enabled the construction of an initial Transitional Employment Model (TEM) incorporating the most frequent themes. Empirical Study Two comprised focus groups to further consider those themes. This led to the identification of the issues impacting most on older workers' access to learning and development, and to a revised TEM. Findings showed majority support for transitional employment as a strategy for supporting older workers to work beyond their traditional retirement age. Those findings are presented as significant issues impacting on access to transitional employment within the final three-dimensional TEM. The model is intended as a guide for local government councils in responding to an ageing workforce in the future.
This study argued for increased and improved corporate support, particularly for learning and development services for older workers. Such support will enable older workers to maintain their employability and extend their working lives, contributing to a sustainable workforce in the future.
Abstract:
The current discourse surrounding victims of online fraud is heavily premised on an individual notion of greed. The strength of this discourse permeates the thinking of those who have not experienced this type of crime, as well as victims themselves. The current discourse also manifests itself in theories of victim precipitation, which again assign the locus of blame to individuals for their actions in an offence. While these typologies and categorisations of victims have been critiqued as “victim blaming” in other fields, this has not occurred with regard to online fraud victims, where victim-focused ideas of responsibility for the offence continue to dominate. This paper illustrates the nature and extent of the greed discourse and argues that it forms part of a wider construction of online fraud that sees responsibility for victimisation lie with the victims themselves and their actions. It argues that the current discourse does not take into account the level of deception and the targeting of vulnerability employed by the offender in perpetrating this type of crime. It concludes by advocating the need to further examine and challenge this discourse, especially with regard to its potential impact on victims’ access to support services and the wider criminal justice system.
Abstract:
Robust hashing is an emerging field that can be used to hash certain data types in applications unsuitable for traditional cryptographic hashing methods. Traditional hashing functions have been used extensively for data/message integrity, data/message authentication, efficient file identification and password verification. These applications are possible because the hashing process is compressive, allowing for efficient comparisons in the hash domain, but non-invertible, meaning hashes can be used without revealing the original data. These techniques were developed for deterministic (non-changing) inputs such as files and passwords. For such data types a one-bit or one-character change can be significant, and as a result the hashing process is sensitive to any change in the input. Unfortunately, there are certain applications where input data are not perfectly deterministic and minor changes cannot be avoided. Digital images and biometric features are two types of data where such changes exist but do not alter the meaning or appearance of the input. For such data types cryptographic hash functions cannot be usefully applied.

In light of this, robust hashing has been developed as an alternative to cryptographic hashing and is designed to be robust to minor changes in the input. Although similar in name, robust hashing is fundamentally different from cryptographic hashing. Current robust hashing techniques are based not on cryptographic methods but on pattern recognition techniques. Modern robust hashing algorithms consist of feature extraction, followed by a randomization stage that introduces non-invertibility and compression, followed by quantization and binary encoding to produce a binary hash output. In order to preserve the robustness of the extracted features, most randomization methods are linear, and this is detrimental to the security properties required of hash functions. Furthermore, the quantization and encoding stages used to binarize real-valued features require the learning of appropriate quantization thresholds. How these thresholds are learnt has an important effect on hashing accuracy, and the mere presence of such thresholds is a source of information leakage that can reduce hashing security.

This dissertation outlines a systematic investigation of the quantization and encoding stages of robust hash functions. While existing literature has focused on the importance of the quantization scheme, this research is the first to emphasise the importance of quantizer training for both hashing accuracy and hashing security. The quantizer training process is presented in a statistical framework which allows a theoretical analysis of the effects of quantizer training on hashing performance. This is experimentally verified using a number of baseline robust image hashing algorithms over a large database of real-world images. This dissertation also proposes a new randomization method for robust image hashing based on Higher Order Spectra (HOS) and Radon projections. The method is non-linear, an essential requirement for non-invertibility, and is designed to produce features more suited to quantization and encoding. The system can operate without the need for quantizer training, is more easily encoded, and displays improved hashing performance when compared to existing robust image hashing algorithms. The dissertation also shows how the HOS method can be adapted to work with biometric features obtained from 2D and 3D face images.
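As a toy illustration of the generic pipeline described above (features, keyed randomization, quantization, binary encoding), the following Python sketch hashes a grey-level image with block-mean features and a keyed linear projection. It is not the HOS/Radon method proposed in the dissertation, and the sign/median quantizer stands in for the trained thresholds whose effect the dissertation analyses.

```python
# Minimal sketch of a generic robust-image-hash pipeline:
# features -> keyed random projection -> quantization -> binary encoding.
# Toy illustration only; not the HOS/Radon scheme proposed in the thesis.
import numpy as np

def robust_hash(image, key, n_bits=64):
    rng = np.random.default_rng(key)           # secret key seeds the projection
    # 1. Feature extraction (toy): coarse 8x8 block means of the image.
    blocks = image.reshape(8, image.shape[0] // 8, 8, image.shape[1] // 8)
    features = blocks.mean(axis=(1, 3)).ravel()             # 64 features
    # 2. Randomization: keyed linear projection (linear, hence the security
    #    concerns discussed above).
    projection = rng.standard_normal((n_bits, features.size))
    projected = projection @ features
    # 3. Quantization and encoding: one bit per coefficient via a threshold.
    #    Real schemes learn these thresholds; that training is what the
    #    dissertation studies.
    return (projected > np.median(projected)).astype(np.uint8)

# Two perceptually similar images should yield hashes at small Hamming distance.
img = np.random.default_rng(0).random((64, 64))
noisy = img + 0.01 * np.random.default_rng(1).standard_normal(img.shape)
h1, h2 = robust_hash(img, key=42), robust_hash(noisy, key=42)
print("Hamming distance:", int(np.sum(h1 != h2)))
```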
Abstract:
The feral pig, Sus scrofa, is a widespread and abundant invasive species in Australia. Feral pigs pose a significant threat to the environment, agricultural industry and human health, and in far north Queensland they endanger the World Heritage values of the Wet Tropics. Historical records document the first introduction of domestic pigs into Australia by European settlers in 1788, with subsequent introductions from Asia from 1827 onwards. Since this time, domestic pigs have been accidentally and deliberately released into the wild, and significant feral pig populations have become established, resulting in the declaration of this species as a class 2 pest in Queensland. The overall objective of this study was to assess the population genetic structure of feral pigs in far north Queensland, in particular to enable the delineation of demographically independent management units. The identification of ecologically meaningful management units using molecular techniques can assist in targeting feral pig control to bring about effective long-term management. Molecular genetic analysis was undertaken on 434 feral pigs from 35 localities between Tully and Innisfail. Seven polymorphic and unlinked microsatellite loci were screened, and fixation indices (FST and analogues) and Bayesian clustering methods were used to identify population structure and management units in the study area. The hyper-variable mitochondrial control region (D-loop) of 35 feral pigs was also sequenced to identify pig ancestry. Three management units were identified in the study, at a scale of 25 to 35 km. Even with the strong pattern of genetic structure identified in the study area, some evidence of long-distance dispersal and/or translocation was found, as a small number of individuals exhibited ancestry from a management unit other than the one in which they were sampled. Overall, gene flow in the study area was found to be influenced by environmental features such as topography and land use, but no distinct or obvious natural or anthropogenic geographic barriers were identified. Furthermore, strong evidence was found for non-random mating between pigs of European and Asian breeds, indicating that feral pig ancestry influences their population genetic structure. Phylogenetic analysis revealed two distinct mitochondrial DNA clades, representing Asian domestic pig breeds and European breeds. A significant finding was that pigs of Asian origin living in Innisfail and south Tully were not mating randomly with European-breed pigs populating the nearby Mission Beach area. Feral pig control should be implemented in each of the management units identified in this study and should be coordinated across properties within each management unit to prevent re-colonisation from adjacent localities. The adjacent rainforest and National Park estates, as well as the rainforest-crop boundary, should be included in a simultaneous control operation for greater success.
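For readers unfamiliar with the fixation index mentioned above, the textbook single-locus form of Wright's FST can be computed as below. This is only an illustration of the quantity; it is not the multi-locus estimator or software actually used in the study, and the allele frequencies and sample sizes are invented.

```python
# Illustrative computation of Wright's F_ST for one bi-allelic locus:
# F_ST = (H_T - H_S) / H_T, where H_T is the expected heterozygosity of the
# pooled population and H_S the (weighted) mean within-subpopulation
# heterozygosity. Textbook form only; not the estimator used in the thesis.
import numpy as np

def fst_biallelic(allele_freqs, sizes):
    """allele_freqs: frequency of one allele in each subpopulation."""
    p = np.asarray(allele_freqs, dtype=float)
    w = np.asarray(sizes, dtype=float) / np.sum(sizes)   # subpopulation weights
    h_s = np.sum(w * 2 * p * (1 - p))                    # mean within-group heterozygosity
    p_bar = np.sum(w * p)                                # pooled allele frequency
    h_t = 2 * p_bar * (1 - p_bar)                        # total heterozygosity
    return (h_t - h_s) / h_t

# Three hypothetical management units with diverged allele frequencies.
print(round(fst_biallelic([0.1, 0.5, 0.8], sizes=[150, 160, 124]), 3))
```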
Abstract:
Introduction: The accurate identification of tissue electron densities is of great importance for Monte Carlo (MC) dose calculations. When converting patient CT data into a voxelised format suitable for MC simulations, however, it is common to simplify the assignment of electron densities so that the complex tissues existing in the human body are categorised into a few basic types. This study examines the effects that the assignment of tissue types and the calculation of densities can have on the results of MC simulations, for the particular case of a Siemens Sensation 4 CT scanner located in a radiotherapy centre where QA measurements are routinely made using 11 tissue types (plus air).

Methods: DOSXYZnrc phantoms are generated from CT data, using the CTCREATE user code, with the relationship between Hounsfield units (HU) and density determined via linear interpolation between a series of specified points on the ‘CT-density ramp’ (see Figure 1(a)). Tissue types are assigned according to HU ranges. Each voxel in the DOSXYZnrc phantom therefore has an electron density (electrons/cm³) defined by the product of the mass density (from the HU conversion) and the intrinsic electron density (electrons/gram) (from the material assignment) in that voxel. In this study, we consider the problems of density conversion and material identification separately: the CT-density ramp is simplified by decreasing the number of points which define it from 12 down to 8, 3 and 2; and the material-type assignment is varied by defining the materials which comprise our test phantom (a Supertech head) as two tissues and bone, two plastics and bone, water only and (as an extreme case) lead only. The effect of these parameters on radiological thickness maps derived from simulated portal images is investigated.

Results & Discussion: Increasing the degree of simplification of the CT-density ramp has an increasing effect on the radiological thickness calculated for the Supertech head phantom. For instance, defining the CT-density ramp using 8 points, instead of 12, results in a maximum radiological thickness change of 0.2 cm, whereas defining the CT-density ramp using only 2 points results in a maximum radiological thickness change of 11.2 cm. Changing the definition of the materials comprising the phantom between water, plastic and tissue results in millimetre-scale changes to the resulting radiological thickness. When the entire phantom is defined as lead, this alteration changes the calculated radiological thickness by a maximum of 9.7 cm. Evidently, the simplification of the CT-density ramp has a greater effect on the resulting radiological thickness map than does the alteration of the assignment of tissue types.

Conclusions: It is possible to alter the definitions of the tissue types comprising the phantom (or patient) without substantially altering the results of simulated portal images. However, these images are very sensitive to the accurate identification of the HU-density relationship. When converting data from a patient’s CT into an MC simulation phantom, therefore, all possible care should be taken to accurately reproduce the conversion between HU and mass density for the specific CT scanner used.

Acknowledgements: This work is funded by the NHMRC, through a project grant, and supported by the Queensland University of Technology (QUT) and the Royal Brisbane and Women's Hospital (RBWH), Brisbane, Australia. The authors are grateful to the staff of the RBWH, especially Darren Cassidy, for assistance in obtaining the phantom CT data used in this study. The authors also wish to thank Cathy Hargrave, of QUT, for assistance in formatting the CT data using the Pinnacle TPS. Computational resources and services used in this work were provided by the HPC and Research Support Group, QUT, Brisbane, Australia.
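The HU-to-electron-density conversion described in the Methods above follows a simple recipe that can be sketched in a few lines of Python. The ramp points and material table below are invented for illustration and are not the calibration of the Siemens Sensation 4 scanner used in the study.

```python
# Sketch of the HU-to-density conversion described above: mass density from
# linear interpolation along a 'CT-density ramp', material assignment from HU
# ranges, and voxel electron density as the product of mass density and the
# material's intrinsic electron density. All numbers are illustrative only.
import numpy as np

# Example CT-density ramp: (HU, mass density in g/cm^3) control points.
RAMP_HU      = np.array([-1000, -100,   0,   60, 1000, 3000])
RAMP_DENSITY = np.array([0.001, 0.93, 1.00, 1.06, 1.60, 2.80])

# Example material assignment: (upper HU bound, name, electrons per gram).
MATERIALS = [(-950, "air",    3.0e23),
             (  60, "tissue", 3.3e23),
             (3000, "bone",   3.2e23)]

def voxel_electron_density(hu):
    """Return (material, electron density in electrons/cm^3) for one voxel."""
    mass_density = np.interp(hu, RAMP_HU, RAMP_DENSITY)   # CT-density ramp
    for upper, name, e_per_gram in MATERIALS:              # HU-range lookup
        if hu <= upper:
            return name, mass_density * e_per_gram
    return MATERIALS[-1][1], mass_density * MATERIALS[-1][2]

print(voxel_electron_density(40))   # a soft-tissue-like voxel
```

Simplifying the ramp corresponds to removing entries from RAMP_HU/RAMP_DENSITY; coarsening the material assignment corresponds to collapsing rows of MATERIALS, which is why the study can vary the two independently.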
Abstract:
Introduction: The use of amorphous-silicon electronic portal imaging devices (a-Si EPIDs) for dosimetry is complicated by the effects of scattered radiation. In photon radiotherapy, the primary signal at the detector can be accompanied by photons scattered from linear accelerator components, detector materials, intervening air, treatment room surfaces (floor, walls, etc.) and from the patient/phantom being irradiated. Consequently, EPID measurements which presume to take scatter into account are highly sensitive to the identification of these contributions. One example of this susceptibility is the process of calibrating an EPID for use as a gauge of (radiological) thickness, where specific allowance must be made for the effect of phantom scatter on the intensity of radiation measured through different thicknesses of phantom. This is usually done via a theoretical calculation which assumes that phantom scatter is linearly related to thickness and field size. We have, however, undertaken a more detailed study of the scattering effects of fields of different dimensions when applied to phantoms of various thicknesses, in order to derive scatter-to-primary ratios (SPRs) directly from simulation results. This allows us to make a more accurate calibration of the EPID and to assess the appositeness of the theoretical SPR calculations.

Methods: This study uses a full MC model of the entire linac-phantom-detector system, simulated using the EGSnrc/BEAMnrc codes. The Elekta linac and EPID are modelled according to specifications from the manufacturer, and the intervening phantoms are modelled as rectilinear blocks of water or plastic, with their densities set to a range of physically realistic and unrealistic values. Transmissions through these various phantoms are calculated using the dose detected in the model EPID and used in an evaluation of the field-size dependence of SPR, in different media, applying a method suggested for experimental systems by Swindell and Evans [1]. These results are compared firstly with SPRs calculated using the theoretical, linear relationship between SPR and irradiated volume, and secondly with SPRs evaluated from our own experimental data. An alternative evaluation of the SPR in each simulated system is also made by modifying the BEAMnrc user code READPHSP to identify and count those particles in a given plane of the system that have undergone a scattering event. In addition to these simulations, which are designed to closely replicate the experimental setup, we also used MC models to examine the effects of varying the setup in experimentally challenging ways (changing the size of the air gap between the phantom and the EPID, changing the longitudinal position of the EPID itself). Experimental measurements used in this study were made using an Elekta Precise linear accelerator, operating at 6 MV, with an Elekta iView GT a-Si EPID.

Results and Discussion: 1. Comparison with theory: With the Elekta iView EPID fixed at 160 cm from the photon source, the phantoms, when positioned isocentrically, are located 41 to 55 cm from the surface of the panel. At this geometry, a close but imperfect agreement (differing by up to 5%) can be identified between the results of the simulations and the theoretical calculations. However, this agreement can be totally disrupted by shifting the phantom out of the isocentric position. Evidently, the allowance made for source-phantom-detector geometry by the theoretical expression for SPR is inadequate to describe the effect that phantom proximity can have on measurements made using an (infamously low-energy-sensitive) a-Si EPID. 2. Comparison with experiment: For various square field sizes and across the range of phantom thicknesses, there is good agreement between simulation data and experimental measurements of the transmissions and the derived values of the primary intensities. However, the values of SPR obtained through these simulations and measurements seem to be much more sensitive to slight differences between the simulated and real systems, leading to difficulties in producing a simulated system which adequately replicates the experimental data. (For instance, small changes to simulated phantom density make large differences to the resulting SPR.) 3. Comparison with direct calculation: By developing a method for directly counting the number of scattered particles reaching the detector after passing through the various isocentric phantom thicknesses, we show that the experimental method discussed above provides a good measure of the actual degree of scattering produced by the phantom. This calculation also permits the analysis of the scattering sources/sinks within the linac and EPID, as well as the phantom and intervening air.

Conclusions: This work challenges the assumption that scatter to and within an EPID can be accounted for using a simple, linear model. The simulations discussed here are intended to contribute to a fuller understanding of the contribution of scattered radiation to the EPID images that are used in dosimetry calculations.

Acknowledgements: This work is funded by the NHMRC, through a project grant, and supported by the Queensland University of Technology (QUT) and the Royal Brisbane and Women's Hospital, Brisbane, Australia. The authors are also grateful to Elekta for the provision of manufacturing specifications which permitted the detailed simulation of their linear accelerators and amorphous-silicon electronic portal imaging devices. Computational resources and services used in this work were provided by the HPC and Research Support Group, QUT, Brisbane, Australia.
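For reference, the quantities discussed above can be written in a generic form; the linear phantom-scatter model in the last line, with a fitted constant k, is one reading of the "linearly related to thickness and field size" assumption being tested, not a result of the study.

```latex
% Total detector signal is primary plus scatter, I = P + S, so
\mathrm{SPR} = \frac{S}{P} = \frac{I - P}{P},
\qquad
P = \frac{I}{1 + \mathrm{SPR}}.
% The theoretical calibration assumes phantom scatter scales linearly with
% irradiated volume, i.e. with phantom thickness t and field area A:
\mathrm{SPR}_{\mathrm{theory}} \approx k \, t \, A,
% where k is a fitted proportionality constant (an assumption of this sketch).
```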