84 results for Irrigation schemes
Abstract:
Wound debridement refers to the removal of necrotic, devitalized, or contaminated tissue and/or foreign material to promote wound healing. Surgical debridement uses sharp instruments to cut dead tissue from a wound and is the quickest and most efficient method of debridement. A wound debridement simulator [1,2] can ensure that a medical trainee is competent before performing the procedure on a real patient. Irrigation is performed at different stages of debridement to remove debris and reduce the bacterial count by rinsing the wound. This paper presents a novel approach to realistic irrigation visualization based on texture representations of debris. The approach applies image processing techniques to a series of images that model the cleanliness of the wound; the active texture is generated and updated dynamically based on the irrigation state, location, and range. The presented results demonstrate that texture mapping and image processing techniques provide effective and efficient solutions for irrigation visualization in the wound debridement simulator.
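The abstract contains no implementation details, so the following minimal Python/NumPy sketch only illustrates the general idea of a dynamically updated cleanliness texture; the function and parameter names (`apply_irrigation`, `cleanliness`) and the linear falloff are assumptions made for illustration, not the paper's method.

```python
import numpy as np

def apply_irrigation(cleanliness, centre, radius, strength=0.2):
    """Update a per-texel cleanliness map after one irrigation step.

    cleanliness : 2D float array in [0, 1]; 1.0 means a fully clean texel
    centre      : (row, col) position of the irrigation stream on the texture
    radius      : irrigation range in texels
    strength    : fraction of the remaining debris rinsed away per step
    """
    rows, cols = np.indices(cleanliness.shape)
    dist = np.hypot(rows - centre[0], cols - centre[1])
    # Linear falloff: full effect at the centre, no effect beyond the radius.
    falloff = np.clip(1.0 - dist / radius, 0.0, 1.0)
    # Move each affected texel towards "clean" in proportion to the falloff.
    return cleanliness + strength * falloff * (1.0 - cleanliness)

# The updated map would then be used to blend between "dirty" and "clean"
# wound textures before the result is passed to the renderer.
debris_map = np.zeros((256, 256))                       # fully dirty wound
debris_map = apply_irrigation(debris_map, (128, 128), 40)
```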
Abstract:
Irrigation is known to stimulate soil microbial carbon and nitrogen turnover and potentially the emissions of nitrous oxide (N2O) and carbon dioxide (CO2). We conducted a study to evaluate the effect of three different irrigation intensities on soil N2O and CO2 fluxes and to determine whether irrigation management can be used to mitigate N2O emissions from irrigated cotton on black vertisols in South-Eastern Queensland, Australia. Fluxes were measured over the entire 2009/2010 cotton growing season with a fully automated chamber system that measured emissions on a sub-daily basis. Irrigation intensity had a significant effect on CO2 emissions: more frequent irrigation stimulated soil respiration, and seasonal CO2 fluxes ranged from 2.7 to 4.1 Mg C ha−1 for the treatments with the lowest and highest irrigation frequency, respectively. N2O emissions were episodic, with the highest emissions occurring when heavy rainfall or irrigation coincided with elevated soil mineral N levels; seasonal emissions ranged from 0.80 to 1.07 kg N2O-N ha−1 for the different treatments. Emission factors (EF = proportion of N fertilizer emitted as N2O) over the cotton cropping season, uncorrected for background emissions, ranged from 0.40 to 0.53% of total N applied for the different treatments. There was no significant effect of the different irrigation treatments on soil N2O fluxes because the highest emissions in all treatments followed heavy rainfall from a series of summer thunderstorms, which overrode the effect of the irrigation treatments. However, higher irrigation intensity increased the cotton yield and therefore reduced the N2O intensity (N2O emission per lint yield) of this cropping system. Our data suggest that there is only limited scope to reduce absolute N2O emissions through different irrigation intensities in irrigated cotton systems with summer-dominated rainfall. However, the significant impact of the irrigation treatments on the N2O intensity clearly shows that irrigation can be used to optimize the N2O intensity of such a system.
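For illustration only, the sketch below computes the two metrics reported above, the uncorrected emission factor and the N2O intensity; the 0.80 kg N2O-N ha−1 seasonal loss comes from the abstract, while the N application rate and lint yield used in the example are hypothetical placeholders.

```python
def emission_factor(seasonal_n2o_n_kg_ha, n_applied_kg_ha):
    """Uncorrected emission factor: share of applied N emitted as N2O-N (%)."""
    return 100.0 * seasonal_n2o_n_kg_ha / n_applied_kg_ha

def n2o_intensity(seasonal_n2o_n_kg_ha, lint_yield_t_ha):
    """N2O intensity: emission per unit of lint yield (kg N2O-N per t lint)."""
    return seasonal_n2o_n_kg_ha / lint_yield_t_ha

# 0.80 kg N2O-N ha-1 is the lowest seasonal loss reported above; the
# 200 kg N ha-1 rate and 2.0 t ha-1 lint yield are placeholders.
print(emission_factor(0.80, 200.0))   # 0.4 (% of applied N)
print(n2o_intensity(0.80, 2.0))       # 0.4 (kg N2O-N per t lint)
```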
Abstract:
Background and Aims: Irrigation management affects soil water dynamics as well as the soil microbial carbon and nitrogen turnover and potentially the biosphere-atmosphere exchange of greenhouse gases (GHG). We present a study on the effect of three irrigation treatments on the emissions of nitrous oxide (N2O) from irrigated wheat on black vertisols in South-Eastern Queensland, Australia. Methods: Soil N2O fluxes from wheat were monitored over one season with a fully automated system that measured emissions on a sub-daily basis. Measurements were taken from three subplots for each treatment within a randomized split-plot design. Results: The highest N2O emissions occurred after rainfall or irrigation, and the amount of irrigation water applied was found to influence the magnitude of these “emission pulses”. Daily N2O emissions varied from −0.74 to 20.46 g N2O-N ha−1 day−1, resulting in seasonal losses ranging from 0.43 to 0.75 kg N2O-N ha−1 season−1 for the different irrigation treatments. Emission factors (EF = proportion of N fertilizer emitted as N2O) over the wheat cropping season, uncorrected for background emissions, ranged from 0.2 to 0.4% of total N applied for the different treatments. The highest seasonal N2O emissions were observed in the treatment with the highest irrigation intensity; however, the N2O intensity (N2O emission per crop yield) was highest in the treatment with the lowest irrigation intensity. Conclusions: Our data suggest that the timing and amount of irrigation can effectively be used to reduce N2O losses from irrigated agricultural systems; however, in order to develop sustainable mitigation strategies, the N2O intensity of a cropping system is an important concept that needs to be taken into account.
Abstract:
Infrastructure forms a vital component in supporting today's way of life and has a significant impact on the economic, environmental and social outcomes of the region around it. The design, construction and operation of such assets is a multi-billion dollar industry in Australia alone. Another issue that will play a major role in our way of life is climate change and the broader concept of sustainability. With limited resources and a changing natural world, it is necessary for infrastructure to be developed and maintained in a manner that is sustainable. In order to achieve infrastructure sustainability in operations, two things are needed: a sustainability assessment scheme that provides a scientifically sound and realistic approach to measuring an asset's level of sustainability; and systems and tools that support decisions leading to sustainable outcomes by providing feedback in a timely manner. Having these in place will then help drive the consideration of sustainability during the decision-making process for infrastructure operations and maintenance. In this paper we provide two main contributions: a comparison and review of sustainability assessment schemes for infrastructure and their suitability for use in the operations phase; and a review of decision support systems/tools in the area of infrastructure sustainability in operations. For this paper, sustainability covers not just the environment but also financial/economic and societal/community aspects. This is often referred to as the Triple Bottom Line and forms one of the three dimensions of corporate sustainability [Stapledon, 2004].
Abstract:
Secure communications in distributed Wireless Sensor Networks (WSN) operating under adversarial conditions necessitate efficient key management schemes. In the absence of a priori knowledge of the post-deployment network configuration, and due to limited resources at sensor nodes, key management schemes cannot be based on post-deployment computations. Instead, a list of keys, called a key-chain, is distributed to each sensor node before deployment. For secure communication, either two nodes should have a key in common in their key-chains, or they should establish a key through a secure path on which every link is secured with a key. We first provide a comparative survey of well-known key management solutions for WSN. Probabilistic, deterministic and hybrid key management solutions are presented and compared based on their security properties and resource usage. We provide a taxonomy of solutions and identify trade-offs among them to conclude that there is no one-size-fits-all solution. Second, we design and analyze deterministic and hybrid techniques to distribute pair-wise keys to sensor nodes before deployment. We present novel deterministic and hybrid approaches based on combinatorial design theory and graph theory for deciding how many and which keys to assign to each key-chain before the sensor network deployment. The performance and security of the proposed schemes are studied both analytically and computationally. Third, we address the key establishment problem in WSN, which requires that key agreement algorithms without authentication be executed over a secure path. The length of the secure path impacts the power consumption and the initialization delay of a WSN before it becomes operational. We formulate the key establishment problem as a constrained bi-objective optimization problem, break it into two sub-problems, and show that both are NP-Hard and MAX-SNP-Hard. Having established inapproximability results, we focus on addressing the authentication problem that prevents key agreement algorithms from being used directly over a wireless link. We present a fully distributed algorithm in which each pair of nodes can establish an authenticated key by using their neighbors as witnesses.
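As a simplified point of comparison, the sketch below shows the classic probabilistic key-chain idea (a random draw from a global key pool); the deterministic and hybrid schemes proposed in the thesis are built from combinatorial designs instead, so this is only a stand-in to make the key-chain notion concrete.

```python
import random

def assign_key_chains(num_nodes, key_pool_size, chain_size, seed=0):
    """Give every node a key-chain drawn at random from a global key pool.
    (Probabilistic pre-distribution shown for illustration only; the thesis's
    schemes choose the chains via combinatorial designs.)"""
    rng = random.Random(seed)
    pool = range(key_pool_size)                     # key identifiers
    return [set(rng.sample(pool, chain_size)) for _ in range(num_nodes)]

def shared_key(chain_a, chain_b):
    """Return a common key id if two nodes can communicate directly,
    otherwise None (they would then need a secure path)."""
    common = chain_a & chain_b
    return min(common) if common else None

chains = assign_key_chains(num_nodes=100, key_pool_size=1000, chain_size=50)
print(shared_key(chains[0], chains[1]))
```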
Abstract:
Authenticated Encryption (AE) is the cryptographic process of providing simultaneous confidentiality and integrity protection to messages. This approach is more efficient than a two-step process that first provides confidentiality by encrypting the message and then, in a separate pass, provides integrity protection by generating a Message Authentication Code (MAC). AE using symmetric ciphers can be provided either by stream ciphers with built-in authentication mechanisms or by block ciphers using appropriate modes of operation. However, stream ciphers have the potential for higher performance and a smaller footprint in hardware and/or software than block ciphers. This property makes stream ciphers suitable for resource-constrained environments, where storage and computational power are limited. There have been several recent stream cipher proposals that claim to provide AE. These ciphers can be analysed using existing techniques that consider confidentiality or integrity separately; however, there is currently no framework for the analysis of AE stream ciphers that considers these two properties simultaneously. This thesis introduces a novel framework for the analysis of AE using stream cipher algorithms. It analyses the mechanisms for providing confidentiality and for providing integrity in AE algorithms using stream ciphers. There is a greater emphasis on the analysis of the integrity mechanisms, as there is little in the public literature on this in the context of authenticated encryption. The thesis has four main contributions, as follows. The first contribution is the design of a framework that can be used to classify AE stream ciphers based on three characteristics. The first classification applies Bellare and Namprempre's work on the order in which the encryption and authentication processes take place. The second classification is based on the method used for accumulating the input message (either directly or indirectly) into the internal states of the cipher to generate a MAC. The third classification is based on whether the sequence used to provide encryption and authentication is generated using a single key and initial vector, or two keys and two initial vectors. The second contribution is the application of an existing algebraic method to analyse the confidentiality algorithms of two AE stream ciphers, namely SSS and ZUC. The algebraic method is based on considering the nonlinear filter (NLF) of these ciphers as a combiner with memory. This method enables us to construct equations for the NLF that relate the inputs, outputs and memory of the combiner to the output keystream. We show that both of these ciphers are secure against this type of algebraic attack. We conclude that using a key-dependent S-box in the NLF twice, and using two different S-boxes in the NLF of ZUC, prevents this type of algebraic attack. The third contribution is a new general matrix-based model for MAC generation where the input message is injected directly into the internal state. This model describes the accumulation process when the input message is injected directly into the internal state of a nonlinear filter generator. We show that three recently proposed AE stream ciphers, namely SSS, NLSv2 and SOBER-128, can be considered as instances of this model. Our model is more general than previous investigations into direct injection. Possible forgery attacks against this model are investigated.
It is shown that using a nonlinear filter in the accumulation process of the input message prevents forgery attacks based on collisions when either the input message or the initial state of the register is unknown. The last contribution is a new general matrix-based model for MAC generation where the input message is injected indirectly into the internal state. This model uses the input message as a controller to accumulate a keystream sequence into an accumulation register. We show that three current AE stream ciphers, namely ZUC, Grain-128a and Sfinks, can be considered as instances of this model. We establish the conditions under which the model is susceptible to forgery and side-channel attacks.
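The toy sketch below is meant only to convey what "direct injection" means in this context: each message bit is XORed straight into the internal state of a shift-register generator, and a nonlinear filter of the final state yields the tag. It is a schematic illustration under assumed toy parameters, not the thesis's matrix-based model and not SSS, NLSv2 or SOBER-128.

```python
def accumulate_direct(message_bits, state, taps=(0, 2, 3)):
    """Toy direct-injection accumulator over a bit shift register."""
    state = list(state)
    for bit in message_bits:
        feedback = 0
        for t in taps:
            feedback ^= state[t]
        # The message bit is injected directly into the new state cell.
        state = state[1:] + [feedback ^ bit]
    return state

def nonlinear_filter_tag(state):
    """Toy nonlinear filter over the final state producing a 1-bit tag."""
    return (state[0] & state[1]) ^ (state[2] | state[3]) ^ state[-1]

final_state = accumulate_direct([1, 0, 1, 1, 0, 1],
                                state=[0, 1, 1, 0, 1, 0, 0, 1])
print(nonlinear_filter_tag(final_state))
```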
Abstract:
In this paper, we present three counterfeiting attacks on block-wise dependent fragile watermarking schemes. We consider vulnerabilities such as the exploitation of a weak correlation among block-wise dependent watermarks to modify valid watermarked (medical or other digital) images in such a way that they are still verified as authentic even though they are not. Experimental results demonstrate the practicability and consequences of the proposed attacks for some relevant schemes. The proposed attack models can be used as a means to systematically examine the security levels of similar watermarking schemes.
Abstract:
Water reuse through greywater irrigation has been adopted worldwide and has been proposed as a potential sustainable solution to increasing water demands. Despite this widespread adoption, there is limited domestic knowledge of greywater reuse, there is no pressure to produce low-phosphorus products, and current guidelines and legislation, such as those in Australia, may be inadequate due to the lack of long-term data providing a sound scientific basis. Research has clearly identified phosphorus as a potential environmental risk to waterways from many forms of irrigation. To assess the sustainability of greywater irrigation, this study compared four residential lots that had been irrigated with greywater for four years with adjacent non-irrigated lots that acted as controls. Each lot was monitored for the volume of greywater applied, and selected physico-chemical water quality parameters and soil chemistry profiles were analysed. The non-irrigated soil profiles showed low levels of phosphorus and were used as controls. The Mehlich-3 phosphorus saturation ratio (M3PSR) and the Phosphate Environmental Risk Index (PERI) were used to determine the environmental risk of phosphorus leaching from the irrigated soils. Soil phosphorus concentrations were compared with theoretical greywater irrigation loadings, and the measured soil concentrations and the estimated loadings were of similar magnitude. Sustainable greywater reuse is possible; however, incorrect use and/or a lack of understanding of how household products affect greywater can result in phosphorus posing a significant risk to the environment.
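A back-of-the-envelope estimate of the kind used for theoretical greywater phosphorus loadings could look like the sketch below; all input values are hypothetical placeholders, not measurements from this study.

```python
def phosphorus_loading_kg_per_ha(volume_litres, p_mg_per_litre, area_m2):
    """Cumulative phosphorus applied to an irrigated area, in kg P per ha,
    so that it can be compared against soil test results."""
    p_kg = volume_litres * p_mg_per_litre / 1e6     # mg of P -> kg of P
    area_ha = area_m2 / 10_000.0                    # m2 -> ha
    return p_kg / area_ha

# Placeholder example: 400 L of greywater per week at 5 mg P/L applied to a
# 100 m2 garden over four years.
print(phosphorus_loading_kg_per_ha(400 * 52 * 4, 5.0, 100.0))   # ~41.6 kg P/ha
```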
Abstract:
Basing signature schemes on strong lattice problems has been a long-standing open issue. Today, two families of lattice-based signature schemes are known: those based on the hash-and-sign construction of Gentry et al., and Lyubashevsky's schemes, which are based on the Fiat-Shamir framework. In this paper we show for the first time how to adapt the schemes of Lyubashevsky to the ring signature setting. In particular, we transform the scheme of ASIACRYPT 2009 into a ring signature scheme that provides strong security properties in the random oracle model. Anonymity is ensured in the sense that signatures of different users are within negligible statistical distance even under full key exposure. In fact, the scheme satisfies a notion stronger than the classical full key exposure setting: even if the keypair of the signing user is adversarially chosen, the statistical distance between signatures of different users remains negligible. Regarding unforgeability, the best lattice-based ring signature schemes provide unforgeability either against arbitrary chosen-subring attacks or against insider corruption in log-sized rings. In this paper we present two variants of our scheme. In the basic one, unforgeability is ensured in both of these settings. By increasing signature and key sizes by a factor k (typically 80 to 100), we obtain a variant in which unforgeability is ensured against insider corruption attacks for arbitrary rings. The technique is quite general and can be adapted to other existing schemes.
Abstract:
Multiple-time signatures are digital signature schemes in which the signer can sign a predetermined number of messages. They are interesting cryptographic primitives because they can be used to solve many important cryptographic problems while offering a substantial efficiency advantage over ordinary digital signature schemes such as RSA. Multiple-time signature schemes have found numerous applications, including ordinary, on-line/off-line and forward-secure signatures, and multicast/stream authentication. We propose a multiple-time signature scheme with very efficient signing and verifying. Our construction is based on a combination of one-way functions and cover-free families, and it is secure against the adaptive chosen-message attack.
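For background on the one-way-function ingredient only, the sketch below implements a standard Lamport-style one-time signature; the paper's multiple-time construction additionally relies on cover-free families, which are not reproduced here.

```python
import hashlib, os

def H(x: bytes) -> bytes:
    """One-way function instantiated with SHA-256 for this sketch."""
    return hashlib.sha256(x).digest()

def keygen(bits=256):
    """One secret preimage pair per digest bit; the public key is the hashes."""
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(bits)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(sk, message: bytes):
    digest = H(message)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(len(sk))]
    return [pair[b] for pair, b in zip(sk, bits)]

def verify(pk, message: bytes, signature):
    digest = H(message)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(len(pk))]
    return all(H(s) == pair[b] for s, pair, b in zip(signature, pk, bits))

sk, pk = keygen()
sig = sign(sk, b"hello")
print(verify(pk, b"hello", sig))   # True; each key pair must be used only once
```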
Abstract:
The purpose of this paper is to describe a new decomposition construction for perfect secret sharing schemes with graph access structures. The previous decomposition construction proposed by Stinson is a recursive method that uses small secret sharing schemes as building blocks in the construction of larger schemes. When the Stinson method is applied to graph access structures, the number of such “small” schemes is typically exponential in the number of participants, resulting in an exponential algorithm. Our method has the same flavor as the Stinson decomposition construction; however, the linear programming problem involved in the construction is formulated in such a way that the number of “small” schemes is polynomial in the number of participants, which in turn gives rise to a polynomial-time construction. We also show that if we apply the Stinson construction to the “small” schemes arising from our new construction, both constructions achieve the same information rate.
Abstract:
We consider the problem of increasing the threshold parameter of a secret-sharing scheme after the setup (share distribution) phase, without further communication between the dealer and the shareholders. Previous solutions to this problem require one to start off with a nonstandard scheme designed specifically for this purpose, or to have communication between shareholders. In contrast, we show how to increase the threshold parameter of the standard Shamir secret-sharing scheme without communication between the shareholders. Our technique can thus be applied to existing Shamir schemes even if they were set up without consideration to future threshold increases. Our method is a new positive cryptographic application for lattice reduction algorithms, inspired by recent work on lattice-based list decoding of Reed-Solomon codes with noise bounded in the Lee norm. We use fundamental results from the theory of lattices (geometry of numbers) to prove quantitative statements about the information-theoretic security of our construction. These lattice-based security proof techniques may be of independent interest.
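For reference, the standard Shamir scheme whose threshold the construction increases can be sketched as follows; the prime modulus and share x-coordinates below are arbitrary choices for a toy implementation, and the lattice-based threshold-increase technique itself is not shown.

```python
import random

PRIME = 2**61 - 1   # Mersenne prime; field large enough for this toy example

def make_shares(secret, threshold, num_shares, rng=random.SystemRandom()):
    """Shamir sharing: evaluate a random degree-(threshold-1) polynomial
    with constant term `secret` at x = 1..num_shares."""
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, k, PRIME) for k, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, num_shares + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(PRIME)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = make_shares(secret=123456789, threshold=3, num_shares=5)
print(reconstruct(shares[:3]))   # 123456789 from any 3 of the 5 shares
```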
Abstract:
Enterprise resource planning (ERP) systems are rapidly being combined with “big data” analytics processes and publicly available “open data sets”, which are usually outside the arena of the enterprise, to expand activity through better service to current clients as well as by identifying new opportunities. Moreover, these activities are now largely based around relevant software systems hosted in a “cloud computing” environment. The more than 50-year-old phrase reflecting mistrust in computer systems, namely “garbage in, garbage out” or “GIGO”, describes the problems of unqualified and unquestioning dependency on information systems. A more relevant GIGO interpretation arose somewhat later, namely “garbage in, gospel out”, signifying that with large-scale information systems based around ERP, open data sets and “big data” analytics, particularly in a cloud environment, the ability to verify the authenticity and integrity of the data sets used may be almost impossible. In turn, this may easily result in decision making based upon questionable and unverifiable results. Illicit “impersonation” of, and modifications to, legitimate data sets may become a reality, while at the same time the ability to audit any derived results of analysis may be an important requirement, particularly in the public sector. The pressing need for enhancement of identity, reliability, authenticity and audit services, including naming and addressing services, in this emerging environment is discussed in this paper. Some appropriate technologies currently being offered are also examined. However, severe limitations in addressing the identified problems are found, and the paper proposes further necessary research for the area. (Note: This paper is based on an earlier unpublished paper/presentation “Identity, Addressing, Authenticity and Audit Requirements for Trust in ERP, Analytics and Big/Open Data in a ‘Cloud’ Computing Environment: A Review and Proposal” presented to the Department of Accounting and IT, College of Management, National Chung Chen University, 20 November 2013.)
Abstract:
A multi-secret sharing scheme allows several secrets to be shared amongst a group of participants. In 2005, Shao and Cao developed a verifiable multi-secret sharing scheme in which each participant's share can be used several times, which reduces the number of interactions between the dealer and the group members. In addition, some secrets may require a higher security level than others, creating the need for different threshold values. Recently, Chan and Chang designed such a scheme, but their construction only allows a single secret to be shared per threshold value. In this article we combine the two previous approaches to design a multiple-time verifiable multi-secret sharing scheme in which several secrets can be shared for each threshold value. Since running time is an important factor for practical applications, we also provide a complexity comparison of our combined approach with the previous schemes.
Abstract:
The construction and operation of infrastructure assets can have significant impacts on society and the surrounding region. Using a sustainability assessment framework can be an effective means of building sustainability aspects into the design, construction and operation of infrastructure assets. The conventional evaluation processes and procedures for infrastructure projects do not necessarily measure the qualitative/quantitative effectiveness of all aspects of sustainability: environment, social wellbeing and economy. As a result, a number of infrastructure sustainability rating schemes have been developed with a view to assessing the level of sustainability attained in infrastructure projects. These include Infrastructure Sustainability (Australia), CEEQUAL (UK) and Envision (USA). In addition, road-sector-specific sustainability rating schemes such as Greenroads (USA) and Invest (Australia) have also been developed. These schemes address several aspects of sustainability with varying emphasis (weightings) on areas such as use of resources; emission, pollution and waste; ecology; people and place; management and governance; and innovation. The attainment of sustainability in an infrastructure project depends largely on addressing whole-of-life environmental issues. This study analysed the rating schemes' coverage of different environmental components for road infrastructure across the five phases of a project: materials, construction, use, maintenance and end-of-life, based on a comprehensive life cycle assessment (LCA) system boundary. The findings indicate that, to deliver sustainable road projects, the schemes need to consider key (high-impact) life cycle environmental components such as traffic congestion during construction, rolling resistance due to surface roughness and structural stiffness of the pavement, albedo, lighting, and end-of-life management (recycling).