445 results for Bodily integrity
Abstract:
Formation of Reduced Emissions from Deforestation and Degradation (REDD+) policy within the international climate regime has raised a number of discussions about ‘justice’. REDD+ aims to provide an incentive for developing countries to preserve or increase the amount of carbon stored in their forested areas. Governance of REDD+ is multi-layered: at the international level, a guiding framework must be determined; at the national level, strong legal frameworks are a prerequisite to ensure both public and private investor confidence; and at the sub-national level, forest-dependent peoples need to agree to participate as stewards of forest carbon project areas. At the international level the overall objective of REDD+ is yet to be determined, with competing mitigation, biological and justice agendas. Existing international law pertaining to the environment (international environmental principles and law, IEL) and human rights (international human rights law, IHRL) should inform the development of international and national REDD+ policy, especially in relation to ensuring the environmental integrity of projects and participation and benefit-sharing rights for forest-dependent communities. National laws applicable to REDD+ must accommodate the needs of all stakeholders and articulate boundaries which define their interactions, paying particular attention to ensuring that vulnerable groups are protected. This paper i) examines justice theories and IEL and IHRL to inform our understanding of what ‘justice’ means in the context of REDD+, and ii) applies international law to create a reference tool for policy-makers dealing with the complex sub-debates within this emerging climate policy. We achieve this by: 1) briefly outlining theories of justice (for example, perspectives offered by anthropocentric and ecocentric approaches, and views from ‘green economics’); 2) commenting on what ‘climate justice’ means in the context of REDD+; 3) outlining a selection of IEL and IHRL principles and laws to inform our understanding of ‘justice’ in this policy realm (for example, common but differentiated responsibilities, the precautionary principle, sovereignty and prevention drawn from the principles of IEL; the UNFCCC and CBD as relevant conventions of international environmental law; and UNDRIP and the Declaration on the Right to Development as applicable international human rights instruments); 4) noting how this informs what ‘justice’ is for different REDD+ stakeholders; 5) considering how current law-making (at both the international and national levels) reflects these principles and rules drawn from international law; and 6) presenting how international law can inform policy-making by providing a reference tool of applicable international law and how it could be applied to different issues linked to REDD+. As such, this paper will help scholars and policy-makers to understand how international law can assist us to both conceptualise and embody ‘justice’ within frameworks for REDD+ at both the international and national levels.
Abstract:
Determining the properties and integrity of subchondral bone in the developmental stages of osteoarthritis, especially in a form that can facilitate real-time characterization for diagnostic and decision-making purposes, is still a matter for research and development. This paper presents relationships between near infrared absorption spectra and properties of subchondral bone obtained from 3 models of osteoarthritic degeneration induced in laboratory rats via: (i) meniscectomy (MSX); (ii) anterior cruciate ligament transection (ACL); and (iii) intra-articular injection of mono-iodoacetate (1 mg) (MIA), in the right knee joint, with 12 rats per model group (N = 36). After 8 weeks, the animals were sacrificed and knee joints were collected. A custom-made diffuse reflectance NIR probe of diameter 5 mm was placed on the tibial surface and spectral data were acquired from each specimen in the wavenumber range 4000–12,500 cm−1. After spectral acquisition, micro-computed tomography (micro-CT) was performed on the samples and subchondral bone parameters, namely bone volume (BV) and bone mineral density (BMD), were extracted from the micro-CT data. Statistical correlation was then conducted between these parameters and regions of the near infrared spectra using multivariate techniques including principal component analysis (PCA), discriminant analysis (DA), and partial least squares (PLS) regression. Statistically significant linear correlations were found between the near infrared absorption spectra and subchondral bone BMD (R2 = 98.84%) and BV (R2 = 97.87%). In conclusion, near infrared spectroscopic probing can be used to detect, qualify and quantify changes in the composition of the subchondral bone, and could potentially assist in distinguishing healthy from OA bone, as demonstrated with our laboratory rat models.
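As a minimal sketch of the kind of multivariate workflow described above, the snippet below fits a PLS regression between an absorbance matrix and a bone parameter such as BMD and reports a cross-validated R2. The data shapes, component count and variable names are illustrative assumptions, not the study's actual data or settings.

```python
# Minimal sketch: correlating NIR absorption spectra with a micro-CT bone
# parameter (e.g. BMD) via PLS regression. Data are synthetic placeholders;
# in practice X comes from the NIR probe and y from the micro-CT analysis.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

n_samples, n_wavenumbers = 36, 500                 # hypothetical dimensions
X = rng.normal(size=(n_samples, n_wavenumbers))    # absorbance spectra
y = X[:, 100] * 0.8 + rng.normal(scale=0.1, size=n_samples)  # synthetic "BMD"

pls = PLSRegression(n_components=5)                # number of latent variables
y_cv = cross_val_predict(pls, X, y, cv=6).ravel()  # 6-fold cross-validation
print(f"cross-validated R^2 = {r2_score(y, y_cv):.3f}")
```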
Abstract:
Low-cost level crossings are often criticized as being unsafe. Does a SIL (safety integrity level) rating make a railway crossing any safer? This paper discusses how a supporting argument might be made for low-cost level crossing warning devices with lower levels of safety integrity, addressing issues such as risk tolerability and the derivation of tolerable hazard rates for system-level hazards. As part of the design of such systems according to fail-safe principles, the paper considers the assumptions around the pre-defined safe states of existing warning devices and how human factors issues around such states can give rise to additional hazards.
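For readers unfamiliar with the derivation mentioned above, the toy calculation below shows how a tolerable hazard rate for a system-level hazard might be apportioned from a tolerable risk target. All figures are hypothetical placeholders, not values from the paper.

```python
# Illustrative apportionment of a tolerable hazard rate (THR) for a level
# crossing warning device. All numbers are hypothetical placeholders.
tolerable_fatality_rate = 1e-7   # tolerable fatalities per hour attributed to the crossing
fatalities_per_accident = 1.0    # assumed average consequence of an accident
p_accident_given_hazard = 0.1    # chance a wrong-side failure leads to an accident

# THR for the system-level hazard "warning not given when a train approaches".
# The corresponding SIL band would then be read from the relevant standard's table.
thr = tolerable_fatality_rate / (fatalities_per_accident * p_accident_given_hazard)
print(f"tolerable hazard rate = {thr:.1e} per hour")
```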
Abstract:
Railway bridges deteriorate with age. Factors such as environmental effects on different materials of a bridge, variation of loads, fatigue, etc. will reduce the remaining life of bridges. Bridges are currently rated individually for maintenance and repair actions according to the structural conditions of their elements. Dealing with thousands of bridges and several factors that cause deterioration makes the rating process extremely complicated. Current simplified but practical rating methods are not based on an accurate structural condition assessment system. On the other hand, the sophisticated but more accurate methods are only used for a single bridge or particular types of bridges. It is therefore necessary to develop a practical and accurate system which will be capable of rating a network of railway bridges. This paper introduces a new method for rating a network of bridges based on their current and future structural conditions. The method identifies typical bridges representing a group of railway bridges. The most crucial agents will be determined and categorized into criticality and vulnerability factors. Classification based on structural configuration, loading, and critical deterioration factors will be conducted. Finally, a rating method for a network of railway bridges that takes into account the effects of damaged structural components, due to variations in loading and environmental conditions, on the integrity of the whole structure will be proposed. The outcome of this research is expected to significantly improve the rating methods for railway bridges by considering the unique characteristics of different factors and incorporating the correlation between them.
Abstract:
Railway bridges deteriorate with age. Factors such as environmental effects on different materials of a bridge, variation of loads, fatigue, etc. will reduce the remaining life of bridges. Dealing with thousands of bridges and several factors that cause deterioration makes the rating process extremely complicated. Current simplified but practical methods of rating a network of bridges are not based on an accurate structural condition assessment system. On the other hand, the sophisticated but more accurate methods are only used for a single bridge or particular types of bridges. It is therefore necessary to develop a practical and accurate system which will be capable of rating a network of railway bridges. This article introduces a new method to rate a network of bridges based on their current and future structural conditions. The method identifies typical bridges representing a group of railway bridges. The most crucial agents will be determined and categorized into criticality and vulnerability factors. Classification based on structural configuration, loading, and critical deterioration factors will be conducted. Finally, a rating method for a network of railway bridges that takes into account the effects of damaged structural components, due to variations in loading and environmental conditions, on the integrity of the whole structure will be proposed. The outcome of this article is expected to significantly improve the rating methods for railway bridges by considering the unique characteristics of different factors and incorporating the correlation among them.
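As a rough illustration of how criticality and vulnerability factors might be combined into a network-level rating, the sketch below scores representative bridge groups with a simple weighted aggregate. The factor names, weights and aggregation rule are assumptions for illustration only, not the authors' method.

```python
# Toy illustration (assumptions, not the authors' algorithm): combining
# criticality and vulnerability factors into a single priority rating for
# groups of representative bridges.
from dataclasses import dataclass

@dataclass
class BridgeGroup:
    name: str
    criticality: dict    # e.g. route importance, traffic volume (scores in 0-1)
    vulnerability: dict  # e.g. corrosion, fatigue, scour (scores in 0-1)

def rating(group: BridgeGroup, w_crit: float = 0.5, w_vuln: float = 0.5) -> float:
    """Weighted aggregate; a higher value means higher maintenance priority."""
    crit = sum(group.criticality.values()) / len(group.criticality)
    vuln = sum(group.vulnerability.values()) / len(group.vulnerability)
    return w_crit * crit + w_vuln * vuln

groups = [
    BridgeGroup("steel girder, coastal", {"route": 0.9, "traffic": 0.7},
                {"corrosion": 0.8, "fatigue": 0.6}),
    BridgeGroup("concrete slab, inland", {"route": 0.5, "traffic": 0.4},
                {"corrosion": 0.2, "fatigue": 0.3}),
]
for g in sorted(groups, key=rating, reverse=True):
    print(f"{g.name}: rating = {rating(g):.2f}")
```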
Abstract:
In the absence of a benchmarking mechanism specifically designed for local requirements and characteristics, a carbon dioxide footprint assessment and labelling scheme for construction materials is urgently needed to promote carbon dioxide reduction in the construction industry. This paper reports on a recent interview survey of 18 senior industry practitioners in Hong Kong to elicit their knowledge and opinions concerning the potential of such a carbon dioxide labelling scheme. The results of this research indicate the following. A well-designed carbon dioxide label could stimulate demand for low carbon dioxide construction materials. The assessment of carbon dioxide emissions should be extended to different stages of material lifecycles. The benchmarks for low carbon dioxide construction materials should be based on international standards but without sacrificing local integrity. Administration and monitoring of the carbon dioxide labelling scheme could be entrusted to an impartial and independent certification body. The implementation of any carbon dioxide labelling scheme should be on a voluntary basis. Cost, functionality, quality and durability are unlikely to be replaced by environmental considerations in the absence of any compelling incentives or penalties. There are difficulties in developing and operating a suitable scheme, particularly in view of the large data demands involved, reluctance to use low carbon dioxide materials, and limited environmental awareness.
Abstract:
The conventional mechanical properties of articular cartilage, such as compressive stiffness, have been demonstrated to be limited in their capacity to distinguish intact (visually normal) from degraded cartilage samples. In this paper, we explore the correlation between a new mechanical parameter, namely the reswelling of articular cartilage following unloading from a given compressive load, and the near infrared (NIR) spectrum. The capacity to distinguish mechanically intact from proteoglycan-depleted tissue relative to the "reswelling" characteristic was first established, and the result was subsequently correlated with the NIR spectral data of the respective tissue samples. To achieve this, normal intact and enzymatically degraded samples were subjected to both NIR probing and mechanical compression based on a load-unload-reswelling protocol. The parameter δ(r), characteristic of the osmotic "reswelling" of the matrix after unloading to a constant small load on the order of the osmotic pressure of cartilage, was obtained for the different sample types. Multivariate statistical analysis was employed to determine the degree of correlation between δ(r) and the NIR absorption spectrum of relevant specimens using partial least squares (PLS) regression. The results show a strong relationship (R2 = 95.89%, p < 0.0001) between the spectral data and δ(r). This correlation of δ(r) with NIR spectral data suggests the potential for determining the reswelling characteristics non-destructively. It was also observed that δ(r) values bear a significant relationship with cartilage matrix integrity, indicated by its proteoglycan content, and can therefore differentiate between normal and artificially degraded proteoglycan-depleted cartilage samples. It is therefore argued that the reswelling of cartilage, which is both biochemical (osmotic) and mechanical (hydrostatic pressure) in origin, could be a strong candidate for characterizing the tissue, especially in regions surrounding focal cartilage defects in joints.
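The snippet below sketches one plausible way to extract a reswelling parameter from a load-unload-reswelling displacement record. The precise definition of δ(r) used in the paper may differ; here it is assumed, purely for illustration, to be the displacement recovered after unloading normalised by the peak compressive displacement.

```python
# Hedged sketch: extracting a reswelling parameter from a load-unload-reswelling
# test record. The definition below is an illustrative assumption, not
# necessarily the paper's delta(r).
import numpy as np

def reswelling_parameter(displacement_mm: np.ndarray, unload_index: int) -> float:
    """Displacement recovered from the moment of unloading to the end of the
    hold period, as a fraction of peak compression (illustrative definition)."""
    peak = displacement_mm[:unload_index].max()
    recovered = displacement_mm[unload_index] - displacement_mm[-1]
    return recovered / peak

# Synthetic record: ramp to 0.5 mm compression, then partial recovery during
# the small-load hold period.
t = np.linspace(0, 600, 601)
d = np.where(t < 300, 0.5 * t / 300, 0.5 - 0.2 * (1 - np.exp(-(t - 300) / 60)))
print(f"delta_r ~ {reswelling_parameter(d, unload_index=300):.2f}")
```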
Abstract:
Fire safety design is important to eliminate the loss of property and lives during fire events. Gypsum plasterboard is widely used as a fire safety material in the building industry all over the world. It contains gypsum (CaSO4·2H2O) and calcium carbonate (CaCO3) and, most importantly, free and chemically bound water in its crystal structure. The dehydration of the gypsum and the decomposition of calcium carbonate absorb heat, which gives gypsum plasterboard its fire-resistant qualities. Currently, plasterboard manufacturers use additives such as vermiculite to overcome shrinkage of the gypsum core, and glass fibre to bridge shrinkage cracks and enhance the integrity of the board during calcination and after the loss of paper facings in fires. Past research has also attempted to reduce the thermal conductivity of plasterboards using fillers. However, no research has been undertaken to enhance the specific heat of plasterboard, and to alter its points of dehydration, using chemical additives and fillers. Hence, detailed experimental studies of powdered samples of plasterboard mixed with chemical additives and fillers in varying proportions were conducted. These tests showed the enhancement of the specific heat of plasterboard. Numerical models were also developed to investigate the thermal performance of enhanced plasterboards under standard fire conditions. The results showed that the use of these enhanced plasterboards in steel wall systems can significantly improve their fire performance. This paper presents the details of this research and the results that can be used to enhance the fire safety of steel wall systems commonly used in buildings.
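To make the modelling idea concrete, the sketch below steps a one-dimensional finite-difference heat conduction model through a plasterboard layer whose apparent specific heat peaks around the gypsum dehydration temperatures. The property values, peak shapes and boundary conditions are illustrative assumptions rather than the calibrated models developed in this research.

```python
# Illustrative 1D finite-difference model of heat transfer through a plasterboard
# layer with an apparent specific heat that peaks near the dehydration
# temperatures. All property values are placeholders, not measured data.
import numpy as np

L, n = 0.016, 40                      # board thickness [m], grid points
dx = L / (n - 1)
rho, k = 700.0, 0.25                  # assumed density [kg/m^3] and conductivity [W/mK]

def cp(T):
    """Apparent specific heat [J/kgK]: base value plus dehydration peaks (illustrative)."""
    base = 950.0
    peak1 = 20000.0 * np.exp(-((T - 140.0) / 20.0) ** 2)   # first dehydration
    peak2 = 8000.0 * np.exp(-((T - 180.0) / 15.0) ** 2)    # second dehydration
    return base + peak1 + peak2

T = np.full(n, 20.0)                  # initial temperature [deg C]
dt = 0.05                             # time step [s], within explicit stability limit
for step in range(int(3600 / dt)):    # one hour of exposure
    t = (step + 1) * dt
    T[0] = 20 + 345 * np.log10(8 * t / 60 + 1)   # ISO 834 standard fire on the hot face
    alpha = k / (rho * cp(T))                    # local thermal diffusivity
    T[1:-1] += dt * alpha[1:-1] * (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    T[-1] = T[-2]                                # insulated cold face (simplification)
print(f"cold-face temperature after 1 h ~ {T[-1]:.0f} deg C")
```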
Abstract:
Genetically distinct checkpoints, activated as a consequence of either DNA replication arrest or ionizing radiation-induced DNA damage, integrate DNA repair responses into the cell cycle programme. The ataxia-telangiectasia mutated (ATM) protein kinase blocks cell cycle progression in response to DNA double strand breaks, whereas the related ATR is important in maintaining the integrity of the DNA replication apparatus. Here, we show that thymidine, which slows the progression of replication forks by depleting cellular pools of dCTP, induces a novel DNA damage response that, uniquely, depends on both ATM and ATR. Thymidine induces ATM-mediated phosphorylation of Chk2 and NBS1 and an ATM-independent phosphorylation of Chk1 and SMC1. AT cells exposed to thymidine showed decreased viability and failed to induce homologous recombination repair (HRR). Taken together, our results implicate ATM in the HRR-mediated rescue of replication forks impaired by thymidine treatment.
Abstract:
Homologous recombination repair (HRR) is required for both the repair of DNA double strand breaks (DSBs) and the maintenance of the integrity of DNA replication forks. To determine the effect of a mutant allele of the RAD51 paralog XRCC2 (342delT) found in an HRR-defective tumour cell line, 342delT was introduced into HRR-proficient cells containing a recombination reporter substrate. In one set of transfectants, expression of 342delT conferred sensitivity to thymidine and mitomycin C and suppressed HRR induced at the recombination reporter by thymidine but not by DSBs. In a second set of transfectants, the expression of 342delT was accompanied by a decreased level of the full-length XRCC2. These cells were defective in the induction of HRR by either thymidine or DSBs. Thus, 342delT suppresses recombination induced by thymidine in a dominant-negative manner, while recombination induced by DSBs appears to depend upon the level of XRCC2 as well as the expression of the mutant XRCC2 allele. These results suggest that HRR pathways responding to stalled replication forks or DSBs are genetically distinguishable. They further suggest a critical role for XRCC2 in HRR at replication forks, possibly in the loading of RAD51 onto gapped DNA.
Abstract:
Ambiguity resolution plays a crucial role in real-time kinematic GNSS positioning, which gives centimetre-precision positioning results if all the ambiguities in each epoch are correctly fixed to integers. However, incorrectly fixed ambiguities can result in positioning offsets of up to several metres without notice. Hence, ambiguity validation is essential to control the quality of ambiguity resolution. Currently, the most popular ambiguity validation method is the ratio test. The criterion of the ratio test is often empirically determined. An empirically determined criterion can be dangerous, because a fixed criterion cannot fit all scenarios and does not directly control the ambiguity resolution risk. In practice, depending on the underlying model strength, the ratio test criterion can be too conservative for some models and too risky for others. A more rational approach is to determine the criterion according to the underlying model and the user requirement. Undetected incorrect integers lead to a hazardous result, which should be strictly controlled; in ambiguity resolution, this missed-detection rate is often known as the failure rate. In this paper, a fixed failure rate ratio test method is presented and applied in the analysis of GPS and Compass positioning scenarios. The fixed failure rate approach is derived from integer aperture estimation theory, which is theoretically rigorous. In this approach, the table of ratio test criteria is computed from extensive data simulations, and real-time users can determine the ratio test criterion by looking up the criteria table. This method has been applied to medium-distance GPS ambiguity resolution, but multi-constellation and high-dimensional scenarios have not been discussed so far. In this paper, a general ambiguity validation model is derived based on hypothesis testing theory, the fixed failure rate approach is introduced, and in particular the relationship between the ratio test threshold and the failure rate is examined. Finally, factors that influence the ratio test threshold under the fixed failure rate approach are discussed based on extensive data simulation. The results show that the fixed failure rate approach is a more reasonable ambiguity validation method when a proper stochastic model is used.
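A minimal sketch of the ratio test itself is given below: the best and second-best integer candidates are compared through their weighted squared distances to the float solution, and the fixed solution is accepted only if the ratio exceeds a threshold. In the fixed failure rate approach that threshold is looked up from a precomputed table for the given model and target failure rate; the threshold value and input numbers below are placeholders.

```python
# Minimal sketch of the ratio test for ambiguity validation. The float ambiguities,
# their covariance and the candidate integer vectors would normally come from an
# integer least-squares search (e.g. LAMBDA); here they are illustrative inputs.
import numpy as np

def ratio_test(a_float, Q_a, a_best, a_second, threshold):
    """Accept the best integer candidate if R2 / R1 >= threshold, where
    Ri = (a_float - a_i)^T Q_a^{-1} (a_float - a_i)."""
    Qinv = np.linalg.inv(Q_a)
    r1 = (a_float - a_best) @ Qinv @ (a_float - a_best)
    r2 = (a_float - a_second) @ Qinv @ (a_float - a_second)
    return r2 / r1 >= threshold, r2 / r1

a_float = np.array([5.1, -2.9, 11.2])   # float ambiguity estimates [cycles]
Q_a = np.diag([0.04, 0.05, 0.03])       # their covariance matrix
a_best = np.array([5, -3, 11])          # best integer candidate
a_second = np.array([5, -3, 12])        # second-best integer candidate

# With a fixed failure rate approach the threshold would be read from a table
# computed for the given model and target failure rate; 2.5 is a placeholder.
accepted, ratio = ratio_test(a_float, Q_a, a_best, a_second, threshold=2.5)
print(f"ratio = {ratio:.2f}, accepted = {accepted}")
```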
Abstract:
The exchange between the body and architecture walks a fine line between violence and pleasure. It is through the body that the subject engages with the architectural act, not via thought or reason, but through action. The materiality of architecture is often the catalyst for some intense association: the wall that defines gender or class, the double-bolted door that incarcerates, the enclosed privacy of the bedroom for the love affair. Architecture is the physical manifestation of Lefebvre’s inscribed space. It enacts the social and political systems through bodily occupation. Architecture, when tested by the occupation of bodies, anchors ideology in both space and time. The architect’s script can be powerful when rehearsed honestly to the building’s intentions, and just as beautiful when rebuked by the act of protest or unfaithful occupation. This research examines this fine line of violence and pleasure in architecture through performance, in the work of Bryony Lavin’s play Stockholm and Revolving Door by Allora & Calzadilla, part of the recent Kaldor Public Art Projects exhibition 13 Rooms in Sydney. The research is underpinned by the work of architect and theorist Bernard Tschumi in his two essays, Violence of Architecture and The Pleasure of Architecture. Studying architecture through the lens of performance shifts the focus of examination from pure thought to the body, because architecture is occupied through the body and not the mind.
Abstract:
Authenticated Encryption (AE) is the cryptographic process of providing simultaneous confidentiality and integrity protection to messages. This approach is more efficient than applying a two-step process of providing confidentiality for a message by encrypting the message, and in a separate pass providing integrity protection by generating a Message Authentication Code (MAC). AE using symmetric ciphers can be provided by either stream ciphers with built-in authentication mechanisms or block ciphers using appropriate modes of operation. However, stream ciphers have the potential for higher performance and a smaller footprint in hardware and/or software than block ciphers. This property makes stream ciphers suitable for resource-constrained environments, where storage and computational power are limited. There have been several recent stream cipher proposals that claim to provide AE. These ciphers can be analysed using existing techniques that consider confidentiality or integrity separately; however, there is currently no framework for the analysis of AE stream ciphers that analyses these two properties simultaneously. This thesis introduces a novel framework for the analysis of AE using stream cipher algorithms. It analyses the mechanisms for providing confidentiality and for providing integrity in AE algorithms using stream ciphers. There is a greater emphasis on the analysis of the integrity mechanisms, as there is little in the public literature on this in the context of authenticated encryption. The thesis has four main contributions, as follows. The first contribution is the design of a framework that can be used to classify AE stream ciphers based on three characteristics. The first classification applies Bellare and Namprempre's work on the order in which encryption and authentication processes take place. The second classification is based on the method used for accumulating the input message (either directly or indirectly) into the internal states of the cipher to generate a MAC. The third classification is based on whether the sequence that is used to provide encryption and authentication is generated using a single key and initial vector, or two keys and two initial vectors. The second contribution is the application of an existing algebraic method to analyse the confidentiality algorithms of two AE stream ciphers, namely SSS and ZUC. The algebraic method is based on considering the nonlinear filter (NLF) of these ciphers as a combiner with memory. This method enables us to construct equations for the NLF that relate the inputs, outputs and memory of the combiner to the output keystream. We show that both of these ciphers are secure from this type of algebraic attack. We conclude that using a key-dependent SBox in the NLF twice, and using two different SBoxes in the NLF of ZUC, prevents this type of algebraic attack. The third contribution is a new general matrix-based model for MAC generation where the input message is injected directly into the internal state. This model describes the accumulation process when the input message is injected directly into the internal state of a nonlinear filter generator. We show that three recently proposed AE stream ciphers can be considered as instances of this model, namely SSS, NLSv2 and SOBER-128. Our model is more general than previous investigations into direct injection. Possible forgery attacks against this model are investigated.
It is shown that using a nonlinear filter in the accumulation of the input message prevents collision-based forgery attacks when either the input message or the initial state of the register is unknown. The last contribution is a new general matrix-based model for MAC generation where the input message is injected indirectly into the internal state. This model uses the input message as a controller to accumulate a keystream sequence into an accumulation register. We show that three current AE stream ciphers can be considered as instances of this model, namely ZUC, Grain-128a and Sfinks. We establish the conditions under which the model is susceptible to forgery and side-channel attacks.
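As a didactic illustration of the 'direct injection' accumulation style analysed in the thesis, the toy code below XORs message bytes into a small register state as it clocks and reads a tag out through a nonlinear filter. It is not SSS, NLSv2 or SOBER-128, and the register, filter and tag length are arbitrary choices.

```python
# Toy model of direct-injection MAC accumulation: message bytes enter the
# register state directly, and the tag is extracted through a nonlinear filter.
# Didactic only; insecure and unrelated to any deployed cipher.
def clock(state, feedback_taps=(0, 2, 3, 5)):
    """One step of a 16-bit Fibonacci LFSR (toy parameters)."""
    fb = 0
    for t in feedback_taps:
        fb ^= (state >> t) & 1
    return ((state >> 1) | (fb << 15)) & 0xFFFF

def nonlinear_filter(state):
    """Simple nonlinear output function of the state (toy)."""
    return ((state & 0xFF) * ((state >> 8) | 1)) & 0xFF

def mac(key_state, message_bytes, tag_len=4):
    state = key_state
    for b in message_bytes:            # direct injection: message enters the state
        state = clock(state ^ b)
    tag = bytearray()
    for _ in range(tag_len):           # finalisation: extract tag through the filter
        state = clock(state)
        tag.append(nonlinear_filter(state))
    return bytes(tag)

print(mac(0xACE1, b"attack at dawn").hex())
```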
Abstract:
Availability has become a primary goal of information security and is as significant as other goals, in particular confidentiality and integrity. Maintaining the availability of essential services on the public Internet is an increasingly difficult task in the presence of sophisticated attackers. Attackers may abuse the limited computational resources of a service provider, and thus managing computational costs is a key strategy for achieving the goal of availability. In this thesis we focus on cryptographic approaches for managing computational costs, in particular computational effort. We focus on two cryptographic techniques: computational puzzles in cryptographic protocols and secure outsourcing of cryptographic computations. This thesis contributes to the area of cryptographic protocols in the following ways. First, we propose the most efficient puzzle scheme based on modular exponentiations which, unlike previous schemes of the same type, involves only a few modular multiplications for solution verification; our scheme is provably secure. We then introduce a new efficient gradual authentication protocol by integrating a puzzle into a specific signature scheme. Our software implementation results for the new authentication protocol show that our approach is more efficient and effective than the traditional RSA signature-based one and improves the DoS resilience of the Secure Sockets Layer (SSL) protocol, the most widely used security protocol on the Internet. Our next contributions are related to capturing a specific property that enables secure outsourcing of cryptographic tasks in partial decryption. We formally define the property of (non-trivial) public verifiability for general encryption schemes, key encapsulation mechanisms (KEMs), and hybrid encryption schemes, encompassing public-key, identity-based, and tag-based encryption flavours. We show that some generic transformations and concrete constructions enjoy this property, and we then present a new public-key encryption (PKE) scheme having this property together with a proof of security under standard assumptions. Finally, we combine puzzles with PKE schemes to enable delayed decryption in applications such as e-auctions and e-voting. For this we first introduce the notion of effort-release PKE (ER-PKE), encompassing the well-known timed-release encryption and encapsulated key escrow techniques. We then present a security model for ER-PKE and a generic construction of ER-PKE complying with our security notion.
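To illustrate the general idea of a modular-exponentiation puzzle with an issuer-side shortcut, the sketch below uses the familiar Rivest–Shamir–Wagner time-lock construction: the solver must perform t sequential squarings, while the issuer, knowing φ(N), computes and checks the answer cheaply. This is a generic illustration with toy parameters, not the provably secure scheme proposed in the thesis.

```python
# Illustrative client-puzzle sketch in the time-lock style: solving requires t
# sequential modular squarings, while the issuer verifies via a shortcut through
# phi(N). Generic illustration only; parameters are toy-sized and insecure.
import math
import random

p, q = 1000003, 1000033          # toy primes; real deployments use large RSA moduli
N, phi = p * q, (p - 1) * (q - 1)
t = 20000                        # difficulty: number of sequential squarings

def issue_puzzle():
    x = random.randrange(2, N)
    while math.gcd(x, N) != 1:   # ensure Euler's theorem applies
        x = random.randrange(2, N)
    expected = pow(x, pow(2, t, phi), N)   # issuer's shortcut via phi(N)
    return x, expected

def solve(x):
    y = x % N
    for _ in range(t):           # the solver has no shortcut: t sequential squarings
        y = (y * y) % N
    return y

x, expected = issue_puzzle()
assert solve(x) == expected
print("puzzle solved and verified")
```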
Abstract:
Robust hashing is an emerging field that can be used to hash certain data types in applications unsuitable for traditional cryptographic hashing methods. Traditional hashing functions have been used extensively for data/message integrity, data/message authentication, efficient file identification and password verification. These applications are possible because the hashing process is compressive, allowing for efficient comparisons in the hash domain, and non-invertible, meaning hashes can be used without revealing the original data. These techniques were developed for deterministic (non-changing) inputs such as files and passwords. For such data types a 1-bit or one-character change can be significant, and as a result the hashing process is sensitive to any change in the input. Unfortunately, there are certain applications where input data are not perfectly deterministic and minor changes cannot be avoided. Digital images and biometric features are two types of data where such changes exist but do not alter the meaning or appearance of the input. For such data types cryptographic hash functions cannot be usefully applied. In light of this, robust hashing has been developed as an alternative to cryptographic hashing and is designed to be robust to minor changes in the input. Although similar in name, robust hashing is fundamentally different from cryptographic hashing. Current robust hashing techniques are not based on cryptographic methods, but instead on pattern recognition techniques. Modern robust hashing algorithms consist of feature extraction followed by a randomization stage that introduces non-invertibility and compression, followed by quantization and binary encoding to produce a binary hash output. In order to preserve the robustness of the extracted features, most randomization methods are linear, and this is detrimental to the security aspects required of hash functions. Furthermore, the quantization and encoding stages used to binarize real-valued features require the learning of appropriate quantization thresholds. How these thresholds are learnt has an important effect on hashing accuracy, and the mere presence of such thresholds is a source of information leakage that can reduce hashing security. This dissertation outlines a systematic investigation of the quantization and encoding stages of robust hash functions. While existing literature has focused on the importance of the quantization scheme, this research is the first to emphasise the importance of quantizer training for both hashing accuracy and hashing security. The quantizer training process is presented in a statistical framework which allows a theoretical analysis of the effects of quantizer training on hashing performance. This is experimentally verified using a number of baseline robust image hashing algorithms over a large database of real-world images. This dissertation also proposes a new randomization method for robust image hashing based on Higher Order Spectra (HOS) and Radon projections. The method is non-linear, and this is an essential requirement for non-invertibility. The method is also designed to produce features more suited to quantization and encoding. The system can operate without the need for quantizer training, is more easily encoded, and displays improved hashing performance when compared to existing robust image hashing algorithms. The dissertation also shows how the HOS method can be adapted to work with biometric features obtained from 2D and 3D face images.
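The sketch below walks through the generic pipeline described above (feature extraction, key-dependent randomization, quantization and binary encoding, comparison by Hamming distance). It deliberately uses a simple linear projection as the randomization stage, the weakness the dissertation's non-linear HOS/Radon method is designed to address, and all parameters are illustrative.

```python
# Generic robust image hashing sketch: feature extraction, key-dependent
# randomization, quantization about the median, binary encoding, and Hamming
# distance comparison. Not the HOS/Radon method proposed in the dissertation.
import numpy as np

def robust_hash(image: np.ndarray, key: int, n_bits: int = 64) -> np.ndarray:
    rng = np.random.default_rng(key)               # key-dependent randomization
    block = np.asarray(image, dtype=float)[:32, :32]   # crude size normalization
    features = np.fft.fft2(block).real.ravel()     # stand-in feature extraction
    proj = rng.normal(size=(n_bits, features.size)) @ features   # linear projection
    return (proj > np.median(proj)).astype(np.uint8)   # quantize and binarize

def hamming(h1: np.ndarray, h2: np.ndarray) -> int:
    return int(np.count_nonzero(h1 != h2))

img = np.random.default_rng(1).integers(0, 256, size=(32, 32))
noisy = np.clip(img + np.random.default_rng(2).integers(-3, 4, size=(32, 32)), 0, 255)
h1, h2 = robust_hash(img, key=42), robust_hash(noisy, key=42)
print(f"Hamming distance between original and noisy image: {hamming(h1, h2)}/64")
```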