939 results for Integrity
Abstract:
The conventional mechanical properties of articular cartilage, such as compressive stiffness, have been demonstrated to be limited in their capacity to distinguish intact (visually normal) from degraded cartilage samples. In this paper, we explore the correlation between a new mechanical parameter, namely the reswelling of articular cartilage following unloading from a given compressive load, and the near infrared (NIR) spectrum. The capacity of the "reswelling" characteristic to distinguish mechanically intact from proteoglycan-depleted tissue was first established, and the result was subsequently correlated with the NIR spectral data of the respective tissue samples. To achieve this, normal intact and enzymatically degraded samples were subjected to both NIR probing and mechanical compression based on a load-unload-reswelling protocol. The parameter δ(r), characteristic of the osmotic "reswelling" of the matrix after unloading to a constant small load on the order of the osmotic pressure of cartilage, was obtained for the different sample types. Multivariate statistics were employed to determine the degree of correlation between δ(r) and the NIR absorption spectrum of the relevant specimens using Partial Least Squares (PLS) regression. The results show a strong relationship (R² = 95.89%, p < 0.0001) between the spectral data and δ(r). This correlation of δ(r) with NIR spectral data suggests the potential for determining the reswelling characteristics non-destructively. It was also observed that δ(r) values bear a significant relationship with cartilage matrix integrity, as indicated by proteoglycan content, and can therefore differentiate between normal and artificially degraded, proteoglycan-depleted cartilage samples.
It is therefore argued that the reswelling of cartilage, which is both biochemical (osmotic) and mechanical (hydrostatic pressure) in origin, could be a strong candidate for characterizing the tissue, especially in regions surrounding focal cartilage defects in joints.
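As an illustration of the PLS step described in this abstract, a minimal single-response PLS (NIPALS) regression can be sketched in Python; the spectra X and target values y below are synthetic stand-ins, not the study's data or implementation.

```python
import numpy as np

def pls1(X, y, n_components):
    # Minimal PLS1 (NIPALS): X (samples x wavelengths), y the target parameter.
    Xm, ym = X.mean(axis=0), y.mean()
    Xr, yr = X - Xm, y - ym
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xr.T @ yr                    # weight vector: covariance direction
        w /= np.linalg.norm(w)
        t = Xr @ w                       # scores
        tt = t @ t
        p = Xr.T @ t / tt                # X loadings
        qk = (yr @ t) / tt               # y loading
        Xr = Xr - np.outer(t, p)         # deflate X
        yr = yr - qk * t                 # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    b = W @ np.linalg.solve(P.T @ W, q)  # regression coefficients
    return b, Xm, ym

def pls1_predict(X, b, Xm, ym):
    return (X - Xm) @ b + ym
```

With measured spectra and δ(r) values, the reported R² would then be computed by comparing predictions against the measured parameter.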
Abstract:
Fire safety design is important to eliminate the loss of property and lives during fire events. Gypsum plasterboard is widely used as a fire safety material in the building industry all over the world. It contains gypsum (CaSO4·2H2O), calcium carbonate (CaCO3) and, most importantly, free and chemically bound water in its crystal structure. The dehydration of the gypsum and the decomposition of calcium carbonate absorb heat, which gives gypsum plasterboard its fire-resistant qualities. Currently, plasterboard manufacturers use additives such as vermiculite to overcome shrinkage of the gypsum core, and glass fibre to bridge shrinkage cracks and enhance the integrity of the board during calcination and after the loss of paper facings in fires. Past research has also attempted to reduce the thermal conductivity of plasterboards using fillers. However, no research has been undertaken to enhance the specific heat of plasterboard and shift its dehydration points using chemical additives and fillers. Hence, detailed experimental studies of powdered samples of plasterboard mixed with chemical additives and fillers in varying proportions were conducted. These tests showed the enhancement of the specific heat of plasterboard. Numerical models were also developed to investigate the thermal performance of enhanced plasterboards under standard fire conditions. The results showed that the use of these enhanced plasterboards in steel wall systems can significantly improve their fire performance. This paper presents the details of this research and the results that can be used to enhance the fire safety of steel wall systems commonly used in buildings.
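The numerical models mentioned above are not detailed in the abstract. Purely as an illustration of the modelling idea, a toy one-dimensional explicit finite-difference conduction sketch, with a hypothetical specific-heat peak around the dehydration temperature range (all material properties are assumed, not from this paper), might look like:

```python
import numpy as np

def simulate_wall(T_fire, nx=20, L=0.016, dt=0.05, steps=2000):
    # Toy 1-D transient conduction through a 16 mm plasterboard layer.
    rho, k = 800.0, 0.25   # hypothetical density (kg/m3), conductivity (W/mK)

    def cp(T):
        # Hypothetical specific heat with a peak near dehydration (~150 C),
        # representing the heat absorbed by the release of bound water.
        base = 950.0
        peak = 20000.0 * np.exp(-((T - 150.0) / 40.0) ** 2)
        return base + peak

    dx = L / (nx - 1)
    T = np.full(nx, 20.0)          # start at ambient temperature
    for _ in range(steps):
        alpha = k / (rho * cp(T))  # thermal diffusivity, temperature dependent
        T[0] = T_fire              # fire-exposed face held at furnace temperature
        T[1:-1] += alpha[1:-1] * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
        T[-1] = T[-2]              # insulated (adiabatic) far side
    return T
```

In such a model, raising the specific heat around the dehydration points slows the temperature rise on the unexposed side, which is the effect the enhanced boards aim for.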
Abstract:
Genetically distinct checkpoints, activated as a consequence of either DNA replication arrest or ionizing radiation-induced DNA damage, integrate DNA repair responses into the cell cycle programme. The ataxia-telangiectasia mutated (ATM) protein kinase blocks cell cycle progression in response to DNA double strand breaks, whereas the related ATR is important in maintaining the integrity of the DNA replication apparatus. Here, we show that thymidine, which slows the progression of replication forks by depleting cellular pools of dCTP, induces a novel DNA damage response that, uniquely, depends on both ATM and ATR. Thymidine induces ATM-mediated phosphorylation of Chk2 and NBS1 and an ATM-independent phosphorylation of Chk1 and SMC1. AT cells exposed to thymidine showed decreased viability and failed to induce homologous recombination repair (HRR). Taken together, our results implicate ATM in the HRR-mediated rescue of replication forks impaired by thymidine treatment.
Abstract:
Homologous recombination repair (HRR) is required for both the repair of DNA double strand breaks (DSBs) and the maintenance of the integrity of DNA replication forks. To determine the effect of a mutant allele of the RAD51 paralog XRCC2 (342delT) found in an HRR-defective tumour cell line, 342delT was introduced into HRR proficient cells containing a recombination reporter substrate. In one set of transfectants, expression of 342delT conferred sensitivity to thymidine and mitomycin C and suppressed HRR induced at the recombination reporter by thymidine but not by DSBs. In a second set of transfectants, the expression of 342delT was accompanied by a decreased level of the full-length XRCC2. These cells were defective in the induction of HRR by either thymidine or DSBs. Thus 342delT suppresses recombination induced by thymidine in a dominant negative manner while recombination induced by DSBs appears to depend upon the level of XRCC2 as well as the expression of the mutant XRCC2 allele. These results suggest that HRR pathways responding to stalled replication forks or DSBs are genetically distinguishable. They further suggest a critical role for XRCC2 in HRR at replication forks, possibly in the loading of RAD51 onto gapped DNA.
Abstract:
Ambiguity resolution plays a crucial role in real-time kinematic GNSS positioning, which gives centimetre-precision positioning results if all the ambiguities in each epoch are correctly fixed to integers. However, incorrectly fixed ambiguities can result in large positioning offsets, up to several metres, without notice. Hence, ambiguity validation is essential to control ambiguity resolution quality. Currently, the most popular ambiguity validation method is the ratio test, whose criterion is often determined empirically. An empirically determined criterion can be dangerous, because a fixed criterion cannot fit all scenarios and does not directly control the ambiguity resolution risk. In practice, depending on the underlying model strength, the ratio test criterion can be too conservative for some models and too risky for others. A more rational approach is to determine the criterion according to the underlying model and user requirements. Missed detections of incorrect integers lead to hazardous results and should be strictly controlled; in ambiguity resolution, the missed-detection rate is often known as the failure rate. In this paper, a fixed failure rate ratio test method is presented and applied in the analysis of GPS and Compass positioning scenarios. The fixed failure rate approach is derived from integer aperture estimation theory, which is theoretically rigorous. In this approach, a table of ratio test criteria is computed from extensive data simulations, and real-time users can determine the ratio test criterion by looking up the table. This method has been applied to medium-distance GPS ambiguity resolution, but multi-constellation and high-dimensional scenarios have not been discussed so far. In this paper, a general ambiguity validation model is derived based on hypothesis test theory, the fixed failure rate approach is introduced, and in particular the relationship between the ratio test threshold and the failure rate is examined. 
Finally, the factors that influence the ratio test threshold in the fixed failure rate approach are discussed on the basis of extensive data simulation. The results show that the fixed failure rate approach is a more reasonable ambiguity validation method when a proper stochastic model is used.
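The ratio test decision itself can be sketched in a few lines, assuming the two best integer candidates and the float ambiguity covariance are already available from integer least-squares (illustrative only; the paper's fixed-failure-rate criteria tables are not reproduced here):

```python
import numpy as np

def ratio_test(a_float, Q, best, second, threshold):
    # Weighted squared distance of each integer candidate to the float solution.
    Qi = np.linalg.inv(Q)

    def dist(z):
        d = a_float - z
        return float(d @ Qi @ d)

    q1, q2 = dist(best), dist(second)
    # Accept the fixed solution only if the second-best candidate is
    # sufficiently worse than the best one.
    ratio = q2 / q1
    return ratio >= threshold, ratio
```

In the fixed failure rate approach, `threshold` would be looked up from a precomputed criteria table according to the model strength and the target failure rate, rather than set to a fixed empirical value such as 3.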
Abstract:
Authenticated Encryption (AE) is the cryptographic process of providing simultaneous confidentiality and integrity protection to messages. This approach is more efficient than the two-step process of providing confidentiality for a message by encrypting it and, in a separate pass, providing integrity protection by generating a Message Authentication Code (MAC). AE using symmetric ciphers can be provided by either stream ciphers with built-in authentication mechanisms or block ciphers using appropriate modes of operation. However, stream ciphers have the potential for higher performance and a smaller footprint in hardware and/or software than block ciphers. This property makes stream ciphers suitable for resource-constrained environments, where storage and computational power are limited. There have been several recent stream cipher proposals that claim to provide AE. These ciphers can be analysed using existing techniques that consider confidentiality or integrity separately; however, there is currently no framework for the analysis of AE stream ciphers that considers these two properties simultaneously. This thesis introduces a novel framework for the analysis of AE using stream cipher algorithms, and analyses the mechanisms for providing confidentiality and integrity in AE algorithms based on stream ciphers. There is a greater emphasis on the analysis of the integrity mechanisms, as there is little in the public literature on this in the context of authenticated encryption. The thesis has four main contributions, as follows. The first contribution is the design of a framework that can be used to classify AE stream ciphers based on three characteristics. The first classification applies Bellare and Namprempre's work on the order in which the encryption and authentication processes take place. 
The second classification is based on the method used for accumulating the input message (either directly or indirectly) into the internal state of the cipher to generate a MAC. The third classification is based on whether the sequence used to provide encryption and authentication is generated using a single key and initialisation vector, or two keys and two initialisation vectors. The second contribution is the application of an existing algebraic method to analyse the confidentiality algorithms of two AE stream ciphers, namely SSS and ZUC. The algebraic method is based on considering the nonlinear filter (NLF) of these ciphers as a combiner with memory. This method enables us to construct equations for the NLF that relate the inputs, outputs and memory of the combiner to the output keystream. We show that both of these ciphers are secure from this type of algebraic attack. We conclude that using a key-dependent S-box in the NLF twice, and using two different S-boxes in the NLF of ZUC, prevents this type of algebraic attack. The third contribution is a new general matrix-based model for MAC generation where the input message is injected directly into the internal state. This model describes the accumulation process when the input message is injected directly into the internal state of a nonlinear filter generator. We show that three recently proposed AE stream ciphers can be considered as instances of this model, namely SSS, NLSv2 and SOBER-128. Our model is more general than previous investigations into direct injection. Possible forgery attacks against this model are investigated. It is shown that using a nonlinear filter in the accumulation process of the input message, when either the input message or the initial state of the register is unknown, prevents forgery attacks based on collisions. The last contribution is a new general matrix-based model for MAC generation where the input message is injected indirectly into the internal state. 
This model uses the input message as a controller to accumulate a keystream sequence into an accumulation register. We show that three current AE stream ciphers can be considered as instances of this model; namely ZUC, Grain-128a and Sfinks. We establish the conditions under which the model is susceptible to forgery and side-channel attacks.
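The direct-injection accumulation idea in this abstract can be illustrated with a deliberately simplified toy: a plain binary LFSR with hypothetical taps, no nonlinear filter and no keying, so it is insecure and is not any of the ciphers named above.

```python
def accumulate_direct(message_bits, nbits=8, taps=(0, 2, 5)):
    # Toy direct injection: each message bit is XORed into the register feedback.
    state = [1] * nbits  # fixed nonzero start stands in for a keyed initial state
    for m in message_bits:
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = state[1:] + [fb ^ m]
    return state

def toy_tag(message_bits):
    # The final register contents serve as the (toy) MAC tag.
    return ''.join(map(str, accumulate_direct(list(message_bits))))
```

Because this accumulator is purely linear, message pairs with colliding tags are easy to construct; this is exactly the class of collision-based forgery that the nonlinear filter in the real designs is meant to prevent.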
Abstract:
Availability has become a primary goal of information security and is as significant as the other goals, in particular confidentiality and integrity. Maintaining the availability of essential services on the public Internet is an increasingly difficult task in the presence of sophisticated attackers. Attackers may abuse the limited computational resources of a service provider, and thus managing computational costs is a key strategy for achieving the goal of availability. In this thesis we focus on cryptographic approaches for managing computational costs, in particular computational effort. We focus on two cryptographic techniques: computational puzzles in cryptographic protocols and secure outsourcing of cryptographic computations. This thesis contributes to the area of cryptographic protocols in the following ways. First, we propose the most efficient puzzle scheme based on modular exponentiations which, unlike previous schemes of the same type, involves only a few modular multiplications for solution verification; our scheme is provably secure. We then introduce a new efficient gradual authentication protocol by integrating a puzzle into a specific signature scheme. Our software implementation results for the new authentication protocol show that our approach is more efficient and effective than the traditional RSA signature-based one and improves the DoS resilience of the Secure Socket Layer (SSL) protocol, the most widely used security protocol on the Internet. Our next contributions relate to capturing a specific property that enables the secure outsourcing of cryptographic tasks, in particular partial decryption. We formally define the property of (non-trivial) public verifiability for general encryption schemes, key encapsulation mechanisms (KEMs), and hybrid encryption schemes, encompassing public-key, identity-based, and tag-based encryption flavours. 
We show that some generic transformations and concrete constructions enjoy this property, and then present a new public-key encryption (PKE) scheme having this property with a proof of security under standard assumptions. Finally, we combine puzzles with PKE schemes to enable delayed decryption in applications such as e-auctions and e-voting. For this we first introduce the notion of effort-release PKE (ER-PKE), encompassing the well-known timed-release encryption and encapsulated key escrow techniques. We then present a security model for ER-PKE and a generic construction of ER-PKE complying with our security notion.
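The thesis's own puzzle scheme is not reproduced in the abstract. As a point of comparison only, the classic Rivest-Shamir-Wagner time-lock puzzle, which is also built on modular exponentiation, can be sketched as follows (small hypothetical parameters; the creator's trapdoor φ(n) makes puzzle generation cheap while solving requires t sequential squarings):

```python
def make_puzzle(p, q, t, x):
    # Creator's shortcut: reduce the exponent 2^t modulo phi(n), which
    # requires knowing the factors p and q. Requires gcd(x, n) = 1.
    n = p * q
    phi = (p - 1) * (q - 1)
    e = pow(2, t, phi)
    return n, pow(x, e, n)

def solve_puzzle(n, x, t):
    # Solver has no phi(n) and must perform t sequential modular squarings;
    # this work is inherently non-parallelisable.
    y = x % n
    for _ in range(t):
        y = y * y % n
    return y
```

Verification in this classic scheme is as costly as solving; the thesis's contribution, by contrast, is a modular-exponentiation puzzle verifiable with only a few modular multiplications.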
Abstract:
Robust hashing is an emerging field that can be used to hash certain data types in applications unsuitable for traditional cryptographic hashing methods. Traditional hashing functions have been used extensively for data/message integrity, data/message authentication, efficient file identification and password verification. These applications are possible because the hashing process is compressive, allowing for efficient comparisons in the hash domain, but non-invertible, meaning hashes can be used without revealing the original data. These techniques were developed with deterministic (non-changing) inputs such as files and passwords. For such data types a 1-bit or one-character change can be significant, so the hashing process is sensitive to any change in the input. Unfortunately, there are certain applications where input data are not perfectly deterministic and minor changes cannot be avoided. Digital images and biometric features are two types of data where such changes exist but do not alter the meaning or appearance of the input. For such data types cryptographic hash functions cannot be usefully applied. In light of this, robust hashing has been developed as an alternative to cryptographic hashing and is designed to be robust to minor changes in the input. Although similar in name, robust hashing is fundamentally different from cryptographic hashing. Current robust hashing techniques are not based on cryptographic methods, but instead on pattern recognition techniques. Modern robust hashing algorithms consist of feature extraction followed by a randomization stage that introduces non-invertibility and compression, followed by quantization and binary encoding to produce a binary hash output. In order to preserve the robustness of the extracted features, most randomization methods are linear, and this is detrimental to the security properties required of hash functions. 
Furthermore, the quantization and encoding stages used to binarize real-valued features require the learning of appropriate quantization thresholds. How these thresholds are learnt has an important effect on hashing accuracy, and the mere presence of such thresholds is a source of information leakage that can reduce hashing security. This dissertation outlines a systematic investigation of the quantization and encoding stages of robust hash functions. While existing literature has focused on the importance of the quantization scheme, this research is the first to emphasise the importance of quantizer training for both hashing accuracy and hashing security. The quantizer training process is presented in a statistical framework which allows a theoretical analysis of the effects of quantizer training on hashing performance. This is experimentally verified using a number of baseline robust image hashing algorithms over a large database of real-world images. This dissertation also proposes a new randomization method for robust image hashing based on Higher Order Spectra (HOS) and Radon projections. The method is non-linear, which is an essential requirement for non-invertibility, and is designed to produce features more suited to quantization and encoding. The system can operate without the need for quantizer training, is more easily encoded and displays improved hashing performance when compared to existing robust image hashing algorithms. The dissertation also shows how the HOS method can be adapted to work with biometric features obtained from 2D and 3D face images.
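The quantization and encoding stage discussed above can be sketched minimally. The quantile-based threshold training below is a hypothetical illustration, not a scheme from the dissertation; the learned `thresholds` array is exactly the kind of trained quantity whose leakage the dissertation analyses.

```python
import numpy as np

def train_thresholds(features, bits=2):
    # Learn per-dimension thresholds from training data at equally spaced
    # quantiles; returns an array of shape (2^bits - 1, dims).
    qs = np.linspace(0, 1, 2**bits + 1)[1:-1]
    return np.quantile(features, qs, axis=0)

def binarize(x, thresholds, bits=2):
    # Quantize each real-valued feature to a level, then Gray-code the level
    # so that neighbouring levels differ in a single bit (robustness to
    # small perturbations of the input).
    levels = (x[None, :] >= thresholds).sum(axis=0)
    gray = levels ^ (levels >> 1)
    out = np.unpackbits(gray.astype(np.uint8)[:, None], axis=1)[:, -bits:]
    return out.ravel()
```

A robust hash would apply `binarize` to randomized features; two perceptually similar inputs should then yield bit strings with small Hamming distance.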
Abstract:
Parliamentary committees fulfil several important functions within the Parliament, with one of these being the oversight of various agencies including those that are designed to reduce corruption within the police service and other public sector agencies. The cross-party nature of committees combined with the protections of Parliament make them powerful agencies. Prenzler & Faulkner (2010) suggest that the ideal system for an agency that has oversight of a public sector integrity commission should include monitoring by a parliamentary committee, with an inspector attached to the committee. This occurs in Queensland, New South Wales and Western Australia. There has been very little research conducted on the role of parliamentary committees with oversight responsibilities for public sector integrity agencies. This paper will address this gap by examining the relationship between a parliamentary committee, a parliamentary inspector and a corruption commission. Queensland’s Parliamentary Crime and Misconduct Committee (PCMC/the Committee) and the Parliamentary Crime and Misconduct Commissioner (the Commissioner) provide oversight of the Crime and Misconduct Commission (CMC). By focussing on the PCMC and the Commissioner, the paper will examine the legislative basis for the Committee and Commissioner and their respective roles in providing oversight of the CMC. One key method by which the PCMC provides oversight of the CMC is to conduct and publish a review of the CMC every three years. Additionally, the paper will identify some of the similarities and differences between the PCMC and other committees that operate within the Queensland Parliament. By doing so, the paper will provide insights into the relationships that exist between corruption commissions, parliamentary committees and parliamentary inspectors and demonstrate the important role of the parliamentary committee in preventing instances of public sector corruption.
Abstract:
The Council of Australian Governments (COAG) in 2003 gave in-principle approval to a best-practice report recommending a holistic approach to managing natural disasters in Australia, incorporating a move from a traditional response-centric approach to a greater focus on mitigation, recovery and resilience, with community well-being at the core. Since that time, there have been a range of complementary developments that have supported the COAG-recommended approach. Developments have been administrative, legislative and technological, both in reaction to the COAG initiative and resulting from regular natural disasters. This paper reviews the characteristics of the spatial data that are becoming increasingly available at federal, state and regional jurisdictions with respect to their fitness for purpose for disaster planning and mitigation and for strengthening community resilience. In particular, Queensland foundation spatial data, which are increasingly accessible by the public under the provisions of the Right to Information Act 2009, the Information Privacy Act 2009, and recent open data reform initiatives, are evaluated. The Fitzroy River catchment and floodplain is used as a case study for the review undertaken. The catchment covers an area of 142,545 km², the largest river catchment flowing to the eastern coast of Australia. The Fitzroy River basin experienced extensive flooding during the 2010–2011 Queensland floods. The basin is an area of important economic, environmental and heritage values and contains significant infrastructure critical for the mining and agricultural sectors, the two most important economic sectors for the State of Queensland. Consequently, the spatial datasets for this area play a critical role in disaster management and in protecting critical infrastructure essential for economic and community well-being. 
The foundation spatial datasets are assessed for disaster planning and mitigation purposes using data quality indicators such as resolution, accuracy, integrity, validity and audit trail.
Abstract:
Purpose – The purpose of this paper is to look at auditor obligations to their clients and potentially to third parties such as investors, with a focus on the quality of financial disclosure in an evolving legal framework. Design/methodology/approach – The article outlines and compares established and emerging trends relative to information disclosure and contractual performance in parallel contexts where information asymmetry exists. In particular, this article considers the disclosure regime that has evolved in the insurance industry to address the substantial imbalance in the level of knowledge possessed by the insured in comparison to the prospective insurer. Abductive reasoning is used to identify causal constructs that explain the data pattern from which the theorised potential for judicial revision of the interpretation of “true and fair” in line with “good faith” in legal regulation is derived. Findings – The authors conclude that there is little doubt that a duty of good faith in relation to auditor-company contractual dealings and potentially a broader good faith duty to third parties such as investors in companies may be on the horizon. Originality/value – In the context of stated objectives by organisations such as the International Federation of Accountants to reconcile ethical and technical skills in the wake of the global financial crisis, there is an increased need to rebuild public and investor confidence in the underpinning integrity of financial reporting. This paper offers a perspective on one way to achieve this by recognising the similarities in the information asymmetry relationships in the insurance industry and how the notion of “good faith” in that relationship could be useful in the audit situation.
Abstract:
The main theme of this thesis is to allow the users of cloud services to outsource their data without the need to trust the cloud provider. The method is based on combining existing proof-of-storage schemes with distance-bounding protocols. Specifically, cloud customers will be able to verify the confidentiality, integrity, availability, fairness (or mutual non-repudiation), data freshness, geographic assurance and replication of their stored data directly, without having to rely on the word of the cloud provider.
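The combined protocols are not specified in this abstract. Purely as a toy illustration of the verification idea, spot-checking randomly sampled blocks against stored digests, with a round-trip time budget standing in loosely for distance bounding, might look like:

```python
import hashlib
import random

def store_digests(blocks):
    # Verifier keeps a short digest per block before outsourcing the data.
    return [hashlib.sha256(b).digest() for b in blocks]

def make_challenge(num_blocks, sample_size, seed):
    # Randomly sample the block indices the cloud provider must return.
    return random.Random(seed).sample(range(num_blocks), sample_size)

def verify(digests, indices, returned_blocks, elapsed_s, max_rtt_s=0.05):
    # Integrity: every returned block must match its stored digest.
    ok = all(hashlib.sha256(b).digest() == digests[i]
             for i, b in zip(indices, returned_blocks))
    # Crude timing bound: the reply must arrive within the round-trip budget,
    # loosely bounding how far away the data can be stored.
    return ok and elapsed_s <= max_rtt_s
```

Real proof-of-storage schemes avoid the verifier storing per-block digests (for example via homomorphic tags), and real distance bounding uses many rapid single-bit rounds; both are simplified away here.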
Abstract:
During the last several decades, the quality of natural resources and their services has been exposed to significant degradation from increased urban populations combined with the sprawl of settlements, the development of transportation networks and industrial activities (Dorsey, 2003; Pauleit et al., 2005). As a result of this environmental degradation, a sustainable framework for urban development is required to ensure the resilience of natural resources and ecosystems. Sustainable urban development refers to the management of cities with adequate infrastructure to support the needs of their populations for present and future generations, as well as to maintain the sustainability of their ecosystems (UNEP/IETC, 2002; Yigitcanlar, 2010). One of the important strategic approaches for planning sustainable cities is 'ecological planning'. Ecological planning is a multi-dimensional concept that aims to preserve biodiversity richness and ecosystem productivity through the sustainable management of natural resources (Barnes et al., 2005). As stated by Baldwin (1985, p. 4), ecological planning is the initiation and operation of activities to direct and control the acquisition, transformation, disruption and disposal of resources in a manner capable of sustaining human activities with a minimum disruption of ecosystem processes. Therefore, ecological planning is a powerful method for creating sustainable urban ecosystems. In order to explore the city as an ecosystem and investigate the interaction between the urban ecosystem and human activities, a holistic urban ecosystem sustainability assessment approach is required. Urban ecosystem sustainability assessment serves as a tool that helps policy- and decision-makers improve their actions towards sustainable urban development. 
There are several methods used in urban ecosystem sustainability assessment, among which sustainability indicators and composite indices are the most commonly used tools for assessing progress towards sustainable land use and urban management. Currently, a variety of composite indices are available to measure sustainability at the local, national and international levels. However, the main conclusion drawn from the literature review is that they are too broad to be applied in assessing local and micro-level sustainability, and no benchmark values exist for most of the indicators due to limited data availability and non-comparable data across countries. Mayer (2008, p. 280) supports this by stating that "as different as the indices may seem, many of them incorporate the same underlying data because of the small number of available sustainability datasets". Mori and Christodoulou (2011) also argue that this relative evaluation and comparison leads to biased assessments, as data only exist for some entities, which also means excluding many nations from evaluation and comparison. Thus, there is a need to develop an accurate and comprehensive micro-level urban ecosystem sustainability assessment method. In order to develop such a model, it is practical to adopt an approach that uses indicators to collect data, designates certain threshold values or ranges, performs a comparative sustainability assessment via indices at the micro-level, and aggregates these assessment findings to the local level. Through this approach and model, it is possible to produce sufficient and reliable data to enable comparison at the local level, and to provide useful results to inform local planning, conservation and development decision-making in order to secure sustainable ecosystems and urban futures. 
To advance research in this area, this study investigated the environmental impacts of an existing urban context by using a composite index, with the aim of identifying the interaction between urban ecosystems and human activities in the context of environmental sustainability. In this respect, the study developed a new comprehensive urban ecosystem sustainability assessment tool entitled the 'Micro-level Urban-ecosystem Sustainability IndeX' (MUSIX). The MUSIX model is an indicator-based indexing model that investigates the factors affecting urban sustainability in a local context. The model outputs provide local and micro-level sustainability reporting guidance to support policy-making on environmental issues. A multi-method research approach, based on both quantitative and qualitative analysis, was employed in the construction of the MUSIX model. First, qualitative research was conducted through an interpretive and critical literature review to develop the theoretical framework and select the indicators. Then, quantitative research was conducted through statistical and spatial analyses for data collection, processing and model application. The MUSIX model was tested in four pilot study sites selected from the Gold Coast City, Queensland, Australia. The model results detected the sustainability performance of current urban settings with reference to six main issues of urban development: (1) hydrology, (2) ecology, (3) pollution, (4) location, (5) design, and (6) efficiency. For each category, a set of core indicators was assigned, intended to: (1) benchmark the current situation, strengths and weaknesses; (2) evaluate the efficiency of implemented plans; and (3) measure progress towards sustainable development. 
While the indicator set of the model provided specific information about the environmental impacts in the area at the parcel scale, the composite index score provided general information about the sustainability of the area at the neighbourhood scale. Finally, in light of the model findings, integrated ecological planning strategies were developed to guide the preparation and assessment of development and local area plans in conjunction with the Gold Coast Planning Scheme, which establishes regulatory provisions to achieve ecological sustainability through the formulation of place codes, development codes, constraint codes and other assessment criteria that provide guidance for best-practice development solutions. These strategies can be summarised as follows:
• Establishing hydrological conservation through sustainable stormwater management, in order to preserve the Earth's water cycle and aquatic ecosystems;
• Providing ecological conservation through sustainable ecosystem management, in order to protect biological diversity and maintain the integrity of natural ecosystems;
• Improving environmental quality through pollution prevention regulations and policies, in order to promote high-quality water resources, clean air and enhanced ecosystem health;
• Creating sustainable mobility and accessibility through better local services and walkable neighbourhoods, in order to promote safe environments and healthy communities;
• Sustainable design of the urban environment through climate-responsive design, in order to increase the efficient use of solar energy and provide thermal comfort; and
• Use of renewable resources through creating efficient communities, in order to provide long-term management of natural resources for the sustainability of future generations.
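The aggregation from indicator values to a composite index score can be sketched generically. The normalisation against benchmark ranges and the equal weighting below are hypothetical illustrations, not the actual MUSIX formulation:

```python
def composite_score(indicators, benchmarks, weights=None):
    # Normalise each indicator against its benchmark range to [0, 1],
    # then aggregate with weights to a 0-100 composite index score.
    norm = []
    for value, (low, high) in zip(indicators, benchmarks):
        x = (value - low) / (high - low)
        norm.append(min(max(x, 0.0), 1.0))   # clamp out-of-range values
    if weights is None:
        weights = [1.0 / len(norm)] * len(norm)
    return 100.0 * sum(w * x for w, x in zip(weights, norm))
```

In an indicator-based model of this kind, parcel-scale scores can then be averaged upward to give a neighbourhood-scale figure, mirroring the parcel-to-neighbourhood aggregation described above.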
Abstract:
Advances in Information and Communication Technologies have the potential to improve many facets of modern healthcare service delivery. The implementation of electronic health records systems is a critical part of an eHealth system. Despite the potential gains, there are several obstacles that limit the wider development of electronic health record systems. Among these are the perceived threats to the security and privacy of patients’ health data, and a widely held belief that these cannot be adequately addressed. We hypothesise that the major concerns regarding eHealth security and privacy cannot be overcome through the implementation of technology alone. Human dimensions must be considered when analysing the provision of the three fundamental information security goals: confidentiality, integrity and availability. A sociotechnical analysis to establish the information security and privacy requirements when designing and developing a given eHealth system is important and timely. A framework that accommodates consideration of the legislative requirements and human perspectives in addition to the technological measures is useful in developing a measurable and accountable eHealth system. Successful implementation of this approach would enable the possibilities, practicalities and sustainabilities of proposed eHealth systems to be realised.
Abstract:
Reduced SHOX gene expression has been demonstrated to be associated with all skeletal abnormalities in Turner syndrome other than scoliosis (and kyphosis). There is evidence to suggest that Turner syndrome scoliosis is clinically and radiologically similar to idiopathic scoliosis, although the phenotypes are dissimilar. This pilot gene expression study used relative quantitative real-time PCR (qRT-PCR) of the SHOX (short stature homeobox) gene to determine whether it is expressed in vertebral body growth plates in idiopathic and congenital scoliosis. After vertebral growth plate dissection, tissue was examined histologically, and RNA was extracted and its integrity assessed using a Bio-Spec Mini, a NanoDrop ND-1000 spectrophotometer and standard denaturing gel electrophoresis. Following cDNA synthesis, gene-specific optimization in a Corbett RotorGene 6000 real-time cycler was followed by qRT-PCR of vertebral tissue. Histological examination of vertebral samples confirmed that only growth plate was analyzed for gene expression. Cycling and melt curves were resolved in triplicate for all samples. SHOX abundance was demonstrated in congenital and idiopathic scoliosis vertebral body growth plates. SHOX expression was 11-fold greater in idiopathic compared to congenital (n = 3) scoliosis (p = 0.027). This study confirmed that SHOX is expressed in vertebral body growth plates, which implies that its expression may also be associated with the scoliosis (and kyphosis) of Turner syndrome. SHOX expression is reduced in Turner syndrome (short stature); in this study, increased SHOX expression was demonstrated in idiopathic scoliosis (tall stature) and in congenital scoliosis.
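The abstract does not state the quantification formula; relative expression in qRT-PCR is commonly computed with the 2^(-ΔΔCt) method (Livak and Schmittgen), sketched here with hypothetical Ct values rather than data from this study:

```python
def fold_change(ct_target_a, ct_ref_a, ct_target_b, ct_ref_b):
    # 2^(-ddCt): expression of the target gene in group A relative to group B,
    # each first normalised to a reference (housekeeping) gene Ct.
    ddct = (ct_target_a - ct_ref_a) - (ct_target_b - ct_ref_b)
    return 2.0 ** (-ddct)
```

For instance, hypothetical Ct values yielding a ΔΔCt of -3.5 correspond to roughly an 11-fold difference, the same order of magnitude as the idiopathic-versus-congenital comparison reported above.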