Abstract:
Traditionally, the fire resistance rating of LSF wall systems has been based on approximate prescriptive methods developed using limited fire tests. Therefore, a detailed research study into the performance of load-bearing LSF wall systems under standard fire conditions was undertaken to develop improved fire design rules. It used the extensive fire performance results of eight different LSF wall systems from a series of full-scale fire tests and numerical studies for this purpose. The use of previous fire design rules developed for LSF walls subjected to non-uniform elevated temperature distributions, based on the AISI design manual and Eurocode 3 Parts 1.2 and 1.3, was investigated first. New simplified fire design rules based on AS/NZS 4600, the North American Specification and Eurocode 3 Part 1.3 were then proposed in this study, with suitable allowances for the interaction effects of compression and bending actions. The importance of considering thermal bowing, magnified thermal bowing and neutral axis shift in the fire design was also investigated. A spreadsheet-based design tool was developed based on the new design rules to predict the failure load ratio versus time and temperature curves for varying LSF wall configurations. The accuracy of the proposed design rules was verified using the test and FEA results for different wall configurations, steel grades, thicknesses and load ratios. This paper presents the details and results of this study, including the improved fire design rules for predicting the load capacity of LSF wall studs and the failure times of LSF walls under standard fire conditions.
Abstract:
Recent fire research into the behaviour of light gauge steel frame (LSF) wall systems has developed fire design rules based on the Australian and European cold-formed steel design standards, AS/NZS 4600 and Eurocode 3 Part 1.3. However, these design rules are complex, since the LSF wall studs are subjected to non-uniform elevated temperature distributions when the walls are exposed to fire from one side. Therefore this paper proposes an alternative design method for routine predictions of the fire resistance rating of LSF walls. In this method, suitable equations are recommended first to predict the idealised stud time-temperature profiles of eight different LSF wall configurations subject to standard fire conditions, based on full-scale fire test results. A new set of equations was then proposed to find the critical hot flange (failure) temperature for a given load ratio for the same LSF wall configurations with varying steel grades and thicknesses. These equations were developed based on detailed finite element analyses that predicted the axial compression capacities and failure times of LSF wall studs subject to non-uniform temperature distributions with varying steel grades and thicknesses. This paper proposes a simple design method in which the two sets of equations developed for time-temperature profiles and critical hot flange temperatures are used to find the failure times of LSF walls. The proposed method was verified by comparing its predictions with the results from full-scale fire tests and finite element analyses. This paper presents the details of this study, including the finite element models of LSF wall studs, the results from relevant fire tests and finite element analyses, and the proposed equations.
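The two-step method described above lends itself to a simple calculation: evaluate the critical hot flange temperature for the applied load ratio, then invert the idealised stud time-temperature profile at that temperature to obtain the failure time. The sketch below is illustrative only: it assumes a hypothetical exponential time-temperature profile and a hypothetical linear critical-temperature equation, whereas the actual fitted coefficients in the paper vary with wall configuration, steel grade and thickness.

```python
import math

def hot_flange_temperature(t_min, T_max=800.0, T0=20.0, k=0.03):
    """Hypothetical idealised stud hot-flange time-temperature profile
    (exponential rise from ambient T0 toward T_max); the paper fits such
    curves per wall configuration from full-scale fire test results."""
    return T_max - (T_max - T0) * math.exp(-k * t_min)

def critical_temperature(load_ratio, a=700.0, b=500.0):
    """Hypothetical linear critical hot-flange temperature versus load
    ratio; real coefficients depend on configuration, grade and thickness."""
    return a - b * load_ratio

def failure_time(load_ratio, T_max=800.0, T0=20.0, k=0.03):
    """Failure time in minutes: invert the time-temperature profile at
    the critical hot flange temperature for the given load ratio."""
    T_cr = critical_temperature(load_ratio)
    # Solve T_cr = T_max - (T_max - T0) * exp(-k * t) for t.
    return -math.log((T_max - T_cr) / (T_max - T0)) / k
```

As expected, a higher load ratio gives a lower critical temperature and hence an earlier failure time.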
Abstract:
Current design rules for determining the member strength of cold-formed steel columns are based on the effective length of the member and a single column capacity curve for both pin-ended and fixed-ended columns. This research has reviewed the use of AS/NZS 4600 design rules for their accuracy in determining the member compression capacities of slender cold-formed steel columns using detailed numerical studies. It has shown that AS/NZS 4600 design rules accurately predicted the capacities of pinned and fixed ended columns undergoing flexural buckling. However, for fixed ended columns undergoing flexural-torsional buckling, it was found that current AS/NZS 4600 design rules did not include the beneficial effect of warping fixity. Therefore AS/NZS 4600 design rules were found to be excessively conservative and hence uneconomical in predicting the failure loads obtained from tests and finite element analyses of fixed-ended lipped channel columns. Based on this finding, suitable recommendations have been made to modify the current AS/NZS 4600 design rules to more accurately reflect the results obtained from the numerical and experimental studies conducted in this research. This paper presents the details of this research on cold-formed steel columns and the results.
Abstract:
Robust hashing is an emerging field that can be used to hash certain data types in applications unsuitable for traditional cryptographic hashing methods. Traditional hashing functions have been used extensively for data/message integrity, data/message authentication, efficient file identification and password verification. These applications are possible because the hashing process is compressive, allowing efficient comparisons in the hash domain, but non-invertible, meaning hashes can be used without revealing the original data. These techniques were developed for deterministic (non-changing) inputs such as files and passwords. For such data types a 1-bit or one-character change can be significant; as a result, the hashing process is sensitive to any change in the input. Unfortunately, there are certain applications where input data are not perfectly deterministic and minor changes cannot be avoided. Digital images and biometric features are two types of data where such changes exist but do not alter the meaning or appearance of the input. For such data types cryptographic hash functions cannot be usefully applied. In light of this, robust hashing has been developed as an alternative to cryptographic hashing and is designed to be robust to minor changes in the input. Although similar in name, robust hashing is fundamentally different from cryptographic hashing. Current robust hashing techniques are not based on cryptographic methods, but instead on pattern recognition techniques. Modern robust hashing algorithms consist of feature extraction followed by a randomization stage that introduces non-invertibility and compression, followed by quantization and binary encoding to produce a binary hash output. In order to preserve the robustness of the extracted features, most randomization methods are linear, and this is detrimental to the security aspects required of hash functions.
Furthermore, the quantization and encoding stages used to binarize real-valued features require the learning of appropriate quantization thresholds. How these thresholds are learnt has an important effect on hashing accuracy, and the mere presence of such thresholds is a source of information leakage that can reduce hashing security. This dissertation outlines a systematic investigation of the quantization and encoding stages of robust hash functions. While existing literature has focused on the importance of the quantization scheme, this research is the first to emphasise the importance of quantizer training on both hashing accuracy and hashing security. The quantizer training process is presented in a statistical framework which allows a theoretical analysis of the effects of quantizer training on hashing performance. This is experimentally verified using a number of baseline robust image hashing algorithms over a large database of real-world images. This dissertation also proposes a new randomization method for robust image hashing based on Higher Order Spectra (HOS) and Radon projections. The method is non-linear, and this is an essential requirement for non-invertibility. The method is also designed to produce features more suited for quantization and encoding. The system can operate without the need for quantizer training, is more easily encoded and displays improved hashing performance when compared to existing robust image hashing algorithms. The dissertation also shows how the HOS method can be adapted to work with biometric features obtained from 2D and 3D face images.
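The generic pipeline described above (feature extraction, key-driven randomization, then quantization and binary encoding) can be sketched in a few lines. This is an illustrative toy, not any published scheme from the dissertation: the block-mean features, the Gaussian random projection and the median threshold are all assumptions chosen for simplicity.

```python
import numpy as np

def robust_hash(image, key=0, n_bits=64):
    """Sketch of a generic robust-hash pipeline: feature extraction,
    key-driven random projection (the linear randomization stage), then
    threshold quantization to a binary hash. Illustrative only."""
    rng = np.random.default_rng(key)
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    # 1. Feature extraction: coarse 8x8 block means are a simple feature
    #    that is robust to minor pixel-level changes.
    blocks = img[: h - h % 8, : w - w % 8].reshape(h // 8, 8, w // 8, 8)
    features = blocks.mean(axis=(1, 3)).ravel()
    # 2. Randomization: secret random projection. It is compressive and
    #    hard to invert without the key, but linear -- the security
    #    weakness noted in the text.
    P = rng.standard_normal((n_bits, features.size))
    projected = P @ features
    # 3. Quantization/encoding: binarize against the median threshold --
    #    the threshold itself is the information-leakage source discussed.
    return (projected > np.median(projected)).astype(np.uint8)
```

The same image and key always give the same hash; a different key gives a different hash, which is what makes the output usable only by the key holder.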
Abstract:
In this paper we use the algorithm SeqSLAM to address the question of how little, and of what quality, visual information is needed to localize along a familiar route. We conduct a comprehensive investigation of place recognition performance on seven datasets while varying image resolution (primarily 1 to 512 pixel images), pixel bit depth, field of view, motion blur, image compression and matching sequence length. Results confirm that place recognition using single images or short image sequences is poor, but improves to match or exceed current benchmarks as the matching sequence length increases. We then present place recognition results from two experiments where low-quality imagery is directly caused by sensor limitations; in one, place recognition is achieved along an unlit mountain road by using noisy, long-exposure blurred images, and in the other, two single-pixel light sensors are used to localize in an indoor environment. We also show failure modes caused by pose variance and sequence aliasing, and discuss ways in which they may be overcome. By showing how place recognition along a route is feasible even with severely degraded image sequences, we hope to provoke a re-examination of how we develop and test future localization and mapping systems.
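The core idea of sequence-based matching can be sketched minimally: build a difference matrix between query and database images, then score each candidate location by the summed difference over an aligned sequence rather than a single frame. The sketch below assumes unit velocity and omits SeqSLAM's local contrast enhancement and velocity search; it is a simplified illustration, not the published algorithm.

```python
import numpy as np

def sequence_match(query, database, seq_len=5):
    """Minimal SeqSLAM-style matcher sketch (assumes unit velocity, no
    contrast enhancement). query, database: (n_images, n_pixels) arrays.
    Returns the database index where the best-matching sequence starts."""
    q, db = np.asarray(query, float), np.asarray(database, float)
    # Difference matrix: D[i, j] = mean abs difference between
    # query image i and database image j.
    D = np.abs(q[:, None, :] - db[None, :, :]).mean(axis=2)
    n_q, n_db = D.shape
    best, best_score = -1, np.inf
    # Slide a diagonal (velocity = 1) window of seq_len frames: summing
    # along the sequence is what rescues matching from poor single images.
    for j in range(n_db - seq_len + 1):
        score = sum(D[n_q - seq_len + k, j + k] for k in range(seq_len))
        if score < best_score:
            best, best_score = j, score
    return best
```

Even with heavily degraded individual frames, the summed sequence score tends to stay lowest at the true location, which is the effect the paper quantifies.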
Abstract:
This paper deals with the failure of high-adhesive, low-compressive-strength, thin-layered polymer mortar joints in masonry through contact modelling in a finite element framework. Failure due to combined shear, tensile and compressive stresses is considered through a constitutive damaging contact model that incorporates traction–separation as a function of displacement discontinuity. The modelling method is verified using single and multiple contact analyses of thin mortar layered masonry specimens under shear, tensile and compressive stresses and their combinations. Using this verified method, the failure of thin mortar layered masonry under a range of shear to tension ratios and shear to compression ratios has been examined. Finally, this model is applied to thin bed masonry wallettes to study their behaviour under biaxial tension–tension and compression–tension loadings perpendicular and parallel to the bed joints.
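A damaging traction–separation contact law of the general kind mentioned above is often written in bilinear form: linear elasticity up to a damage-initiation separation, then linear softening to complete failure. The sketch below is a generic illustration of that form, not the paper's constitutive model; all parameter values are hypothetical.

```python
def traction(delta, delta0=0.01, delta_f=0.1, k0=1000.0):
    """Illustrative bilinear traction-separation law for a damaging
    cohesive contact: linear elastic up to separation delta0, then
    linear softening to complete failure at delta_f. The damage
    variable d grows from 0 at delta0 to 1 at delta_f.
    Units and parameter values are hypothetical."""
    if delta <= 0:
        return 0.0
    if delta <= delta0:
        return k0 * delta                  # undamaged elastic branch
    if delta < delta_f:
        d = (delta_f / delta) * (delta - delta0) / (delta_f - delta0)
        return (1.0 - d) * k0 * delta      # softening (damaged) branch
    return 0.0                              # fully damaged, no traction
```

The traction peaks at k0 * delta0 when damage initiates and decays linearly to zero, so the area under the curve represents the fracture energy dissipated by the joint.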
Abstract:
Finite element modelling of bone fracture fixation systems allows computational investigation of the deformation response of the bone to load. Once validated, these models can be easily adapted to explore changes in the design or configuration of a fixator. The deformation of the tissue within the fracture gap determines its healing and is often summarised as the stiffness of the construct. FE models capable of reproducing this behaviour would provide valuable insight into the healing potential of different fixation systems. Current model validation techniques lack depth in 6D load and deformation measurements. Other aspects of FE model creation, such as the definition of interfaces between components, have also not been explored. This project investigated the mechanical testing and FE modelling of a bone–plate construct for the determination of stiffness. In-depth 6D measurement and analysis of the generated forces, moments and movements showed large out-of-plane behaviours which had not previously been characterised. Stiffness calculated from the interfragmentary movement was found to be an unsuitable summary parameter, as the error propagation is too large. Current FE modelling techniques were applied in compression and torsion mimicking the experimental setup. Compressive stiffness was well replicated, though torsional stiffness was not. The out-of-plane behaviours prevalent in the experimental work were not replicated in the model. The interfaces between the components were investigated experimentally and through modification of the FE model. Incorporation of the interface modelling techniques into the full construct models had no effect in compression but did act to reduce torsional stiffness, bringing it closer to that of the experiment. The interface definitions had no effect on out-of-plane behaviours, which were still not replicated.
Neither current nor novel FE modelling techniques were able to replicate the out-of-plane behaviours evident in the experimental work. New techniques for modelling loads and boundary conditions need to be developed to mimic the effects of the entire experimental system.
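The error-propagation problem with stiffness derived from interfragmentary movement can be made concrete with first-order Gaussian error propagation: the relative error of k = F/d is the quadrature sum of the relative errors of F and d, so small displacements inflate the stiffness error. The numbers below are illustrative, not data from the study.

```python
import math

def stiffness_with_error(force, disp, sigma_force, sigma_disp):
    """Stiffness k = F/d with first-order Gaussian error propagation:
    rel_err(k) = sqrt(rel_err(F)^2 + rel_err(d)^2). Small measured
    displacements therefore blow up the uncertainty on k.
    Inputs and outputs are illustrative values, not study data."""
    k = force / disp
    rel = math.sqrt((sigma_force / force) ** 2 + (sigma_disp / disp) ** 2)
    return k, k * rel
```

For example, with a hypothetical ±0.05 mm displacement uncertainty, deriving stiffness from a 0.2 mm movement carries a ±25 % error, while the same uncertainty on a 1.0 mm movement gives only ±5 %, which is why the small interfragmentary movements make stiffness an unreliable summary parameter.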
Abstract:
In South and Southeast Asia, postharvest loss causes material waste of up to 66% in fruits and vegetables, 30% in oilseeds and pulses, and 49% in roots and tubers. The efficiency of postharvest equipment directly affects industrial-scale food production. To enhance current processing methods and devices, it is essential to analyze the responses of food materials under loading operations. Food materials undergo different types of mechanical loading during postharvest and processing stages. Therefore, it is important to determine the properties of these materials under different types of loads, such as tensile, compression, and indentation. This study presents a comprehensive analysis of the available literature on the tensile properties of different food samples. The aim of this review was to categorize the available methods of tensile testing for agricultural crops and food materials in order to identify an appropriate sample size and tensile test method. The results were then applied to perform tensile tests on pumpkin flesh and peel samples, in particular on arc-sided samples at a constant loading rate of 20 mm min⁻¹. The results showed the maximum tensile stress of the pumpkin flesh and peel samples to be 0.535 and 1.45 MPa, respectively. The elastic modulus of the flesh and peel samples was 6.82 and 25.2 MPa, respectively, while the failure modulus values were 14.51 and 30.88 MPa, respectively. The results of the tensile tests were also used to develop a finite element model of the mechanical peeling of tough-skinned vegetables. However, further investigation is needed into the effects of deformation rate, moisture content, and tissue texture on the tensile responses of food materials.
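The quantities reported above come from standard reductions of tensile test data: engineering stress is force over cross-sectional area, strain is elongation over gauge length, and the elastic modulus is the initial slope of the stress-strain curve. The sketch below illustrates this reduction with synthetic data; the geometry values and the choice of fitting the first third of the curve are assumptions, not the review's protocol.

```python
import numpy as np

def tensile_properties(force_N, elong_mm, area_mm2, gauge_mm):
    """Reduce raw tensile test data to the maximum engineering tensile
    stress (MPa) and an elastic modulus (MPa) from the initial slope.
    The fraction of the curve used for the slope fit is an assumption."""
    stress = np.asarray(force_N, float) / area_mm2   # MPa (N/mm^2)
    strain = np.asarray(elong_mm, float) / gauge_mm  # dimensionless
    max_stress = stress.max()
    # Elastic modulus: least-squares slope over the initial linear region
    # (here, the first third of the recorded points).
    n = max(2, len(stress) // 3)
    modulus = np.polyfit(strain[:n], stress[:n], 1)[0]
    return max_stress, modulus
```

On real pumpkin data the curve is nonlinear near failure, so only the initial region is used for the modulus fit.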
Abstract:
The details of an application of the finite strip method to the elastic buckling analysis of thin-walled structures with various boundary conditions and subjected to single or combined loadings of longitudinal compression, transverse compression, bending and shear are presented. The presence of shear loading is accounted for by modifying the displacement functions which are commonly used in cases when shear is absent. A program based on the finite strip method was used to obtain the elastic buckling stress, buckling plot and buckling mode of thin-walled structures and some of these results are presented.
Abstract:
This thesis is aimed at further understanding the uppermost lipid-filled membranous layer (i.e. the surface amorphous layer (SAL)) of articular cartilage and at developing a scientific framework for re-introducing lipids onto the surface of lipid-depleted articular cartilage (i.e. "resurfacing"). The outcome will potentially contribute to knowledge that will facilitate the repair of the articular surface of cartilage where degradation is limited to the loss of the lipids of the SAL only. The surface amorphous layer is of utmost importance to the effective load-spreading, lubrication, and semipermeability (which controls fluid management, nutrient transport and waste removal) of articular cartilage in mammalian joints. However, because this uppermost layer of cartilage is often in contact during physiological function, it is prone to wear and tear, and thus is the site for damage initiation that can lead to the early stages of joint conditions such as osteoarthritis and related conditions that cause pain and discomfort, leading to a low quality of life in patients. It is therefore imperative to conduct a study which offers insight into remedying this problem. It is hypothesized that restoration (resurfacing) of the surface amorphous layer can be achieved by re-introducing synthetic surface-active phospholipids (SAPL) into the joint space. This hypothesis was tested in this thesis by exposing cartilage samples whose surface lipids had been depleted to individual and mixed synthetic saturated and unsaturated phospholipids. The surfaces of normal, delipidized, and relipidized samples of cartilage were characterized for their structural integrity and functionality using atomic force microscopy (AFM), confocal microscopy, Raman spectroscopy, magnetic resonance imaging (MRI) with image processing in the MATLAB® environment, and mechanical loading experiments.
The results from AFM imaging, confocal microscopy, and Raman spectroscopy revealed the successful deposition of a new surface layer on delipidized cartilage when incubated in synthetic phospholipids. The relipidization resulted in a significant improvement in the surface nanostructure of the artificially degraded cartilage, with the complete SAPL mixture providing better outcomes than the single SAPL components (palmitoyl-oleoyl-phosphatidylcholine, POPC, and dipalmitoyl-phosphatidylcholine, DPPC). MRI analysis revealed that the surface created with the complete mixture of synthetic lipids was capable of providing semipermeability to the surface layer of the treated cartilage samples comparable to the normal intact surface. Furthermore, deformation energy analysis revealed that the treated samples were capable of delivering the elastic properties required for load bearing and recovery of the tissue relative to the normal intact samples, with the samples incubated in the complete lipid mixture coming closest to normal. In conclusion, this thesis has established that it is possible to deposit a potentially viable layer on the surface of cartilage following degradation/lipid loss through incubation in synthetic lipid solutions. However, further studies will be required to advance the ideas developed in this thesis towards the development of synthetic lipid-based injections/drugs for the treatment of osteoarthritis and other related joint conditions.
Abstract:
Scaffolding is an essential issue in tissue engineering, and scaffolds should satisfy certain essential criteria: biocompatibility, high porosity, and good pore interconnectivity to facilitate cell migration and fluid diffusion. In this work, a modified solvent casting–particulate leaching-out method is presented to produce scaffolds with spherical and interconnected pores. Sugar particles (200–300 µm and 300–500 µm) were poured through a horizontal Meker burner flame and collected below the flame. While crossing the high-temperature zone, the particles melted and adopted a spherical shape. The spherical particles were compressed in a plastic mold. Then, poly-L-lactic acid solution was cast into the sugar assembly. After solvent evaporation, the sugar was removed by immersing the structure in distilled water for 3 days. The obtained scaffolds presented highly spherical interconnected pores, with interconnection pathways from 10 to 100 µm. Pore interconnection was obtained without any additional step. Compression tests were carried out to evaluate the scaffolds' mechanical performance. Moreover, rabbit bone marrow mesenchymal stem cells were found to adhere and proliferate in vitro in the scaffold over 21 days. This technique produced scaffolds with highly spherical and interconnected pores without the use of additional organic solvents to leach out the porogen.
Abstract:
Nanowires (NWs) have attracted broad interest and application owing to their remarkable mechanical, optical, electrical, thermal and other properties. To unlock the revolutionary characteristics of NWs, a considerable body of experimental and theoretical work has been conducted. However, due to the extremely small dimensions of NWs, in situ experiments involve inherent complexities and huge challenges in application and manipulation. For the same reason, the presence of defects appears as one of the most dominant factors in determining their properties. Hence, given these experimental limitations and the need to investigate the influence of different defects, numerical simulation and modelling become increasingly important in characterizing the properties of NWs. It has been noted that, despite the number of numerical studies of NWs, significant work still lies ahead in terms of problem formulation, interpretation of results, identification and delineation of deformation mechanisms, and constitutive characterization of behaviour. Therefore, the primary aim of this study was to characterize both perfect and defected metal NWs. Large-scale molecular dynamics (MD) simulations were utilized to assess the mechanical properties and deformation mechanisms of different NWs under diverse loading conditions including tension, compression, bending, vibration and torsion. The target samples include different FCC metal NWs (e.g., Cu, Ag and Au NWs), which were either in a perfect crystal structure or constructed with different defects (e.g. pre-existing surface/internal defects, grain/twin boundaries). It has been found from the tensile deformation that Young's modulus was insensitive to different styles of pre-existing defects, whereas the yield strength showed a considerable reduction.
The deformation mechanisms were found to be greatly influenced by the presence of defects, i.e., different defects acted as dislocation sources, and a rich variety of deformation mechanisms was triggered. Similar conclusions were also obtained from the compressive deformation, i.e., Young's modulus was insensitive to different defects, but the critical stress showed an evident reduction. Results from the bending deformation revealed that the current modified beam models, incorporating the surface effect or both the surface effect and the axial extension effect, still exhibit a certain inaccuracy, especially for NWs with ultra-small cross-sectional size. Additionally, the flexural rigidity of the NW was found to be insensitive to different pre-existing defects, while the yield strength showed an evident decrease. For the resonance study, the first-order natural frequency of the NW with pre-existing surface defects was almost the same as that of the perfect NW, whereas a lower first-order natural frequency and a significantly degraded quality factor were observed for NWs with grain boundaries. Most importantly, the <110> FCC NWs were found to exhibit a novel beat phenomenon driven by a single actuation, which resulted from the asymmetry of the lattice spacing in the (110) plane of the NW cross-section, and is expected to exert crucial impacts on in situ nanomechanical measurements. In particular, <110> Ag NWs with rhombic, truncated-rhombic, and triangular cross-sections were found to naturally possess two first-mode natural frequencies, which are envisioned to enable NEMS applications that operate in a non-planar regime. The torsion results revealed that the torsional rigidity of the NW was insensitive to the presence of pre-existing defects and twin boundaries, but was evidently reduced by grain boundaries. Meanwhile, the critical angle decreased considerably for defected NWs.
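The beat phenomenon mentioned above follows from elementary vibration theory: when a single actuation excites two modes with close natural frequencies, the superposed displacement is amplitude-modulated at the difference frequency. The sketch below illustrates this with arbitrary frequencies, not the actual NW values from the study.

```python
import numpy as np

def beat_signal(t, f1, f2, a=1.0):
    """Superpose two equal-amplitude modes with close natural frequencies
    f1 and f2 (Hz): the summed displacement is amplitude-modulated at the
    beat frequency |f2 - f1|. Frequencies here are illustrative."""
    return a * np.cos(2 * np.pi * f1 * t) + a * np.cos(2 * np.pi * f2 * t)
```

By the product-to-sum identity the signal equals 2a cos(2 pi (f1+f2)/2 t) cos(2 pi (f1-f2)/2 t): a carrier at the mean frequency inside an envelope whose nulls repeat every 1/|f2-f1| seconds, which is why two close first-mode frequencies show up as beats under a single actuation.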
This study has provided a comprehensive and deep investigation of the mechanical properties and deformation mechanisms of perfect and defected NWs, which will greatly extend and enhance the existing knowledge and understanding of the properties and performance of NWs, and eventually benefit the realization of their full potential applications. All the MD models and theoretical analysis techniques established for the target NWs in this research are also applicable to future studies on other kinds of NWs. It has been suggested that MD simulation is an effective and excellent tool, not only for the characterization of the properties of NWs, but also for the prediction of novel or unexpected properties.
Abstract:
Sandwich panels comprising steel facings and a polystyrene foam core are increasingly used as roof and wall claddings in buildings in Australia. When they are subjected to loads causing bending and/or axial compression, the steel plate elements of their profiled facing are susceptible to local buckling. However, when compared to panels with no foam core, they demonstrate significantly improved local buckling behaviour because the steel plates are supported by the foam. In order to quantify such improvements and to validate the use of available design buckling stress formulae, an investigation using finite element analyses and laboratory experiments was carried out on foam-supported steel plates of the varying yield stresses and thicknesses commonly used in Australia. This paper presents the details of this investigation, the buckling results and their comparison with available design buckling formulae.
Abstract:
iTRAQ (isobaric tags for relative or absolute quantitation) is a mass spectrometry technology that allows quantitative comparison of protein abundance by measuring the peak intensities of reporter ions released from iTRAQ-tagged peptides by fragmentation during MS/MS. However, current data analysis techniques for iTRAQ struggle to report reliable relative protein abundance estimates and suffer from problems of precision and accuracy. The precision of the data is affected by variance heterogeneity: low-signal data have higher relative variability; however, low-abundance peptides dominate data sets. Accuracy is compromised as ratios are compressed toward 1, leading to underestimation of the ratio. This study investigated both issues and proposed a methodology that combines the peptide measurements to give a robust protein estimate even when the data for the protein are sparse or at low intensity. Our data indicated that ratio compression arises from contamination during precursor ion selection, which occurs at a consistent proportion within an experiment and thus results in a linear relationship between expected and observed ratios. We proposed that a correction factor can be calculated from spiked proteins at known ratios. We then demonstrated that variance heterogeneity is present in iTRAQ data sets irrespective of the analytical packages, LC-MS/MS instrumentation, and iTRAQ labeling kit (4-plex or 8-plex) used. We proposed using an additive-multiplicative error model for peak intensities in MS/MS quantitation and demonstrated that a variance-stabilizing normalization is able to address the error structure and stabilize the variance across the entire intensity range. The resulting uniform variance structure simplifies the downstream analysis.
Heterogeneity of variance consistent with an additive-multiplicative model has been reported in other MS-based quantitation including fields outside of proteomics; consequently the variance-stabilizing normalization methodology has the potential to increase the capabilities of MS in quantitation across diverse areas of biology and chemistry.
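The additive-multiplicative error model and its variance-stabilizing transform can be demonstrated in a few lines. The generalized-log (arsinh-type) transform behaves like log(y) at high intensity and stays linear near zero; in practice its parameters are estimated from the data (as in the vsn approach), whereas here they are fixed by hand, so this is a sketch of the idea rather than the published methodology.

```python
import numpy as np

def glog(y, a=1.0, c=0.0):
    """Generalized-log (arsinh-type) variance-stabilizing transform for
    an additive-multiplicative error model. Parameters a and c would
    normally be estimated from the data; fixed values are an assumption."""
    return np.arcsinh((y - c) / a)

def simulate_intensities(mu, sigma_mult=0.1, sigma_add=1.0, n=2000, seed=0):
    """Additive-multiplicative error model: y = mu * exp(eta) + eps,
    with eta the multiplicative (log-normal) term and eps the additive
    noise that dominates at low intensity."""
    rng = np.random.default_rng(seed)
    eta = rng.normal(0.0, sigma_mult, n)
    eps = rng.normal(0.0, sigma_add, n)
    return mu * np.exp(eta) + eps
```

On the raw scale the standard deviation grows roughly in proportion to the mean intensity; after the glog transform (with a chosen near sigma_add/sigma_mult) the spread is approximately constant across low and high intensities, which is the uniform variance structure that simplifies downstream analysis.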
Abstract:
Articular cartilage is a load-bearing tissue that consists of proteoglycan macromolecules entrapped between collagen fibrils in a three-dimensional architecture. To date, the search for mathematical models to represent the biomechanics of such a system continues without providing a fitting description of its functional response to load at the micro-scale level. We believe that the major complication arose when cartilage was first envisaged as a multiphasic model with distinguishable components, and that quantifying those components and searching for the laws that govern their interaction is inadequate. Central to the thesis of this paper is that cartilage as a bulk is as much a continuum as is the response of its components to external stimuli. For this reason, we framed the fundamental question as to what would be the mechano-structural functionality of such a system in the total absence of one of its key constituents: proteoglycans. To answer this, hydrated normal and proteoglycan-depleted samples were tested under confined compression while finite element models were reproduced, for the first time, based on the structural microarchitecture of the cross-sectional profile of the matrices. These micro-porous in silico models served as virtual transducers, providing an internal noninvasive probing mechanism beyond experimental capabilities to render the micromechanics of the matrices and several other properties such as permeability and orientation. The results demonstrated that load transfer was closely related to the microarchitecture of the hyperelastic models that represent solid skeleton stress and fluid response based on the state of the collagen network with and without the swollen proteoglycans. In other words, the stress gradient during deformation was a function of the structural pattern of the network and acted in concert with the position-dependent compositional state of the matrix.
This reveals that the interaction between indistinguishable components in real cartilage is superimposed by its microarchitectural state which directly influences macromechanical behavior.