234 results for tablet compression


Relevance: 10.00%

Abstract:

Media and Information Literacy is the focus of several teaching and research projects at Queensland University of Technology (QUT), with particular emphasis on digital technologies and how they are used for communication, information use and learning in formal contexts such as schools. Research projects are currently under way in several locations where investigators are collecting data on approaches to the use of digital media tools such as cameras and editing systems, tablet computers and video games. This complements QUT’s teacher preparation courses, including preparation to implement UNESCO’s Online Course in Media and Information Literacy and Intercultural Dialogue in 2013. This work takes place in the context of national-level projects in Australia that continue to promote Media and Information Literacy.

Relevance: 10.00%

Abstract:

Desalination processes to remove dissolved salts from seawater or brackish water include common industrial-scale processes such as reverse osmosis, thermal processes (i.e. multi-stage flash and multiple-effect distillation) and mechanical vapour compression. These processes are very energy intensive. The Institute for Future Environments (IFE) has evaluated various alternative processes to accomplish desalination using renewable or sustainable energy sources. A new process – a solar, thermally driven distillation system based on the principles of a solar still – has been examined. This work presents an initial evaluation of the process.

Relevance: 10.00%

Abstract:

Controlled drug delivery is a key topic in modern pharmacotherapy, where controlled drug delivery devices are required to prolong the period of release, maintain a constant release rate, or release the drug with a predetermined release profile. In the pharmaceutical industry, the development of a controlled drug delivery device may be facilitated enormously by mathematical modelling of drug release mechanisms, directly decreasing the number of necessary experiments. Such mathematical modelling is difficult because several mechanisms are involved during the drug release process. The main drug release mechanisms of a controlled release device are based on the device’s physicochemical properties, and include diffusion, swelling and erosion.

In this thesis, four controlled drug delivery models are investigated. These four models selectively involve the solvent penetration into the polymeric device, the swelling of the polymer, the polymer erosion and the drug diffusion out of the device, but all share two common key features. The first is that the solvent penetration into the polymer causes the transition of the polymer from a glassy state into a rubbery state; the interface between the two states is modelled as a moving boundary whose speed is governed by a kinetic law. The second is that drug diffusion only happens in the rubbery region of the polymer, with a nonlinear diffusion coefficient that depends on the concentration of solvent. These models are analysed using both formal asymptotics and numerical computation, where front-fixing methods and the method of lines with finite difference approximations are used to solve the models numerically. This numerical scheme is conservative, accurate and easily applied to moving boundary problems, and is explained thoroughly in Section 3.2. The small time asymptotic analyses in Sections 5.3.1, 6.3.1 and 7.2.1 show that these models exhibit the non-Fickian behaviour referred to as Case II diffusion, together with an initial constant rate of drug release, which is appealing to the pharmaceutical industry because it indicates zero-order release. The numerical results of the models qualitatively confirm the experimental behaviour reported in the literature. The knowledge obtained from investigating these models can help in developing more complex multi-layered drug delivery devices to achieve sophisticated drug release profiles. A multi-layer matrix tablet, which consists of a number of polymer layers designed to provide sustained and constant drug release or bimodal drug release, is also discussed in this research.

The moving boundary problem describing the solvent penetration into the polymer also arises in melting and freezing problems, which have been modelled as the classical one-phase Stefan problem. The classical one-phase Stefan problem has unrealistic singularities at the complete melting time. Hence we investigate the effect of including kinetic undercooling in the melting problem; this is called the one-phase Stefan problem with kinetic undercooling. Interestingly, we discover that the unrealistic singularities of the classical one-phase Stefan problem at the complete melting time are regularised, and the small time asymptotic analysis in Section 3.3 shows that the small time behaviour of the one-phase Stefan problem with kinetic undercooling differs from that of the classical problem.

In the case of melting very small particles, it is known that surface tension effects are important. The effect of including surface tension in the melting problem for nanoparticles (without kinetic undercooling) has been investigated in the past; however, the one-phase Stefan problem with surface tension exhibits finite-time blow-up. We therefore investigate the effect of including both surface tension and kinetic undercooling in the melting problem for nanoparticles and find that the solution continues to exist until complete melting. Including kinetic undercooling and surface tension in the melting problems reveals more insight into the regularisation of unphysical singularities in the classical one-phase Stefan problem. This investigation gives a better understanding of melting a particle, and contributes to the current body of knowledge related to melting and freezing due to heat conduction.
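The moving boundary formulation lends itself to a compact numerical illustration. Below is a minimal Python sketch of the front-fixing (Landau transform) approach for a one-phase Stefan problem with kinetic undercooling, using an explicit method-of-lines discretisation; the parameter values, the explicit time stepping and the melting configuration are illustrative assumptions, not the thesis's conservative scheme of Section 3.2.

```python
import numpy as np

# Landau transform xi = x/s(t) fixes the moving domain [0, s(t)] to [0, 1]:
#   u_t = u_xixi/s^2 + xi*(ds/dt)/s * u_xi   on 0 < xi < 1
#   ds/dt = -u_xi(1, t)/s                     (Stefan condition)
#   u(1, t) = -eps * ds/dt                    (kinetic undercooling)

N = 50
xi = np.linspace(0.0, 1.0, N + 1)
dxi = xi[1] - xi[0]
eps = 0.1                 # kinetic undercooling parameter (assumed)

u = np.zeros(N + 1)
u[0] = 1.0                # fixed-temperature boundary at x = 0
s, t, t_end = 0.2, 0.0, 0.05

while t < t_end:
    dt = 0.25 * (dxi * s) ** 2          # explicit stability limit
    ux_front = (u[-1] - u[-2]) / dxi    # one-sided u_xi at xi = 1
    sdot = -ux_front / s                # Stefan condition
    u[-1] = -eps * sdot                 # undercooled front temperature
    # explicit finite-difference update at interior nodes
    uxx = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dxi**2
    ux = (u[2:] - u[:-2]) / (2.0 * dxi)
    u[1:-1] += dt * (uxx / s**2 + xi[1:-1] * sdot / s * ux)
    s += dt * sdot
    t += dt

print(f"t = {t:.3f}: front at s = {s:.4f}")
```

The kinetic-undercooling condition couples the front temperature to the front speed, which is what removes the speed singularity of the classical problem as the domain vanishes.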

Relevance: 10.00%

Abstract:

We used magnetic resonance microimaging (μMRI) to study the compressive behaviour of synthetic elastin. Compression-induced changes in the elastin sample were quantified using the longitudinal and transverse spin relaxation rates (R1 and R2, respectively). Spatially resolved maps of each spin relaxation rate were obtained, allowing the heterogeneous texture of the sample to be observed with and without compression. Compression increased both the mean R1 and the mean R2, but most of this increase was due to sub-locations that exhibited relatively low R1 and R2 in the uncompressed state. This behaviour can be described by differential compression, where local domains in the hydrogel with relatively low biopolymer content compress more than those with relatively high biopolymer content.

Relevance: 10.00%

Abstract:

Cold-formed steel lipped channels are commonly used as load-bearing studs in light gauge steel frame (LSF) wall construction, with plasterboards on both sides. Under fire conditions, cold-formed thin-walled steel sections heat up quickly, resulting in a rapid reduction in their strength and stiffness. LSF wall panels are usually subjected to fire from one side, which causes thermal bowing, neutral axis shift and magnification effects due to the development of non-uniform temperature distributions across the stud. This induces an additional bending moment in the stud, and hence the studs in LSF wall panels should be designed as beam columns considering both the applied axial compression load and the additional bending moment. Traditionally, the fire resistance rating of these wall panels is based on approximate prescriptive methods, which are very often limited to the standard wall configurations used by industry. A detailed research study was therefore needed to develop fire design rules to predict the failure load, and hence the failure time, of LSF wall panels subject to non-uniform temperature distributions. This paper presents the details of an investigation to develop suitable fire design rules for LSF wall studs under non-uniform elevated temperature distributions. The application of previously developed fire design rules based on the AISI design manual and Eurocode 3 Parts 1.2 and 1.3 to LSF wall studs was investigated in detail, and new simplified fire design rules based on AS/NZS 4600 and Eurocode 3 Part 1.3 are proposed with suitable allowances for the interaction effects of compression and bending actions. The accuracy of the proposed fire design rules was verified using the results from full-scale fire tests and extensive numerical studies.
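To make the beam-column effect concrete, the short Python sketch below estimates the mid-height thermal bowing deflection for a linear temperature gradient across the stud and checks a simplified linear interaction of compression and bending; all numbers and the interaction form are illustrative assumptions rather than the design rules proposed in the paper.

```python
# A hotter fire-side flange bows the stud, and the mid-height bowing
# deflection acts as an eccentricity for the axial load, producing an
# additional bending moment.

alpha = 14e-6                 # thermal expansion of steel, 1/degC
L = 2400.0                    # stud height, mm
d = 90.0                      # section depth, mm
T_hot, T_cold = 500.0, 150.0  # hot and cold flange temperatures, degC

# mid-height thermal bowing for a linear through-depth gradient
delta_tb = alpha * (T_hot - T_cold) * L**2 / (8.0 * d)

N_star = 15e3                 # applied axial compression, N (assumed)
M_star = N_star * delta_tb    # additional bending moment, N.mm

Nc = 40e3                     # compression capacity at elevated temp., N (assumed)
Mb = 1.2e6                    # moment capacity at elevated temp., N.mm (assumed)

utilisation = N_star / Nc + M_star / Mb   # simplified beam-column check
print(f"bowing = {delta_tb:.1f} mm, utilisation = {utilisation:.2f}")
```

Even a modest axial load can push the stud towards failure once the bowing-induced moment is included, which is why a pure compression check is unconservative under one-sided fire exposure.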

Relevance: 10.00%

Abstract:

Light gauge steel frame wall systems are commonly used in industrial and commercial buildings, and there is a need for simple fire design rules to predict their load capacities and fire resistance ratings. During fire events, light gauge steel frame wall studs are subjected to non-uniform temperature distributions that cause thermal bowing, neutral axis shift and magnification effects, resulting in combined axial compression and bending actions on the studs. In this research, a series of full-scale fire tests was first conducted to evaluate the performance of light gauge steel frame wall systems with eight different wall configurations under standard fire conditions. Finite element models of light gauge steel frame walls were then developed, analysed under transient and steady-state conditions, and validated using the full-scale fire tests. Using the results from the fire tests and finite element analyses, a detailed investigation was undertaken into the prediction of the axial compression strength and failure times of light gauge steel frame wall studs in standard fires using the available fire design rules based on Australian, American and European standards. The results from both fire tests and finite element analyses were used to investigate the ability of these fire design rules to include the complex effects of non-uniform temperature distributions, and their accuracy in predicting the axial compression strength of wall studs and the failure times. Suitable modifications were then proposed to the fire design rules. This article presents the details and results of this investigation into the fire design rules of light gauge steel frame walls.

Relevance: 10.00%

Abstract:

Traditionally, the fire resistance rating of LSF wall systems is based on approximate prescriptive methods developed using limited fire tests. A detailed research study into the performance of load-bearing LSF wall systems under standard fire conditions was therefore undertaken to develop improved fire design rules, drawing on the extensive fire performance results of eight different LSF wall systems from a series of full-scale fire tests and numerical studies. The use of previous fire design rules developed for LSF walls subjected to non-uniform elevated temperature distributions, based on the AISI design manual and Eurocode 3 Parts 1.2 and 1.3, was investigated first. New simplified fire design rules based on AS/NZS 4600, the North American Specification and Eurocode 3 Part 1.3 were then proposed with suitable allowances for the interaction effects of compression and bending actions. The importance of considering thermal bowing, magnified thermal bowing and neutral axis shift in the fire design was also investigated. A spreadsheet-based design tool was developed from the new design rules to predict the failure load ratio versus time and temperature curves for varying LSF wall configurations. The accuracy of the proposed design rules was verified using the test and FEA results for different wall configurations, steel grades, thicknesses and load ratios. This paper presents the details and results of this study, including the improved fire design rules for predicting the load capacity of LSF wall studs and the failure times of LSF walls under standard fire conditions.

Relevance: 10.00%

Abstract:

Recent fire research into the behaviour of light gauge steel frame (LSF) wall systems has developed fire design rules based on Australian and European cold-formed steel design standards, AS/NZS 4600 and Eurocode 3 Part 1.3. However, these design rules are complex since the LSF wall studs are subjected to non-uniform elevated temperature distributions when the walls are exposed to fire from one side. Therefore this paper proposes an alternative design method for routine predictions of the fire resistance rating of LSF walls. In this method, suitable equations are recommended first to predict the idealised stud time-temperature profiles of eight different LSF wall configurations subject to standard fire conditions, based on full-scale fire test results. A new set of equations was then proposed to find the critical hot flange (failure) temperature for a given load ratio for the same LSF wall configurations with varying steel grades and thicknesses. These equations were developed based on detailed finite element analyses that predicted the axial compression capacities and failure times of LSF wall studs subject to non-uniform temperature distributions with varying steel grades and thicknesses. This paper proposes a simple design method in which the two sets of equations developed for time-temperature profiles and critical hot flange temperatures are used to find the failure times of LSF walls. The proposed method was verified by comparing its predictions with the results from full-scale fire tests and finite element analyses. This paper presents the details of this study, including the finite element models of LSF wall studs, the results from relevant fire tests and finite element analyses, and the proposed equations.
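The two-step routine can be sketched as follows, with placeholder equations standing in for the paper's fitted time-temperature profiles and critical hot flange temperature expressions (the real equations are configuration-specific and are not reproduced here):

```python
import math

def hot_flange_temp(t):
    """Hypothetical idealised hot-flange temperature (degC) at time t
    (minutes) -- a placeholder for the paper's fitted profiles."""
    return 20.0 + 600.0 * (1.0 - math.exp(-t / 40.0))

def critical_temp(load_ratio):
    """Hypothetical critical hot-flange temperature (degC) for a given
    load ratio -- a placeholder for the paper's fitted equations."""
    return 700.0 - 450.0 * load_ratio

def failure_time(load_ratio, t_max=180.0, dt=0.1):
    """March the idealised profile forward until it reaches the
    critical temperature; returns the failure time in minutes."""
    t_crit = critical_temp(load_ratio)
    t = 0.0
    while t <= t_max:
        if hot_flange_temp(t) >= t_crit:
            return t
        t += dt
    return None  # the wall outlasts the design duration

print(failure_time(0.4))  # ~72 minutes with these placeholder curves
```

The design method is simply the composition of the two fitted equation sets: the load ratio fixes the critical temperature, and the wall's time-temperature profile converts that temperature into a failure time.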

Relevance: 10.00%

Abstract:

Current design rules for determining the member strength of cold-formed steel columns are based on the effective length of the member and a single column capacity curve for both pin-ended and fixed-ended columns. This research reviewed the accuracy of the AS/NZS 4600 design rules in determining the member compression capacities of slender cold-formed steel columns using detailed numerical studies. It showed that the AS/NZS 4600 design rules accurately predict the capacities of pin-ended and fixed-ended columns undergoing flexural buckling. However, for fixed-ended columns undergoing flexural-torsional buckling, the current AS/NZS 4600 design rules do not include the beneficial effect of warping fixity, and were found to be excessively conservative, and hence uneconomical, in predicting the failure loads obtained from tests and finite element analyses of fixed-ended lipped channel columns. Based on this finding, suitable recommendations have been made to modify the current AS/NZS 4600 design rules to more accurately reflect the results obtained from the numerical and experimental studies conducted in this research. This paper presents the details and results of this research on cold-formed steel columns.
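The warping fixity effect enters through the torsional effective length in the classical elastic flexural-torsional buckling formula for singly-symmetric sections used by standards such as AS/NZS 4600. The sketch below evaluates that formula for a hypothetical lipped channel, comparing pinned ends with fixed ends where the fixity is represented by halving the effective lengths; the section properties are invented for illustration.

```python
import math

E, G = 200e3, 80e3          # steel moduli, MPa
L = 2000.0                  # column length, mm
A = 300.0                   # area, mm^2 (assumed lipped channel)
rx, ry = 35.0, 13.0         # radii of gyration, mm (assumed)
x0 = 20.0                   # shear centre to centroid distance, mm (assumed)
J, Iw = 250.0, 2.0e8        # torsion and warping constants, mm^4, mm^6 (assumed)

def foc_ft(le_flex, le_warp):
    """Elastic flexural-torsional buckling stress (MPa) for flexural
    and torsional/warping effective lengths le_flex and le_warp."""
    r01 = math.sqrt(rx**2 + ry**2 + x0**2)   # polar radius of gyration
    fox = math.pi**2 * E / (le_flex / rx)**2
    foz = (G * J + math.pi**2 * E * Iw / le_warp**2) / (A * r01**2)
    beta = 1.0 - (x0 / r01)**2
    return ((fox + foz) - math.sqrt((fox + foz)**2
            - 4.0 * beta * fox * foz)) / (2.0 * beta)

# pinned ends vs fixed ends with warping fixity (effective lengths halved):
print(foc_ft(L, L), foc_ft(0.5 * L, 0.5 * L))
```

Because the warping term scales with 1/le_warp^2, halving the torsional effective length raises the flexural-torsional buckling stress substantially, which is the benefit the paper argues a single capacity curve ignores for fixed-ended columns.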

Relevance: 10.00%

Abstract:

This chapter considers the ways in which contemporary children’s literature depicts reading in changing times, with a particular eye on the cultural definitions of ‘reading’ being offered to young people in the age of the tablet computer. A number of picture books, in codex and app form, speak to changing times for reading by their emphasis on the value of books and reading as technologies of literature and of the self. Attending to valuations of literacy and literature within children’s texts provides insight into anxieties about books in the electronic age.

Relevance: 10.00%

Abstract:

Robust hashing is an emerging field that can be used to hash certain data types in applications unsuitable for traditional cryptographic hashing methods. Traditional hashing functions have been used extensively for data/message integrity, data/message authentication, efficient file identification and password verification. These applications are possible because the hashing process is compressive, allowing for efficient comparisons in the hash domain, but non-invertible, meaning hashes can be used without revealing the original data. These techniques were developed for deterministic (non-changing) inputs such as files and passwords. For such data types a 1-bit or one-character change can be significant, and as a result the hashing process is sensitive to any change in the input.

Unfortunately, there are certain applications where input data are not perfectly deterministic and minor changes cannot be avoided. Digital images and biometric features are two types of data where such changes exist but do not alter the meaning or appearance of the input. For such data types cryptographic hash functions cannot be usefully applied. In light of this, robust hashing has been developed as an alternative to cryptographic hashing and is designed to be robust to minor changes in the input. Although similar in name, robust hashing is fundamentally different from cryptographic hashing: current robust hashing techniques are based not on cryptographic methods but on pattern recognition techniques. Modern robust hashing algorithms consist of feature extraction, followed by a randomization stage that introduces non-invertibility and compression, followed by quantization and binary encoding to produce a binary hash output. To preserve the robustness of the extracted features, most randomization methods are linear, which is detrimental to the security aspects required of hash functions. Furthermore, the quantization and encoding stages used to binarize real-valued features require the learning of appropriate quantization thresholds. How these thresholds are learnt has an important effect on hashing accuracy, and the mere presence of such thresholds is a source of information leakage that can reduce hashing security.

This dissertation outlines a systematic investigation of the quantization and encoding stages of robust hash functions. While the existing literature has focused on the importance of the quantization scheme, this research is the first to emphasise the importance of quantizer training for both hashing accuracy and hashing security. The quantizer training process is presented in a statistical framework which allows a theoretical analysis of the effects of quantizer training on hashing performance. This is experimentally verified using a number of baseline robust image hashing algorithms over a large database of real-world images.

This dissertation also proposes a new randomization method for robust image hashing based on Higher Order Spectra (HOS) and Radon projections. The method is non-linear, which is an essential requirement for non-invertibility, and is also designed to produce features more suited to quantization and encoding. The system can operate without the need for quantizer training, is more easily encoded, and displays improved hashing performance when compared to existing robust image hashing algorithms. The dissertation also shows how the HOS method can be adapted to work with biometric features obtained from 2D and 3D face images.
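As a concrete illustration of the generic pipeline described above (feature extraction, keyed linear randomization, quantization, binary encoding), here is a minimal Python sketch of a robust image hash. It is not the dissertation's HOS/Radon method; the block-mean features, Gaussian projection and median threshold are illustrative choices.

```python
import numpy as np

def robust_hash(img, key=0, n_bits=64):
    """Minimal robust image hash for a 2D grayscale array:
    feature extraction -> keyed randomization -> quantization ->
    binary encoding. Illustrative sketch only."""
    # feature extraction: coarse 8x8 block means survive small changes
    h, w = img.shape
    blocks = img[:h - h % 8, :w - w % 8].reshape(h // 8, 8, w // 8, 8)
    feats = blocks.mean(axis=(1, 3)).ravel()

    # keyed (linear) randomization: compresses the features and ties
    # the hash to a secret key
    rng = np.random.default_rng(key)
    proj = rng.standard_normal((n_bits, feats.size))
    v = proj @ feats

    # quantization threshold (here the projection median); how such
    # thresholds are chosen affects both accuracy and security
    return (v > np.median(v)).astype(np.uint8)

rng = np.random.default_rng(1)
img = rng.random((64, 64))
h1 = robust_hash(img, key=42)
h2 = robust_hash(img + 0.01 * rng.standard_normal(img.shape), key=42)
print((h1 != h2).sum(), "of", h1.size, "bits differ")
```

Hashes of a slightly altered copy of the same image should differ in only a few bits, so comparison is by Hamming distance rather than equality, which is exactly what makes robust hashing unlike its cryptographic namesake.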

Relevance: 10.00%

Abstract:

This article describes the inception and evolution of Australian football in Melbourne in the mid-nineteenth century. It discusses the venues where the game was played, the occupational status of early players, and the inclusion of working-class players and spectators. It also refers to the more successful football clubs of the time, such as Melbourne, Carlton, Geelong, South Melbourne, and Essendon.

Relevance: 10.00%

Abstract:

In this paper we use the algorithm SeqSLAM to address the question of how little visual information, and of what quality, is needed to localize along a familiar route. We conduct a comprehensive investigation of place recognition performance on seven datasets while varying image resolution (primarily 1 to 512 pixel images), pixel bit depth, field of view, motion blur, image compression and matching sequence length. Results confirm that place recognition using single images or short image sequences is poor, but improves to match or exceed current benchmarks as the matching sequence length increases. We then present place recognition results from two experiments where low-quality imagery is directly caused by sensor limitations: in one, place recognition is achieved along an unlit mountain road using noisy, long-exposure blurred images, and in the other, two single-pixel light sensors are used to localize in an indoor environment. We also show failure modes caused by pose variance and sequence aliasing, and discuss ways in which they may be overcome. By showing how place recognition along a route is feasible even with severely degraded image sequences, we hope to provoke a re-examination of how we develop and test future localization and mapping systems.
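The core of the sequence-based matching idea can be sketched compactly: rather than taking the best single-image match, score constant-velocity trajectories of length ds through the image difference matrix. The Python below is a simplified rendering of that search (SeqSLAM additionally applies local contrast normalisation to D, which is omitted here):

```python
import numpy as np

def seq_match(D, ds=10, speeds=np.linspace(0.8, 1.2, 5)):
    """For each query image, score straight-line trajectories of
    length ds through the difference matrix D (rows: reference
    images, cols: query images) over a range of assumed speeds.
    Returns the best reference index and score (lower is better)."""
    n_ref, n_q = D.shape
    span = int(np.ceil(ds * speeds.max()))     # rows a trajectory can cover
    best = np.full(n_q - ds + 1, np.inf)
    match = np.zeros(n_q - ds + 1, dtype=int)
    cols = np.arange(ds)
    for q in range(n_q - ds + 1):
        for r in range(n_ref - span):
            for v in speeds:
                rows = r + np.round(v * cols).astype(int)
                score = D[rows, q + cols].mean()   # mean difference along path
                if score < best[q]:
                    best[q], match[q] = score, r
    return match, best
```

Averaging the difference scores along a trajectory is what lets severely degraded individual images still yield a confident match: a wrong place rarely stays a good match over a whole sequence.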

Relevance: 10.00%

Abstract:

This paper deals with the failure of high-adhesive, low-compressive-strength, thin-layered polymer mortar joints in masonry through contact modelling in a finite element framework. Failure under combined shear, tensile and compressive stresses is considered through a constitutive damaging contact model that incorporates traction–separation as a function of displacement discontinuity. The modelling method is verified using single and multiple contact analyses of thin mortar layered masonry specimens under shear, tensile and compressive stresses and their combinations. Using this verified method, the failure of thin mortar layered masonry under a range of shear-to-tension and shear-to-compression ratios is examined. Finally, the model is applied to thin bed masonry wallettes to study their behaviour under biaxial tension–tension and compression–tension loadings perpendicular and parallel to the bed joints.
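A damaging traction–separation contact law of the general kind used here can be illustrated with a bilinear cohesive relation: linear elastic response up to a damage-initiation opening, then linear softening to zero traction at full separation. The sketch below is a generic single-mode version, not the paper's combined shear-tension-compression model.

```python
def bilinear_traction(delta, delta0=0.01, deltaf=0.10, k0=100.0):
    """Generic bilinear traction-separation law with scalar damage d.
    delta0: opening at damage initiation, deltaf: opening at full
    separation, k0: initial contact stiffness (values arbitrary).
    Returns (traction, damage)."""
    if delta <= delta0:
        return k0 * delta, 0.0            # undamaged elastic branch
    if delta >= deltaf:
        return 0.0, 1.0                   # fully separated, no traction
    # standard bilinear damage evolution between delta0 and deltaf
    d = deltaf * (delta - delta0) / (delta * (deltaf - delta0))
    return (1.0 - d) * k0 * delta, d

# traction peaks at delta0 and softens linearly to zero at deltaf:
for delta in (0.005, 0.01, 0.05, 0.10):
    print(delta, bilinear_traction(delta))
```

In an actual FE implementation the damage variable must be made irreversible by tracking the maximum opening reached, so that unloading follows the degraded stiffness rather than the original one.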

Relevance: 10.00%

Abstract:

Finite element (FE) modelling of bone fracture fixation systems allows computational investigation of the deformation response of the bone to load. Once validated, these models can easily be adapted to explore changes in the design or configuration of a fixator. The deformation of the tissue within the fracture gap determines its healing, and is often summarised as the stiffness of the construct. FE models capable of reproducing this behaviour would provide valuable insight into the healing potential of different fixation systems. Current model validation techniques lack depth in 6D load and deformation measurements, and other aspects of FE model creation, such as the definition of interfaces between components, have also not been explored.

This project investigated the mechanical testing and FE modelling of a bone–plate construct for the determination of stiffness. In-depth 6D measurement and analysis of the generated forces, moments and movements showed large out-of-plane behaviours which had not previously been characterised. Stiffness calculated from the interfragmentary movement was found to be an unsuitable summary parameter because the error propagation is too large. Current FE modelling techniques were applied in compression and torsion, mimicking the experimental setup. Compressive stiffness was well replicated, though torsional stiffness was not, and the out-of-plane behaviours prevalent in the experimental work were not replicated in the model.

The interfaces between the components were investigated experimentally and through modification of the FE model. Incorporating the interface modelling techniques into the full construct models had no effect in compression but did act to reduce torsional stiffness, bringing it closer to that of the experiment. The interface definitions had no effect on out-of-plane behaviours, which were still not replicated. Neither current nor novel FE modelling techniques were able to replicate the out-of-plane behaviours evident in the experimental work. New techniques for modelling loads and boundary conditions need to be developed to mimic the effects of the entire experimental system.
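The error-propagation point can be illustrated with first-order uncertainty analysis of construct stiffness k = F/δ: because interfragmentary movements are small, a modest absolute error in δ becomes a large relative error in k. A minimal sketch with assumed measurement uncertainties:

```python
import math

def stiffness_rel_error(F, dF, delta, ddelta):
    """First-order (root-sum-square) relative error of construct
    stiffness k = F / delta, given absolute uncertainties dF and
    ddelta in the load and interfragmentary movement."""
    return math.sqrt((dF / F) ** 2 + (ddelta / delta) ** 2)

# Assumed values: 500 N +/- 5 N with 0.10 mm +/- 0.02 mm of movement.
# The 20% relative uncertainty in the small displacement dominates,
# so the stiffness carries ~20% error despite an accurate load cell.
print(stiffness_rel_error(500.0, 5.0, 0.10, 0.02))  # ~0.20
```

This is why a summary stiffness derived from dividing measured load by a small, noisy interfragmentary movement is an unreliable basis for comparing fixation systems.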