331 results for Curves, Algebraic.
Abstract:
Goethite and Al-substituted goethite were synthesized and characterized using XRD and XRF. The dehydration kinetics of goethite were investigated by TG and DTG at different heating rates (2, 5, 10, 15 and 20 °C/min), and the effect of Al substitution for Fe on dehydration was studied. The results showed that two types of adsorbed water, with the same Ed values of 3.4 and 6.2 kJ/mol, were present on both goethite and Al-substituted goethite. Three types of hydroxyl units were identified, one on the surface and the other two in the structure of goethite. The substitution of Al for Fe in the goethite structure decreases the desorption rate of hydroxyl, increases the dehydroxylation temperature, broadens the desorption peaks in the DTG curves, and raises the Ed values from 19.4, 20.4 and 26.1 kJ/mol to 21.6, 30 and 33.6 kJ/mol when Al substitution reaches 9.1%.
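The abstract reports activation energies (Ed) extracted from TG/DTG runs at several heating rates but does not name the evaluation method. Purely as an illustration, the sketch below assumes a Kissinger-type analysis, in which a linear fit of ln(β/Tp²) against 1/Tp, using the DTG peak temperature Tp at each heating rate β, gives a slope of −Ea/R. The peak temperatures are placeholders, not values from the paper.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def kissinger_activation_energy(heating_rates, peak_temps_K):
    """Estimate an activation energy (J/mol) from DTG peak temperatures.

    Kissinger method: ln(beta / Tp^2) = -Ea/(R*Tp) + const,
    so a linear fit of ln(beta/Tp^2) against 1/Tp has slope -Ea/R.
    """
    beta = np.asarray(heating_rates, dtype=float)   # K/min (unit choice only shifts the intercept)
    Tp = np.asarray(peak_temps_K, dtype=float)      # K
    slope, _ = np.polyfit(1.0 / Tp, np.log(beta / Tp**2), 1)
    return -slope * R

# Hypothetical DTG peak temperatures for the five heating rates used in the study
heating_rates = [2, 5, 10, 15, 20]                  # °C/min
peak_temps_K = [540.0, 551.0, 560.0, 566.0, 570.0]  # placeholder values, not measured data
Ea = kissinger_activation_energy(heating_rates, peak_temps_K)
print(f"Ea ≈ {Ea / 1000:.1f} kJ/mol")
```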
Abstract:
This paper provides a commentary on the contribution by Dr Chow, who questioned whether the functions of learning are general across all categories of tasks or whether there are task-particular aspects to the functions of learning in relation to task type. Specifically, they queried whether principles and practice for the acquisition of sport skills differ from those for musical, industrial, military and human factors skills. In this commentary we argue that ecological dynamics contains general principles of motor learning that can be instantiated in specific performance contexts to underpin learning design. In this proposal, we highlight the importance of conducting skill acquisition research in sport, rather than relying on empirical outcomes of research from a variety of different performance contexts. Here we discuss how the task constraints of different performance contexts (sport, industry, military, music) provide different specific information sources to which individuals couple their actions when performing and acquiring skills. We conclude by suggesting that this relationship between performance task constraints and learning processes might help explain the traditional emphasis on performance curves and performance outcomes to infer motor learning.
Abstract:
"Seventeen peer-reviewed papers cover the latest research on the ignition and combustion of metals and non-metals, oxygen compatibility of components and systems, analysis of ignition and combustion, failure analysis and safety. It includes aerospace, military, scuba diving, and industrial oxygen applications. Topics cover: • Development of safe oxygen systems • Ignition mechanisms within oxygen systems and how to avoid them • Specific hazards that exist with the oxygen mixture breathed by divers in the scuba industry • Issues related to oxygen system level safety • Issues related to oxygen safety in breathing systems • Detailed investigations and discussions related to the burn curves that have been generated for metals that are burning in a standard test fixture This new publication is a valuable resource for professionals in the air separation industries, oxygen manufacturers, manufacturers of materials intended for oxygen service, and users of oxygen and oxygen-enriched atmospheres, including aerospace, medical, industrial gases, chemical processing, steel and metals refining, as well as to military, commercial or recreational diving."--- publisher website
Abstract:
This paper is concerned with the optimal path planning and initialization interval of one or two UAVs in the presence of a constant wind. The method combines previous literature results on the synchronization of UAVs along convex curves, path planning and sampling in 2D, and extends them to 3D. The method can be applied to observe gas/particle emissions inside a control volume during sampling loops. The flight pattern is composed of two phases: a start-up interval and a sampling interval, the latter represented by a semi-circular path. The methods were tested on four complex model test cases in 2D and 3D, as well as one simulated real-world scenario in 2D and one in 3D.
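The abstract does not give the guidance law, but a core geometric step in any constant-wind sampling loop of this kind is computing the air-relative velocity needed to follow a prescribed ground track. The sketch below is a minimal illustration under that assumption: it samples waypoints on a semi-circular ground path and subtracts a constant wind vector to obtain the commanded air velocity at each point. The radius, ground speed and wind values are hypothetical and are not taken from the paper.

```python
import numpy as np

def semicircle_air_velocities(center, radius, ground_speed, wind, n=20):
    """Commanded air-relative velocities to track a semi-circular ground path
    at constant ground speed in a constant wind (v_air = v_ground - v_wind)."""
    wind = np.asarray(wind, dtype=float)
    angles = np.linspace(0.0, np.pi, n)
    # Waypoints on the semicircle and the unit tangent along it
    waypoints = np.c_[center[0] + radius * np.cos(angles),
                      center[1] + radius * np.sin(angles)]
    tangents = np.c_[-np.sin(angles), np.cos(angles)]
    v_ground = ground_speed * tangents
    v_air = v_ground - wind              # air-relative velocity the UAV must fly
    airspeed = np.linalg.norm(v_air, axis=1)
    return waypoints, v_air, airspeed

# Hypothetical values: 200 m radius loop at 15 m/s ground speed, 5 m/s easterly wind
wp, v_air, airspeed = semicircle_air_velocities((0.0, 0.0), 200.0, 15.0, (5.0, 0.0))
print(f"required airspeed range: {airspeed.min():.1f}-{airspeed.max():.1f} m/s")
```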
Abstract:
Fire safety of light gauge steel frame (LSF) stud walls is important in the design of buildings. LSF walls are increasingly used in the building industry and are usually made of cold-formed, thin-walled steel studs that are fire-protected by two layers of plasterboard on both sides. Many experimental and numerical studies have been undertaken to investigate the fire performance of load bearing LSF walls under standard fire conditions. However, the standard time-temperature curve does not represent the fire load present in typical residential and commercial buildings, which contain a considerable amount of thermoplastic materials, and real building fires are unlikely to follow the standard time-temperature curve. Nevertheless, only limited research has been undertaken to investigate the fire performance of load bearing LSF walls under realistic design fire conditions. Therefore, in this research, finite element thermal models of the traditional LSF wall panels without cavity insulation and the new LSF composite wall panels were developed to simulate their fire performance under recently developed realistic design fire curves. Suitable thermal properties were proposed for plasterboards and insulations based on laboratory tests and a literature review. The developed models were then validated by comparing their thermal performance results with available results from realistic design fire tests, and were later used in parametric studies. This paper presents the details of the developed finite element thermal models of load bearing LSF wall panels under realistic design fire time-temperature curves and the results. It shows that finite element thermal models can be used to predict the fire performance of load bearing LSF walls with varying configurations of insulations and plasterboards under realistic design fires. Failure times of load bearing LSF walls were also predicted based on the results of the finite element thermal analyses.
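The finite element thermal models described in the abstract are far more detailed than anything that fits here, but a toy calculation can indicate what such models compute: the temperature rise on the unexposed side of a board layer driven by a fire time-temperature curve. The sketch below uses the standard ISO 834 curve (the realistic design fire curves developed in the study are not reproduced) and a deliberately crude explicit 1D conduction scheme with assumed constant plasterboard properties; all numbers are illustrative, not the authors' validated models.

```python
import numpy as np

def iso834(t_min):
    """ISO 834 standard fire curve: gas temperature (°C) at time t (minutes)."""
    return 20.0 + 345.0 * np.log10(8.0 * t_min + 1.0)

def plasterboard_unexposed_temp(thickness=0.016, n=9, t_end_min=60.0,
                                k=0.25, rho=800.0, cp=1000.0):
    """Crude explicit 1D conduction through a single board layer.

    Simplifying assumptions (not from the paper): constant properties, the
    exposed face follows the gas temperature directly, and the unexposed
    face is adiabatic.
    """
    alpha = k / (rho * cp)                    # thermal diffusivity, m^2/s
    dx = thickness / (n - 1)
    dt = 0.4 * dx**2 / alpha                  # below the explicit stability limit
    T = np.full(n, 20.0)
    t = 0.0
    while t < t_end_min * 60.0:
        T[0] = iso834(t / 60.0)                                   # exposed face
        T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])
        T[-1] = T[-2]                                             # adiabatic back face
        t += dt
    return T[-1]

print(f"Unexposed face after 60 min ≈ {plasterboard_unexposed_temp():.0f} °C (toy estimate)")
```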
Abstract:
Two different morphologies of nanotextured molybdenum oxide were deposited by thermal evaporation. By measuring their field emission (FE) properties, an enhancement factor was extracted. Subsequently, these films were coated with a thin layer of Pt to form Schottky contacts. The current-voltage (I-V) characteristics showed low-magnitude reverse breakdown voltages, which we attributed to localized electric field enhancement. An enhancement factor was also obtained from the I-V curves. We show that the enhancement factor extracted from the I-V curves is in good agreement with the enhancement factor extracted from the FE measurements.
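The abstract does not specify how the enhancement factor was extracted from the field emission data. A common approach, assumed here purely for illustration, is a Fowler-Nordheim plot, where the slope of ln(I/V²) against 1/V equals −B·φ^(3/2)·d/β. The work function, electrode gap and I-V data in the sketch are placeholders, not measurements from the paper.

```python
import numpy as np

B_FN = 6.83e9  # Fowler-Nordheim constant, V * eV^(-3/2) * m^(-1)

def enhancement_factor(voltages, currents, work_function_eV, gap_m):
    """Extract a field enhancement factor beta from a Fowler-Nordheim plot.

    ln(I/V^2) vs 1/V is linear with slope = -B_FN * phi^(3/2) * d / beta,
    so beta = -B_FN * phi^(3/2) * d / slope.
    """
    V = np.asarray(voltages, dtype=float)
    I = np.asarray(currents, dtype=float)
    slope, _ = np.polyfit(1.0 / V, np.log(I / V**2), 1)
    return -B_FN * work_function_eV**1.5 * gap_m / slope

# Placeholder emission data, work function and gap (illustrative only)
V = np.array([800.0, 900.0, 1000.0, 1100.0, 1200.0])   # V
I = np.array([2e-9, 1.5e-8, 7e-8, 2.5e-7, 7e-7])       # A
print(f"beta ≈ {enhancement_factor(V, I, 5.0, 100e-6):.0f}")
```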
Abstract:
A multi-faceted study is conducted with the objective of estimating the potential fiscal savings in annoyance- and sleep disturbance-related health costs from providing improved building acoustic design standards. The study uses balcony acoustic treatments in response to road traffic noise as an example. The study area is the State of Queensland in Australia, where regional road traffic noise mapping data are used in conjunction with standard dose-response curves to estimate population exposure levels. The background to, and importance of, the selected road traffic noise indicators are discussed. To achieve the objective, correlations between the mapping indicator (LA10 (18 hour)) and the dose-response curve indicators (Lden and Lnight) are established via analysis of a large database of road traffic noise measurement data. The existing noise exposure of the study area is used to estimate the fiscal reductions in health-related costs through simple estimates of cost per person per year per degree of annoyance or sleep disturbance. The results demonstrate that balcony acoustic treatments may provide a significant benefit in reducing the health-related costs of road traffic noise in a community.
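The abstract describes chaining noise-mapping exposure data through dose-response curves to a cost per person per year. The sketch below illustrates that chain under assumptions of my own: the Miedema and Oudshoorn (2001) road-traffic polynomial for percent highly annoyed and a placeholder unit cost, applied to a made-up exposure distribution rather than the Queensland data.

```python
def percent_highly_annoyed(lden):
    """Percent highly annoyed by road traffic noise as a function of Lden (dB).

    Polynomial attributed to Miedema & Oudshoorn (2001); used here as an
    assumed dose-response curve, valid roughly for Lden 45-75 dB.
    """
    x = lden - 42.0
    return max(0.0, 9.868e-4 * x**3 - 1.436e-2 * x**2 + 0.5118 * x)

def annual_annoyance_cost(exposure_bands, cost_per_ha_person_year):
    """Sum population x %HA x unit cost over exposure bands of (Lden, persons)."""
    return sum(persons * percent_highly_annoyed(lden) / 100.0 * cost_per_ha_person_year
               for lden, persons in exposure_bands)

# Hypothetical exposure distribution (band midpoint Lden in dB, exposed persons)
exposure = [(57.5, 120_000), (62.5, 80_000), (67.5, 40_000), (72.5, 10_000)]
print(f"indicative annoyance cost: ${annual_annoyance_cost(exposure, 50.0):,.0f} per year")
```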
Abstract:
Anisotropic damage distribution and evolution have a profound effect on borehole stress concentrations. Damage evolution is an irreversible process that is not adequately described within classical equilibrium thermodynamics. Therefore, we propose a constitutive model, based on non-equilibrium thermodynamics, that accounts for anisotropic damage distribution, anisotropic damage threshold and anisotropic damage evolution. We implemented this constitutive model numerically, using the finite element method, to calculate stress–strain curves and borehole stresses. The resulting stress–strain curves are distinctively different from linear elastic-brittle and linear elastic-ideal plastic constitutive models and realistically model experimental responses of brittle rocks. We show that the onset of damage evolution leads to an inhomogeneous redistribution of material properties and stresses along the borehole wall. The classical linear elastic-brittle approach to borehole stability analysis systematically overestimates the stress concentrations on the borehole wall, because dissipative strain-softening is underestimated. The proposed damage mechanics approach explicitly models dissipative behaviour and leads to non-conservative mud window estimations. Furthermore, anisotropic rocks with preferential planes of failure, like shales, can be addressed with our model.
Abstract:
The increased popularity of mopeds and motor scooters in Australia and elsewhere in the last decade has contributed substantially to the greater use of powered two-wheelers (PTWs) as a whole. As the exposure of mopeds and scooters has increased, so too has the number of reported crashes involving those PTW types, but there is currently little research comparing the safety of mopeds and, particularly, larger scooters with motorcycles. This study compared the crash risk and crash severity of motorcycles, mopeds and larger scooters in Queensland, Australia. Comprehensive data cleansing was undertaken to separate motorcycles, mopeds and larger scooters in police-reported crash data covering the five years to 30 June 2008. The crash rates of motorcycles (including larger scooters) and mopeds per registered vehicle were similar over this period, although the moped crash rate showed a stronger downward trend. However, crash rates per distance travelled were nearly four times higher for mopeds than for motorcycles (including larger scooters). More comprehensive distance travelled data are needed to confirm these findings. The overall severity of moped and scooter crashes was significantly lower than that of motorcycle crashes, but an ordered probit regression model showed that crash severity outcomes were related to differences in crash characteristics and circumstances rather than to differences between PTW types per se. Greater motorcycle crash severity was associated with higher (>80 km/h) speed zones, horizontal curves, weekend, single vehicle and nighttime crashes. Moped crashes were more severe at night and in speed zones of 90 km/h or more. Larger scooter crashes were more severe in 70 km/h zones (than in 60 km/h zones) but not in higher speed zones, and less severe on weekends than on weekdays. The findings can be used to inform potential crash and injury countermeasures tailored to users of different PTW types.
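The severity analysis in the abstract uses an ordered probit regression. A minimal sketch of fitting such a model is given below, using the OrderedModel class from statsmodels on made-up crash records; the variable names, severity coding and data are placeholders, not the Queensland dataset or the authors' model specification.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Made-up crash records standing in for the police-reported data
rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "high_speed_zone": rng.integers(0, 2, n),   # speed limit > 80 km/h
    "horizontal_curve": rng.integers(0, 2, n),
    "night": rng.integers(0, 2, n),
    "single_vehicle": rng.integers(0, 2, n),
})
latent = (0.6 * X.high_speed_zone + 0.4 * X.horizontal_curve
          + 0.3 * X.night + 0.5 * X.single_vehicle + rng.normal(size=n))
severity = pd.cut(latent, [-np.inf, 0.8, 1.8, np.inf],
                  labels=["minor", "hospitalised", "fatal"])   # ordered categorical

# Ordered probit: latent severity propensity with estimated category thresholds
model = OrderedModel(severity, X, distr="probit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```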
Abstract:
Fire safety design of building structures has received greater attention in recent times due to continuing losses of property and lives in fires. However, the structural behaviour of thin-walled cold-formed steel columns under fire conditions is not well understood despite the increasing use of light gauge steels in building construction. Cold-formed steel columns are often subject to local buckling effects. Therefore a series of laboratory tests of lipped and unlipped channel columns made of varying steel thicknesses and grades was undertaken at uniform elevated temperatures up to 700°C under steady state conditions. Finite element models of the tested columns were also developed, and their elastic buckling and nonlinear analysis results were compared with the test results at elevated temperatures. The effects of the degradation of the mechanical properties of steel with temperature were included in the finite element analyses. The use of accurately measured yield stress, elastic modulus and stress-strain curves at elevated temperatures gave good agreement between the ultimate loads and load-deflection curves from tests and finite element analyses. The commonly used effective width design rules and the direct strength method at ambient temperature were then used to predict the ultimate loads at elevated temperatures using the reduced mechanical properties. By comparing these predicted ultimate loads with those from tests and finite element analyses, the accuracy of this design approach was evaluated.
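The abstract refers to applying the ambient-temperature direct strength method with reduced mechanical properties at elevated temperature. As a hedged illustration, the sketch below implements the standard DSM column equations for global and local buckling, with the yield stress and elastic modulus scaled by user-supplied retention factors. The section capacities and retention factors shown are placeholders, and the simple property reduction is only the generic approach the abstract describes, not the authors' evaluated design rules.

```python
import math

def dsm_column_strength(Py, Pcre, Pcrl, k_fy=1.0, k_E=1.0):
    """Direct strength method (columns): global buckling, then local-global interaction.

    Py    squash load at ambient temperature (A * fy)
    Pcre  elastic global (flexural/flexural-torsional) buckling load at ambient
    Pcrl  elastic local buckling load at ambient
    k_fy, k_E  retention factors for yield stress and elastic modulus at the
               elevated temperature of interest (placeholders here).
    """
    Py, Pcre, Pcrl = k_fy * Py, k_E * Pcre, k_E * Pcrl   # reduced properties

    lam_c = math.sqrt(Py / Pcre)
    if lam_c <= 1.5:
        Pne = 0.658 ** (lam_c ** 2) * Py
    else:
        Pne = 0.877 / lam_c ** 2 * Py

    lam_l = math.sqrt(Pne / Pcrl)
    if lam_l <= 0.776:
        Pnl = Pne
    else:
        ratio = Pcrl / Pne
        Pnl = (1.0 - 0.15 * ratio ** 0.4) * ratio ** 0.4 * Pne
    return Pnl

# Placeholder section: Py = 180 kN, Pcre = 150 kN, Pcrl = 120 kN; assumed 500 °C factors
print(f"ambient:  {dsm_column_strength(180, 150, 120):.1f} kN")
print(f"elevated: {dsm_column_strength(180, 150, 120, k_fy=0.6, k_E=0.5):.1f} kN")
```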
Abstract:
Traditionally, the fire resistance rating of LSF wall systems has been based on approximate prescriptive methods developed using limited fire tests. Therefore a detailed research study into the performance of load bearing LSF wall systems under standard fire conditions was undertaken to develop improved fire design rules. It used the extensive fire performance results of eight different LSF wall systems from a series of full scale fire tests and numerical studies for this purpose. The use of previous fire design rules developed for LSF walls subjected to non-uniform elevated temperature distributions, based on the AISI design manual and Eurocode 3 Parts 1.2 and 1.3, was investigated first. New simplified fire design rules based on AS/NZS 4600, the North American Specification and Eurocode 3 Part 1.3 were then proposed in this study, with suitable allowances for the interaction effects of compression and bending actions. The importance of considering thermal bowing, magnified thermal bowing and neutral axis shift in the fire design was also investigated. A spreadsheet-based design tool was developed based on the new design rules to predict the failure load ratio versus time and temperature curves for varying LSF wall configurations. The accuracy of the proposed design rules was verified using the test and FEA results for different wall configurations, steel grades, thicknesses and load ratios. This paper presents the details and results of this study, including the improved fire design rules for predicting the load capacity of LSF wall studs and the failure times of LSF walls under standard fire conditions.
Abstract:
LiFePO4 is a commercially available battery material with good theoretical discharge capacity, excellent cycle life and increased safety compared with competing Li-ion chemistries. It has been the focus of considerable experimental and theoretical scrutiny in the past decade, resulting in LiFePO4 cathodes that perform well at high discharge rates. This scrutiny has raised several questions about the behaviour of LiFePO4 material during charge and discharge. In contrast to many other battery chemistries that intercalate homogeneously, LiFePO4 can phase-separate into lithium-rich and lithium-poor phases, with intercalation proceeding by the advance of an interface between these two phases. The main objective of this thesis is to construct mathematical models of LiFePO4 cathodes that can be validated against experimental discharge curves, in an attempt to understand some of the multi-scale dynamics of LiFePO4 cathodes that are difficult to determine experimentally. The first section of this thesis constructs a three-scale mathematical model of LiFePO4 cathodes that uses a simple Stefan problem (which has been used previously in the literature) to describe the assumed phase change. LiFePO4 crystals have been observed agglomerating in cathodes to form porous collections of crystals, and this morphology motivates the use of three size scales in the model. The multi-scale model validates well against experimental data, and the validated model is then used to examine the role of manufacturing parameters (including the agglomerate radius) on battery performance. The remainder of the thesis is concerned with investigating phase-field models as a replacement for the aforementioned Stefan problem. Phase-field models have recently been applied to LiFePO4 and are a far more accurate representation of experimentally observed crystal-scale behaviour. They are based around the Cahn-Hilliard-reaction (CHR) IBVP, a fourth-order PDE with electrochemical (flux) boundary conditions that is very stiff and possesses multiple time and space scales. Numerical solutions to the CHR IBVP can be difficult to compute, and hence a least-squares based Finite Volume Method (FVM) is developed for discretising both the full CHR IBVP and the more traditional Cahn-Hilliard IBVP. Phase-field models are subject to two main physicality constraints, and the numerical scheme presented performs well under these constraints. This least-squares based FVM is then used to simulate the discharge of individual crystals of LiFePO4 in two dimensions. The discharge is subject to isotropic Li+ diffusion, based on experimental evidence suggesting that the normally orthotropic transport of Li+ in LiFePO4 may become more isotropic in the presence of lattice defects. Numerical investigation shows that two-dimensional Li+ transport results in crystals that phase-separate, even at very high discharge rates. This is very different from results in the literature, where phase separation in LiFePO4 crystals is suppressed during discharge with orthotropic Li+ transport. Finally, the three-scale cathodic model used at the beginning of the thesis is modified to simulate modern, high-rate LiFePO4 cathodes. High-rate cathodes typically do not contain (large) agglomerates, and therefore a two-scale model is developed. The Stefan problem used previously is also replaced with the phase-field models examined in earlier chapters. The results from this model are then compared with experimental data and fit poorly, though a significant parameter regime could not be investigated numerically. Many-particle effects, however, are evident in the simulated discharges, matching the conclusions of recent literature. These effects result in crystals that are subject to local currents very different from the discharge rate applied to the cathode, which affects the phase-separating behaviour of the crystals and raises questions about the validity of using cathode-scale experimental measurements to determine crystal-scale behaviour.
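The shrinking-core picture behind the simple Stefan-problem description can be illustrated with a back-of-the-envelope calculation: under a constant applied current and ignoring solid-state diffusion limitations, a mass balance gives the radius of the phase boundary inside a spherical crystal as a function of time. The sketch below is only that caricature, with placeholder particle size, lithium site density and current; it is not the thesis's three-scale model or its phase-field replacement.

```python
import numpy as np

F = 96485.0          # Faraday constant, C/mol

def shrinking_core_radius(t, R, I_particle, c_max):
    """Phase-boundary radius (m) at time t (s) for a spherical crystal discharged
    at constant particle current I_particle (A), ignoring diffusion limitations.

    Mass balance: I*t / F = c_max * (4/3)*pi*(R^3 - r^3).
    """
    r_cubed = R**3 - 3.0 * I_particle * t / (4.0 * np.pi * F * c_max)
    return np.cbrt(np.clip(r_cubed, 0.0, None))

# Placeholder values: 100 nm crystal, c_max ~ 2.3e4 mol/m^3, one-hour (1C) discharge
R = 100e-9
c_max = 2.3e4
Q_total = F * c_max * 4.0 / 3.0 * np.pi * R**3     # total charge the crystal can hold
I_1C = Q_total / 3600.0                            # current for a one-hour discharge

for frac in (0.25, 0.5, 0.75, 1.0):
    r = shrinking_core_radius(frac * 3600.0, R, I_1C, c_max)
    print(f"{frac:4.0%} discharged: core radius ≈ {r * 1e9:5.1f} nm")
```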
Abstract:
Authenticated Encryption (AE) is the cryptographic process of providing simultaneous confidentiality and integrity protection to messages. This approach is more efficient than a two-step process of providing confidentiality for a message by encrypting it and, in a separate pass, providing integrity protection by generating a Message Authentication Code (MAC). AE using symmetric ciphers can be provided either by stream ciphers with built-in authentication mechanisms or by block ciphers using appropriate modes of operation. However, stream ciphers have the potential for higher performance and a smaller footprint in hardware and/or software than block ciphers. This property makes stream ciphers suitable for resource constrained environments, where storage and computational power are limited. There have been several recent stream cipher proposals that claim to provide AE. These ciphers can be analysed using existing techniques that consider confidentiality or integrity separately; however, there is currently no framework for the analysis of AE stream ciphers that analyses these two properties simultaneously. This thesis introduces a novel framework for the analysis of AE using stream cipher algorithms. It analyses the mechanisms for providing confidentiality and for providing integrity in AE algorithms using stream ciphers, with a greater emphasis on the analysis of the integrity mechanisms, as there is little on these in the public literature in the context of authenticated encryption. The thesis has four main contributions, as follows. The first contribution is the design of a framework that can be used to classify AE stream ciphers based on three characteristics. The first classification applies Bellare and Namprempre's work on the order in which the encryption and authentication processes take place. The second classification is based on the method used for accumulating the input message (either directly or indirectly) into the internal states of the cipher to generate a MAC. The third classification is based on whether the sequence that is used to provide encryption and authentication is generated using a single key and initial vector, or two keys and two initial vectors. The second contribution is the application of an existing algebraic method to analyse the confidentiality algorithms of two AE stream ciphers, namely SSS and ZUC. The algebraic method is based on considering the nonlinear filter (NLF) of these ciphers as a combiner with memory. This method enables us to construct equations for the NLF that relate the inputs, outputs and memory of the combiner to the output keystream. We show that both of these ciphers are secure against this type of algebraic attack. We conclude that using a key-dependent S-box in the NLF twice, and using two different S-boxes in the NLF of ZUC, prevents this type of algebraic attack. The third contribution is a new general matrix-based model for MAC generation in which the input message is injected directly into the internal state. This model describes the accumulation process when the input message is injected directly into the internal state of a nonlinear filter generator. We show that three recently proposed AE stream ciphers can be considered as instances of this model, namely SSS, NLSv2 and SOBER-128. Our model is more general than previous investigations into direct injection. Possible forgery attacks against this model are investigated. It is shown that using a nonlinear filter in the accumulation process of the input message, when either the input message or the initial state of the register is unknown, prevents forgery attacks based on collisions. The last contribution is a new general matrix-based model for MAC generation in which the input message is injected indirectly into the internal state. This model uses the input message as a controller to accumulate a keystream sequence into an accumulation register. We show that three current AE stream ciphers can be considered as instances of this model, namely ZUC, Grain-128a and Sfinks. We establish the conditions under which the model is susceptible to forgery and side-channel attacks.
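As a toy illustration of the direct-injection idea (and only that: the actual ciphers analysed in the thesis are not reproduced here), the sketch below accumulates message bits directly into the state of a small LFSR, modelling the state update as a matrix over GF(2) plus a message-dependent injection, and takes part of the final state as the tag. A purely linear toy like this is trivially forgeable; it only illustrates the accumulation structure that the matrix-based model formalises.

```python
import numpy as np

def lfsr_companion_matrix(taps, n):
    """Companion matrix over GF(2) for an n-bit LFSR with the given feedback taps."""
    A = np.zeros((n, n), dtype=np.uint8)
    A[0, list(taps)] = 1                          # feedback bit = XOR of tapped state bits
    A[1:, :-1] = np.eye(n - 1, dtype=np.uint8)    # shift the remaining bits down
    return A

def direct_injection_mac(message_bits, key_state, taps=(0, 2), tag_bits=8):
    """Toy MAC: each message bit is XORed (injected) directly into the state,
    then the state is updated by the LFSR matrix; the tag is part of the final state."""
    n = len(key_state)
    A = lfsr_companion_matrix(taps, n)
    s = np.array(key_state, dtype=np.uint8)
    e = np.zeros(n, dtype=np.uint8)
    e[0] = 1                                      # injection position (arbitrary choice)
    for m in message_bits:
        s = s ^ (m & 1) * e                       # direct injection of the message bit
        s = A @ s % 2                             # linear state update over GF(2)
    return s[:tag_bits]

key_state = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]   # made-up secret state
msg = [1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
print("tag bits:", direct_injection_mac(msg, key_state))
```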
Abstract:
Two varieties of grapes, white grape and red grape, grown in the Campania region of Italy were selected for a study of their drying characteristics. Comparisons were made between treated and untreated grapes under a constant drying temperature of 50 °C in a conventional drying system. This temperature was selected to represent farm drying conditions. Grapes were purchased from a local market from the same supplier to maintain uniform grape size and properties. An abrasive physical treatment was used as the pretreatment. The drying curves were constructed and the drying kinetics were calculated using several commonly available models. It was found that treated samples showed better drying characteristics than untreated samples. The objective of this study is to obtain drying kinetics that can be used to optimize grape drying operations.
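The abstract mentions fitting the drying curves with several commonly available thin-layer models. As one hedged example, the sketch below fits the Page model, MR = exp(-k t^n), to a made-up moisture-ratio series with scipy.optimize.curve_fit; the data points are placeholders, not the measured grape drying curves.

```python
import numpy as np
from scipy.optimize import curve_fit

def page_model(t, k, n):
    """Page thin-layer drying model: moisture ratio MR = exp(-k * t**n)."""
    return np.exp(-k * t**n)

# Made-up drying data: time (h) and moisture ratio MR = (M - Me)/(M0 - Me)
t = np.array([0, 2, 4, 8, 12, 18, 24, 36, 48], dtype=float)
mr = np.array([1.00, 0.88, 0.77, 0.60, 0.47, 0.34, 0.25, 0.14, 0.08])

(k, n), _ = curve_fit(page_model, t, mr, p0=(0.05, 1.0), bounds=(0, np.inf))
rmse = np.sqrt(np.mean((page_model(t, k, n) - mr) ** 2))
print(f"Page model fit: k = {k:.4f} h^-n, n = {n:.3f}, RMSE = {rmse:.4f}")
```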
Abstract:
The objective of this PhD research program is to investigate numerical methods for simulating variably-saturated flow and sea water intrusion in coastal aquifers in a high-performance computing environment. The work is divided into three overlapping tasks: to develop an accurate and stable finite volume discretisation and numerical solution strategy for the variably-saturated flow and salt transport equations; to implement the chosen approach in a high performance computing environment that may have multiple GPUs or CPU cores; and to verify and test the implementation. The geological description of aquifers is often complex, with porous materials possessing highly variable properties that are best described using unstructured meshes. The finite volume method is a popular method for the solution of the conservation laws that describe sea water intrusion, and is well-suited to unstructured meshes. In this work we apply a control volume-finite element (CV-FE) method to an extension of a recently proposed formulation (Kees and Miller, 2002) for variably saturated groundwater flow. The CV-FE method evaluates fluxes at points where material properties and gradients in pressure and concentration are consistently defined, making it both suitable for heterogeneous media and mass conservative. Using the method of lines, the CV-FE discretisation gives a set of differential algebraic equations (DAEs) amenable to solution using higher-order implicit solvers. Heterogeneous computer systems, which use a combination of computational hardware such as CPUs and GPUs, are attractive for scientific computing due to the potential advantages offered by GPUs for accelerating data-parallel operations. We present a C++ library that implements data-parallel methods on both CPUs and GPUs. The finite volume discretisation is expressed in terms of these data-parallel operations, which gives an efficient implementation of the nonlinear residual function. This makes the implicit solution of the DAE system possible on the GPU, because the inexact Newton-Krylov method used by the implicit time stepping scheme can approximate the action of a matrix on a vector using residual evaluations. We also propose preconditioning strategies that are amenable to GPU implementation, so that all computationally intensive aspects of the implicit time stepping scheme are implemented on the GPU. Results are presented that demonstrate the efficiency and accuracy of the proposed numerical methods and formulation. The formulation offers excellent conservation of mass, and higher-order temporal integration increases both the numerical efficiency and the accuracy of the solutions. Flux limiting produces accurate, oscillation-free solutions on coarse meshes, where much finer meshes are required to obtain solutions of equivalent accuracy using upstream weighting. The computational efficiency of the software is investigated using CPUs and GPUs on a high-performance workstation. The GPU version offers considerable speedup over the CPU version, with one GPU giving a speedup factor of 3 over the eight-core CPU implementation.
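The key implementation point in the abstract, that the implicit time stepper only needs the action of the Jacobian on a vector, which can be approximated from residual evaluations, can be shown in a few lines. The sketch below is a generic finite-difference Jacobian-vector product of the kind used inside Jacobian-free Newton-Krylov solvers; the residual and the perturbation heuristic are illustrative stand-ins, not code from the C++ library described in the thesis.

```python
import numpy as np

def jacobian_vector_product(residual, u, v, eps=None):
    """Approximate J(u) @ v using only residual evaluations:
    J(u) v ~= (F(u + eps*v) - F(u)) / eps  (first-order finite difference).
    This is what lets Newton-Krylov methods avoid forming the Jacobian."""
    if eps is None:
        # A common heuristic scaling of the perturbation size (assumed here)
        eps = np.sqrt(np.finfo(float).eps) * (1.0 + np.linalg.norm(u)) / max(np.linalg.norm(v), 1e-30)
    return (residual(u + eps * v) - residual(u)) / eps

# Tiny nonlinear residual as a stand-in for the discretised DAE residual
def residual(u):
    return np.array([u[0]**2 + u[1] - 3.0,
                     u[0] + np.exp(u[1]) - 2.0])

u = np.array([1.0, 0.5])
v = np.array([1.0, 0.0])
print("matrix-free J@v:", jacobian_vector_product(residual, u, v))
print("exact       J@v:", np.array([2*u[0]*v[0] + v[1], v[0] + np.exp(u[1])*v[1]]))
```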