943 results for temperature-programmed techniques
Abstract:
Availability has become a primary goal of information security and is as significant as other goals, in particular confidentiality and integrity. Maintaining the availability of essential services on the public Internet is an increasingly difficult task in the presence of sophisticated attackers. Attackers may abuse the limited computational resources of a service provider, and thus managing computational costs is a key strategy for achieving the goal of availability. In this thesis we focus on cryptographic approaches for managing computational costs, in particular computational effort. We focus on two cryptographic techniques: computational puzzles in cryptographic protocols and secure outsourcing of cryptographic computations. This thesis contributes to the area of cryptographic protocols in the following ways. First, we propose the most efficient puzzle scheme based on modular exponentiations which, unlike previous schemes of the same type, involves only a few modular multiplications for solution verification; our scheme is provably secure. We then introduce a new efficient gradual authentication protocol by integrating a puzzle into a specific signature scheme. Our software implementation results for the new authentication protocol show that our approach is more efficient and effective than the traditional RSA signature-based one and improves the DoS resilience of the Secure Sockets Layer (SSL) protocol, the most widely used security protocol on the Internet. Our next contributions relate to capturing a specific property that enables the secure outsourcing of partial decryption. We formally define the property of (non-trivial) public verifiability for general encryption schemes, key encapsulation mechanisms (KEMs), and hybrid encryption schemes, encompassing public-key, identity-based, and tag-based encryption flavours. We show that some generic transformations and concrete constructions enjoy this property, and then present a new public-key encryption (PKE) scheme having this property with a proof of security under standard assumptions. Finally, we combine puzzles with PKE schemes to enable delayed decryption in applications such as e-auctions and e-voting. For this we first introduce the notion of effort-release PKE (ER-PKE), encompassing the well-known timed-release encryption and encapsulated key escrow techniques. We then present a security model for ER-PKE and a generic construction of ER-PKE complying with our security notion.
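To illustrate the general client-puzzle idea referred to above, the sketch below implements a simple hash-preimage puzzle in which solving costs roughly 2^d hash evaluations on average while verification costs a single hash. This is a generic, hedged illustration only; the thesis's scheme is based on modular exponentiations and is not reproduced here.

```python
# A minimal hash-preimage client puzzle: the solver performs ~2^d hash
# evaluations on average, while verification costs one hash. Generic
# illustration only, not the thesis's modular-exponentiation scheme.
import hashlib
import os

def make_puzzle() -> bytes:
    """Server side: issue a fresh random challenge."""
    return os.urandom(16)

def leading_zero_bits(digest: bytes) -> int:
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits

def solve(challenge: bytes, difficulty_bits: int) -> int:
    """Client side: brute-force a solution (expensive)."""
    x = 0
    while True:
        digest = hashlib.sha256(challenge + x.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= difficulty_bits:
            return x
        x += 1

def verify(challenge: bytes, x: int, difficulty_bits: int) -> bool:
    """Server side: a single hash evaluation (cheap)."""
    digest = hashlib.sha256(challenge + x.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= difficulty_bits

challenge = make_puzzle()
solution = solve(challenge, 16)
assert verify(challenge, solution, 16)
```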
Abstract:
In this paper we report a new neutron Compton scattering (NCS) measurement of the ground-state single-atom kinetic energy of polycrystalline beryllium at momentum transfers in the range 27–104 Å⁻¹ and temperatures in the range 110–1150 K. The measurements have been made with the electron Volt spectrometer (eVS) at the ISIS facility, and the measured kinetic energies are shown to be ≈10% higher than calculations made in the harmonic approximation.
Abstract:
The presence of insect pests in grain storages throughout the supply chain is a significant problem for farmers, grain handlers, and distributors worldwide. Insect monitoring and sampling programmes are used in the stored-grains industry for the detection and estimation of pest populations. At the low pest densities dictated by economic and commercial requirements, the accuracy of both detection and abundance estimates can be influenced by variations in the spatial structure of pest populations over short distances. Geostatistical analysis of Rhyzopertha dominica populations in two and three dimensions showed that insect numbers were positively correlated over short (0.5 cm) distances, and negatively correlated over longer (>10 cm) distances. At 35 °C, insects were located significantly further from the grain surface than at 25 and 30 °C. Dispersion metrics showed statistically significant aggregation in all cases. The observed heterogeneous spatial distribution of R. dominica may also be influenced by factors such as the site of initial infestation and disturbance during handling. To account for these additional factors, I significantly extended a simulation model that incorporates both pest growth and movement through a typical stored-grain supply chain. By incorporating the effects of abundance, initial infestation site, grain handling, and treatment on pest spatial distribution, I developed a supply chain model incorporating estimates of pest spatial distribution. This was used to examine several scenarios representative of grain movement through a supply chain, and to determine the influence of infestation location and grain disturbance on the sampling intensity required to detect pest infestations at various infestation rates. This study has investigated the effects of temperature, infestation point, and grain handling on the spatial distribution and detection of R. dominica. The proportion of grain infested was found to depend on abundance, initial pest location, and grain handling. Simulation modelling indicated that accounting for these factors when developing sampling strategies for stored grain has the potential to significantly reduce sampling costs while simultaneously improving detection rates, resulting in reduced storage and pest management costs and improved grain quality.
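As a rough illustration of the kind of geostatistical analysis described above, the sketch below bins pairwise products of standardised counts by separation distance to form an empirical correlogram; positive bin values indicate positive spatial correlation at that distance range. It is a generic sketch run on simulated data, not the study's analysis or its actual dimensions.

```python
# A toy distance-binned correlogram for counts at known 3-D positions.
# Generic sketch with simulated data, not the study's geostatistical workflow.
import numpy as np

def correlogram(coords: np.ndarray, counts: np.ndarray, bin_edges: np.ndarray):
    """Mean pairwise product of standardised counts per distance bin."""
    z = (counts - counts.mean()) / counts.std()
    n = len(counts)
    sums = np.zeros(len(bin_edges) - 1)
    pairs = np.zeros(len(bin_edges) - 1)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(coords[i] - coords[j])
            k = np.searchsorted(bin_edges, d) - 1
            if 0 <= k < len(sums):
                sums[k] += z[i] * z[j]
                pairs[k] += 1
    return np.divide(sums, pairs, out=np.zeros_like(sums), where=pairs > 0)

# Example: random sample positions (cm) and Poisson insect counts.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 20, size=(50, 3))
counts = rng.poisson(3, size=50).astype(float)
print(correlogram(coords, counts, np.array([0.0, 0.5, 2.0, 10.0, 30.0])))
```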
Abstract:
The assessment of skin temperature (Tsk) in athletic therapy and sports medicine research is an extremely important physiological outcome measure. Various methods of recording Tsk, including thermistors, thermocouples and thermochrons, are currently used for research purposes. These techniques are constrained by their wires limiting the freedom of the subject, slow response times, and/or sensors falling off. Furthermore, as these products are typically attached directly to the skin and cover the measurement site, their validity may be questionable. This manuscript addresses the use and potential benefits of thermal imaging (TI) in sports medicine research. Non-contact infrared TI offers a quick, non-invasive, portable and athlete-friendly method of assessing Tsk. TI is a useful Tsk diagnostic tool that has the potential to become an integral part of sports medicine research in the future. Furthermore, as the technique is non-contact, it has several advantages over existing methods of recording skin temperature.
Abstract:
The feral pig, Sus scrofa, is a widespread and abundant invasive species in Australia. Feral pigs pose a significant threat to the environment, the agricultural industry, and human health, and in far north Queensland they endanger the World Heritage values of the Wet Tropics. Historical records document the first introduction of domestic pigs into Australia via European settlers in 1788 and subsequent introductions from Asia from 1827 onwards. Since this time, domestic pigs have been accidentally and deliberately released into the wild, and significant feral pig populations have become established, resulting in the declaration of this species as a class 2 pest in Queensland. The overall objective of this study was to assess the population genetic structure of feral pigs in far north Queensland, in particular to enable the delineation of demographically independent management units. The identification of ecologically meaningful management units using molecular techniques can assist in targeting feral pig control to bring about effective long-term management. Molecular genetic analysis was undertaken on 434 feral pigs from 35 localities between Tully and Innisfail. Seven polymorphic and unlinked microsatellite loci were screened, and fixation indices (FST and analogues) and Bayesian clustering methods were used to identify population structure and management units in the study area. The hyper-variable mitochondrial control region (D-loop) of 35 feral pigs was also sequenced to identify pig ancestry. Three management units were identified in the study at a scale of 25 to 35 km. Even with the strong pattern of genetic structure identified in the study area, some evidence of long-distance dispersal and/or translocation was found, as a small number of individuals exhibited ancestry from a management unit other than the one in which they were sampled. Overall, gene flow in the study area was found to be influenced by environmental features such as topography and land use, but no distinct or obvious natural or anthropogenic geographic barriers were identified. Furthermore, strong evidence was found for non-random mating between pigs of European and Asian breeds, indicating that feral pig ancestry influences their population genetic structure. Phylogenetic analysis revealed two distinct mitochondrial DNA clades, representing Asian and European domestic pig breeds. A significant finding was that pigs of Asian origin living in Innisfail and south Tully were not mating randomly with European-breed pigs populating the nearby Mission Beach area. Feral pig control should be implemented in each of the management units identified in this study and coordinated across properties within each management unit to prevent re-colonisation from adjacent localities. The adjacent rainforest and National Park estates, as well as the rainforest-crop boundary, should be included in a simultaneous control operation for greater success.
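To illustrate the fixation-index (FST) approach named above, the sketch below computes Wright's FST for a single biallelic locus and two equal-sized populations. It is a didactic toy, not the study's multi-locus, multi-population estimator.

```python
# Wright's F_ST for one biallelic locus and two equal-sized demes:
# F_ST = (H_T - H_S) / H_T, where H is expected heterozygosity.
# Didactic sketch only; real studies use multi-locus estimators.
def fst_two_populations(p1: float, p2: float) -> float:
    """F_ST from the allele frequency of one allele in each deme."""
    p_bar = (p1 + p2) / 2.0
    h_t = 2.0 * p_bar * (1.0 - p_bar)                # total heterozygosity
    h_s = (2*p1*(1 - p1) + 2*p2*(1 - p2)) / 2.0      # mean within-deme value
    return (h_t - h_s) / h_t if h_t > 0 else 0.0

print(fst_two_populations(0.8, 0.3))   # strongly differentiated demes (~0.25)
```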
Abstract:
Introduction: Recent advances in the planning and delivery of radiotherapy treatments have resulted in improvements in the accuracy and precision with which therapeutic radiation can be administered. As the complexity of the treatments increases it becomes more difficult to predict the dose distribution in the patient accurately. Monte Carlo (MC) methods have the potential to improve the accuracy of the dose calculations and are increasingly being recognised as the ‘gold standard’ for predicting dose deposition in the patient [1]. This project has three main aims: 1. To develop tools that enable the transfer of treatment plan information from the treatment planning system (TPS) to a MC dose calculation engine. 2. To develop tools for comparing the 3D dose distributions calculated by the TPS and the MC dose engine. 3. To investigate the radiobiological significance of any differences between the TPS and MC patient dose distributions in terms of Tumour Control Probability (TCP) and Normal Tissue Complication Probability (NTCP). The work presented here addresses the first two aims. Methods: (1a) Plan importing: A database of commissioned accelerator models (Elekta Precise and Varian 2100CD) has been developed for treatment simulations in the MC system (EGSnrc/BEAMnrc). Beam descriptions can be exported from the TPS using the widespread DICOM framework, and the resultant files are parsed with the assistance of a software library (PixelMed Java DICOM Toolkit). The information in these files (such as the monitor units, the jaw positions and gantry orientation) is used to construct a plan-specific accelerator model which allows an accurate simulation of the patient treatment field. (1b) Dose simulation: The calculation of a dose distribution requires patient CT images, which are prepared for the MC simulation using a tool (CTCREATE) packaged with the system. Beam simulation results are converted to absolute dose per MU using calibration factors recorded during the commissioning process and treatment simulation. These distributions are combined according to the MU meter settings stored in the exported plan to produce an accurate description of the prescribed dose to the patient. (2) Dose comparison: TPS dose calculations can be obtained using either a DICOM export or by direct retrieval of binary dose files from the file system. Dose difference, gamma evaluation and normalised dose difference algorithms [2] were employed for the comparison of the TPS and MC dose distributions. These implementations are independent of spatial resolution and able to interpolate for comparisons. Results and Discussion: The tools successfully produced Monte Carlo input files for a variety of plans exported from the Eclipse (Varian Medical Systems) and Pinnacle (Philips Medical Systems) planning systems, ranging in complexity from a single uniform square field to a five-field step-and-shoot IMRT treatment. The simulation of collimated beams has been verified geometrically, and validation of dose distributions in a simple body phantom (QUASAR) will follow. The developed dose comparison algorithms have also been tested with controlled dose distribution changes. Conclusion: The capability of the developed code to independently process treatment plans has been demonstrated.
A number of limitations exist: only static fields are currently supported (dynamic wedges and dynamic IMRT will require further development), and the process has not been tested for planning systems other than Eclipse and Pinnacle. The tools will be used to independently assess the accuracy of the current treatment planning system dose calculation algorithms for complex treatment deliveries such as IMRT in treatment sites where patient inhomogeneities are expected to be significant. Acknowledgements: Computational resources and services used in this work were provided by the HPC and Research Support Group, Queensland University of Technology, Brisbane, Australia. Pinnacle dose parsing made possible with the help of Paul Reich, North Coast Cancer Institute, North Coast, New South Wales.
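The gamma evaluation mentioned under aim (2) follows the well-known formulation of Low et al.; a simplified 1-D version is sketched below. It is a generic sketch of the published algorithm, not the project's resolution-independent, interpolating implementation, and the 3%/3 mm tolerances are conventional assumptions.

```python
# Simplified 1-D gamma evaluation (Low et al.) comparing a reference
# (TPS) dose profile against an evaluated (MC) profile. Generic sketch
# only; tolerances are the conventional 3% / 3 mm global criteria.
import numpy as np

def gamma_1d(x, dose_ref, dose_eval, dose_tol=0.03, dist_tol_mm=3.0):
    """Return the gamma index at each reference point."""
    gammas = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dist2 = ((x - xi) / dist_tol_mm) ** 2
        dose2 = ((dose_eval - di) / (dose_tol * dose_ref.max())) ** 2
        gammas[i] = np.sqrt(np.min(dist2 + dose2))
    return gammas

# Example: a slightly shifted Gaussian profile; gamma <= 1 means agreement.
x = np.linspace(-50, 50, 201)          # position (mm)
ref = np.exp(-(x / 20.0) ** 2)         # reference (TPS) dose
ev = np.exp(-((x - 1.0) / 20.0) ** 2)  # evaluated (MC) dose
print((gamma_1d(x, ref, ev) <= 1.0).mean())  # gamma pass rate
```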
Abstract:
Dehydration of food materials requires the removal of water. This removal of moisture prevents the growth and reproduction of microorganisms that cause decay and minimizes many of the moisture-driven deterioration reactions (Brennan, 1994). However, during food drying, many other changes occur simultaneously, resulting in a modified overall quality (Kompany et al., 1993). Among the physical attributes of dried food materials, porosity and microstructure are important ones that can dominate other quality attributes of dried foods (Aguilera et al., 2000). In addition, these two quality attributes are affected by process conditions, material composition and the raw structure of the foodstuff. In this work, the temperature and moisture distribution within food materials during microwave drying is considered in order to observe its influence on the microstructure and porosity of the finished product. Apple is the selected material for this work. Generally, most food materials exhibit non-uniform moisture content. To develop a non-uniform temperature distribution, food materials were dried in a microwave oven at different power levels (Chua et al., 2000). First, a temperature and moisture model is simulated in COMSOL Multiphysics. A digital camera and Image Pro Premier software are then used to observe the moisture distribution, and a thermal imaging camera the temperature distribution. Finally, the microstructure and porosity of the food materials are obtained using a scanning electron microscope and porosity-measuring devices, respectively. Moisture and temperature distribution during drying influence the microstructure and porosity significantly. In particular, regions of high temperature and moisture content show less porosity and more rupture. These findings support those of Halder et al. (2011) and Rahman et al. (1990). On the other hand, regions of low temperature and moisture exhibit a uniform microstructure and high porosity. This work therefore assists in better understanding the role of moisture and temperature distribution in predicting the microstructure and porosity of dried food materials.
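As a rough illustration of the kind of transport model solved in COMSOL Multiphysics in this work, the sketch below advances a minimal 1-D explicit finite-difference moisture-diffusion model for a drying slab. All parameter values (diffusivity, slab size, initial and surface moisture) are placeholder assumptions, not figures from the study, and the coupled temperature field is omitted.

```python
# Minimal 1-D explicit finite-difference moisture diffusion in a slab.
# All values below are placeholder assumptions for illustration.
import numpy as np

D = 1e-9        # effective moisture diffusivity (m^2/s), assumed
L = 0.01        # slab half-thickness (m), assumed
nx, dt = 51, 1.0
dx = L / (nx - 1)
assert D * dt / dx**2 <= 0.5          # explicit-scheme stability criterion

M = np.full(nx, 0.85)                 # initial moisture (wet basis), assumed
for step in range(3600):              # one hour of drying, 1 s steps
    M[-1] = 0.10                      # surface in equilibrium with drying air
    M[1:-1] += D * dt / dx**2 * (M[2:] - 2 * M[1:-1] + M[:-2])
    M[0] = M[1]                       # symmetry condition at the slab centre
print(f"centre moisture after 1 h: {M[0]:.3f}")
```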
Abstract:
Synthetic goethite and goethite thermally treated at different temperatures were used to remove phosphate from sewage. The effect of annealing temperature on phosphate removal over time was investigated. X-ray diffraction (XRD), transmission electron microscopy (TEM), N2 adsorption and desorption (BET), and infrared emission spectroscopy (FT-IES) were utilized to characterize the phase, morphology, specific surface area, pore distribution, and surface groups of the samples. The results show that the annealed products of goethite at temperatures over 250 °C are hematite with a morphology similar to the original goethite but with different hydroxyl-group content and surface area. Increasing temperature causes a decrease in hydroxyl groups, a surface area that first increases and then decreases (14.8 → 110.4 → 12.6 m²/g), and the subsequent formation of nanoscale pores. The variation rates of hydroxyl groups and surface area, based on FT-IES and BET respectively, are used to evaluate the effect of annealing temperature on phosphate removal. From all of the characterization techniques together, it is concluded that the changes in phosphate removal essentially result from the combined variation of hydroxyl groups and surface area.
Abstract:
Sol-gel synthesis in varied gravity is a relatively new topic in the literature, and further investigation is required to explore its full potential as a method to synthesise novel materials. Although trialled for systems such as silica, the specific application of varied-gravity synthesis to other sol-gel systems such as titanium had not previously been undertaken. Current literature methods for the synthesis of sol-gel materials in reduced gravity could not be applied to titanium sol-gel processing, so a new strategy had to be developed in this study. To successfully conduct experiments in varied gravity, a refined titanium sol-gel chemical precursor had to be developed which allowed the single-solution precursor to remain unreactive at temperatures up to 50 °C and only begin to react when exposed to a pressure decrease under vacuum. Owing to the novelty of this precursor, a thorough characterisation of the reaction precursors was undertaken using techniques such as nuclear magnetic resonance, infrared and UV-Vis spectroscopy in order to achieve a sufficient understanding of the precursor chemistry and kinetic stability. This understanding was then used to propose gelation reaction mechanisms under varied gravity conditions. Two unique reactor systems were designed and built with the specific purpose of allowing the effects of varied gravity (high, normal, reduced) during synthesis of titanium sol-gels to be studied. The first system was a centrifuge capable of providing high-gravity environments of up to 70 g for extended periods, whilst applying a 100 mbar vacuum and a temperature of 40-50 °C to the reaction chambers. The second system, to be used in the QUT Microgravity Drop Tower Facility, was also required to provide the same thermal and vacuum conditions used in the centrifuge, but had to operate autonomously during free fall. Through the use of post-synthesis characterisation techniques such as Raman spectroscopy, X-ray diffraction (XRD) and N2 adsorption, it was found that increased gravity levels during synthesis had the greatest effect on the final products. Samples produced in reduced and normal gravity formed amorphous gels containing very small particles with moderate surface areas, whereas crystalline anatase (TiO2) was found to form in samples synthesised above 5 g, with significant increases in crystallinity, particle size and surface area observed when samples were produced at gravity levels up to 70 g. It is proposed that for samples produced in higher gravity, an increased concentration gradient of water forms at the bottom of the reacting film due to forced convection. The particles formed in higher gravity diffuse downward towards this excess of water, which favours the condensation reaction of the remaining sol-gel precursors with the particles, promoting increased particle growth. Due to the removal of downward convection in reduced gravity, particle growth through condensation reactions is physically hindered and hydrolysis reactions are favoured instead. Another significant finding of this work was that anatase could be produced at relatively low temperatures of 40-50 °C, instead of via the conventional method of calcination above 450 °C, solely through sol-gel synthesis at higher gravity levels.
It is hoped that the outcomes of this research will lead to an increased understanding of the effects of gravity on the chemical synthesis of titanium sol-gels, potentially enabling the development of improved products suitable for diverse applications such as semiconductor or catalyst materials, as well as significantly reducing production and energy costs by manufacturing these materials at much lower temperatures.
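For context on the g-levels quoted above, the relative centrifugal force of a rotor is ω²r/g; the sketch below inverts this to find the rotor speed required for 70 g. The rotor radius used here is an illustrative assumption, not a dimension of the reactor described in the abstract.

```python
# Relating centrifuge speed to relative centrifugal force: RCF = omega^2 r / g.
# The 0.5 m rotor radius is an illustrative assumption.
import math

def rpm_for_rcf(rcf: float, radius_m: float) -> float:
    """Rotational speed (rpm) giving the requested centrifugal force."""
    omega = math.sqrt(rcf * 9.81 / radius_m)   # angular speed (rad/s)
    return omega * 60.0 / (2.0 * math.pi)

print(f"{rpm_for_rcf(70.0, 0.5):.0f} rpm for 70 g at r = 0.5 m")
```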
Abstract:
The increased adoption of business process management approaches, tools and practices has led organizations to accumulate large collections of business process models. These collections can easily include hundreds to thousands of models, especially in the context of multinational corporations or as a result of organizational mergers and acquisitions. A concrete problem is thus how to maintain these large repositories in such a way that their complexity does not hamper their practical usefulness as a means to describe and communicate business operations. This paper proposes a technique to automatically infer suitable names for business process models and fragments thereof. This technique is useful for model abstraction scenarios, for instance when user-specific views of a repository are required, or as part of a refactoring initiative aimed at reducing the repository’s complexity. The technique is grounded in an adaptation of the theory of meaning to the realm of business process models. We implemented the technique in a prototype tool and conducted an extensive evaluation using three process model collections from practice and a case study involving process modelers with different levels of experience.
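As a point of contrast with the technique proposed in the paper, the sketch below shows a naive baseline that names a process fragment from the most frequent verb and object tokens of its activity labels. It is a strawman illustration only; the paper's approach, grounded in an adapted theory of meaning, is considerably more sophisticated.

```python
# A naive baseline for naming a process fragment from its activity
# labels: combine the most frequent leading verb and trailing object.
# Strawman illustration only, not the paper's technique.
from collections import Counter

def naive_fragment_name(activity_labels: list[str]) -> str:
    verbs = Counter(label.split()[0].lower() for label in activity_labels)
    objects = Counter(label.split()[-1].lower() for label in activity_labels)
    return f"{verbs.most_common(1)[0][0]} {objects.most_common(1)[0][0]}".title()

labels = ["Check invoice", "Approve invoice", "Archive invoice", "Check order"]
print(naive_fragment_name(labels))   # -> "Check Invoice"
```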
Abstract:
In recent times, fire has become a major disaster in buildings due to the increase in fire loads resulting from modern furniture and lightweight construction. This has caused problems for safe evacuation and rescue activities, and in some instances has led to the collapse of buildings (Lewis, 2008 and Nyman, 2002). Recent research has shown that the actual fire resistance of building elements exposed to building fires can be less than their specified fire resistance rating (Lennon and Moore, 2003, Jones, 2002, Nyman, 2002 and Abecassis-Empis et al. 2008). Conventionally, the fire rating of building elements is determined using fire tests based on the standard fire time-temperature curve given in ISO 834. This ISO 834 curve was developed in the early 1900s, when wood was the basic fuel source. In reality, modern buildings make use of thermoplastic materials, synthetic foams and fabrics. These materials have high calorific values and increase both the speed of fire growth and the heat release rate, thus increasing the fire severity beyond that of the standard fire curve. This suggests the need to use realistic fire time-temperature curves in tests. Real building fire temperature profiles depend on the fuel load representing the combustible building contents, the ventilation openings, and the thermal properties of wall lining materials. Fuel loads were selected based on a review, and suitable realistic fire time-temperature curves were developed. Fire tests were then performed on plasterboard-lined, light gauge steel framed walls using the developed realistic fire curves. This paper presents the details of the development of suitable realistic building fire curves, and the fire tests using them. It describes the fire performance of the tested walls in comparison to standard fire tests and highlights the differences between them. This research has shown the need to use realistic fire exposures in assessing the fire resistance rating of building elements.
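The ISO 834 standard curve discussed above follows the well-known expression T(t) = 20 + 345·log₁₀(8t + 1), with t in minutes and T in °C; the sketch below simply evaluates it at a few times. Realistic design fires additionally depend on fuel load, ventilation and lining properties, which this standard curve does not capture.

```python
# The ISO 834 standard fire time-temperature curve:
# T(t) = 20 + 345 * log10(8t + 1), t in minutes, T in degrees C.
import math

def iso834_temperature(t_minutes: float) -> float:
    """Furnace gas temperature (deg C) after t minutes of the standard fire."""
    return 20.0 + 345.0 * math.log10(8.0 * t_minutes + 1.0)

for t in (5, 15, 30, 60, 90):
    print(f"{t:3d} min: {iso834_temperature(t):6.0f} degC")
```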
Abstract:
Background: In sub-tropical and tropical Queensland, a legacy of poor housing design, minimal building regulations with few compliance measures, an absence of post-construction performance evaluation, and various social and market factors has led to a high and growing penetration of, and reliance on, air conditioners to provide thermal comfort for occupants. The pervasive reliance on air conditioners has arguably impacted on building forms, changed cultural expectations of comfort and social practices for achieving comfort, and may have resulted in a loss of skills in designing and constructing high-performance building envelopes. Aim: The aim of this paper is to report on the initial outcomes of a project that sought to determine how the predicted building thermal performance of twenty-five houses in subtropical and tropical Queensland compared with objective performance measures and comfort performance as perceived by occupants. The purpose of the project was to shed light on the role of various supply chain agents in the realisation of thermal performance outcomes. Methodology: The case study methodology embraced a socio-technical approach incorporating building science and sociology. Building simulation was used to model thermal performance under controlled comfort assumptions and adaptive comfort conditions. Actual indoor climate conditions were measured by temperature and relative humidity sensors placed throughout each house, whilst occupants’ expectations of thermal comfort and their self-reported behaviours were gathered through semi-structured interviews and periodic comfort surveys. Thermal imaging and air infiltration tests, along with building design documents, were analysed to evaluate the influence of various supply chain agents on the actual performance outcomes. Results: The results clearly show that in the housing supply chain – from designer to constructor to occupant – there is limited understanding by each agent of their role in contributing to, or inhibiting, occupants’ comfort.
Abstract:
The techniques of environmental scanning electron microscopy (ESEM) and Raman microscopy have been used to elucidate, respectively, the morphological changes and the nature of the adsorbed species on silver(I) oxide powder under methanol oxidation conditions. Heating Ag2O in either water vapour or oxygen resulted first in the decomposition of silver(I) oxide to polycrystalline silver at 578 K, followed by sintering of the particles at higher temperatures. Raman spectroscopy revealed the presence of subsurface oxygen and hydroxyl species in addition to surface hydroxyl groups after interaction with water vapour. Similar species were identified following exposure to oxygen in an ambient atmosphere. This behaviour indicated that the polycrystalline silver formed from Ag2O decomposition was substantially more reactive than silver produced by electrochemical methods. The interaction of water at elevated temperatures subsequent to heating silver(I) oxide in oxygen resulted in a significantly enhanced concentration of subsurface hydroxyl species. The reaction of methanol with Ag2O at high temperatures was notable in that an inhibition of silver grain growth was observed. Substantial structural modification of the silver(I) oxide material was induced by catalytic etching in a methanol/air mixture. In particular, "pin-hole" formation was observed to occur at temperatures in excess of 773 K, and these "pin-holes" coalesced to form large-scale defects under typical industrial reaction conditions. Raman spectroscopy revealed that the working surface consisted mainly of subsurface oxygen and surface Ag=O species. The relative lack of subsurface hydroxyl species suggested that the desorption of such moieties was the cause of the "pin-hole" formation.
Abstract:
This thesis introduced Bayesian statistics as an analysis technique for isolating resonant frequency information in in-cylinder pressure signals taken from internal combustion engines. Applications of these techniques are relevant to engine design (performance and noise), energy conservation (fuel consumption) and alternative fuel evaluation. The use of Bayesian statistics over traditional techniques allowed a more in-depth investigation into previously difficult-to-isolate engine parameters on a cycle-by-cycle basis. Specifically, these techniques facilitated the determination of the start of premixed and diffusion combustion, and allowed the in-cylinder temperature profile to be resolved on individual consecutive engine cycles. Dr Bodisco further showed the utility of the Bayesian analysis techniques by applying them to in-cylinder pressure signals taken from a compression ignition engine run with fumigated ethanol.
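Once a resonant frequency has been isolated from the pressure signal (for instance by Bayesian spectral estimation as above), the bulk in-cylinder temperature follows from the acoustic relation f = B·c/(πD) with c = √(γRT), known as Draper's equation; the sketch below inverts that relation. The mode constant, ratio of specific heats, bore and example frequency are illustrative assumptions, not values from the thesis.

```python
# Invert Draper's equation f = B * c / (pi * D), c = sqrt(gamma * R * T),
# to recover bulk gas temperature from an identified resonant frequency.
# All numeric values below are illustrative assumptions.
import math

def bulk_temperature(f_hz: float, bore_m: float,
                     mode_const: float = 1.841,   # first circumferential mode
                     gamma: float = 1.35,         # ratio of specific heats, assumed
                     R: float = 287.0) -> float:  # gas constant (J/kg/K), air
    """Bulk gas temperature (K) implied by resonant frequency f_hz."""
    c = f_hz * math.pi * bore_m / mode_const      # speed of sound (m/s)
    return c * c / (gamma * R)                    # temperature (K)

# Example: a ~4.5 kHz first-mode resonance in a 110 mm bore cylinder.
print(f"{bulk_temperature(4500.0, 0.110):.0f} K")
```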
Abstract:
Genomic DNA obtained from patient whole blood samples is a key element for genomic research. The advantages and disadvantages, in terms of time-efficiency, cost-effectiveness and laboratory requirements, of the procedures available to isolate nucleic acids need to be considered before choosing any particular method. These characteristics have not been fully evaluated for some laboratory techniques, such as the salting-out method for DNA extraction, which has been excluded from comparison in the studies published to date. We compared three different protocols (a traditional salting-out method, a modified salting-out method and a commercially available kit method) to determine the most cost-effective and time-efficient method of extracting DNA. We extracted genomic DNA from whole blood samples obtained from breast cancer patient volunteers and compared the products in terms of quantity (concentration of DNA extracted and DNA obtained per ml of blood used) and quality (260/280 ratio and polymerase chain reaction product amplification) of the yield obtained. The three methods showed no statistically significant differences in the final result, but when the time and cost associated with each method were accounted for, the differences were very significant. The modified salting-out method resulted in seven- and twofold reductions in cost compared to the commercial kit and the traditional salting-out method, respectively, and reduced the time from 3 days to 1 hour compared to the traditional salting-out method. This highlights the modified salting-out method as a suitable choice for laboratories and research centres, particularly when dealing with a large number of samples.