Abstract:
Perez-Losada et al. [1] analyzed 72 complete genomes corresponding to nine mammalian (67 strains) and two avian (5 strains) polyomavirus species using maximum likelihood and Bayesian methods of phylogenetic inference. Because some data for two of the genomes in their work are no longer available in GenBank, in this work we analyze the phylogenetic relationships of the remaining 70 complete genomes, corresponding to nine mammalian (65 strains) and two avian (5 strains) polyomavirus species, using a dynamical language model approach developed by our group (Yu et al. [26]). This distance method does not require sequence alignment, deriving the species phylogeny from overall similarities of the complete genomes. Our best tree separates the bird polyomaviruses (avian polyomaviruses and goose hemorrhagic polyomaviruses) from the mammalian polyomaviruses, which supports the idea of splitting the genus into two subgenera. Such a split is consistent with the different viral life strategies of each group. In the mammalian polyomavirus subgenus, mouse polyomaviruses (MPV), simian viruses 40 (SV40), BK viruses (BKV) and JC viruses (JCV) are grouped as different branches, as expected. The topology of our best tree is quite similar to that of the tree constructed by Perez-Losada et al.
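The abstract does not detail the dynamical language model itself, but the alignment-free idea it relies on, comparing genomes by word (k-mer) composition rather than by aligned positions, can be sketched with plain k-mer frequency vectors and a cosine-style distance. The choice of k and the distance below are illustrative simplifications, not the actual method of Yu et al.

```python
from collections import Counter
from math import sqrt

def kmer_freqs(seq, k=3):
    """Relative frequencies of all overlapping k-mers in a sequence."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {kmer: c / total for kmer, c in counts.items()}

def cosine_distance(f1, f2):
    """1 - cosine similarity between two sparse frequency vectors."""
    keys = set(f1) | set(f2)
    dot = sum(f1.get(x, 0.0) * f2.get(x, 0.0) for x in keys)
    n1 = sqrt(sum(v * v for v in f1.values()))
    n2 = sqrt(sum(v * v for v in f2.values()))
    return 1.0 - dot / (n1 * n2)

# Toy sequences: identical genomes give distance 0; genomes that
# share no k-mers give the maximum distance of 1.
d_same = cosine_distance(kmer_freqs("ACGTACGTAC"), kmer_freqs("ACGTACGTAC"))
d_diff = cosine_distance(kmer_freqs("ACGTACGTAC"), kmer_freqs("TTTTGGGGCC"))
```

A distance matrix built this way over all 70 genomes can then be fed to a standard tree-building method (e.g. neighbour joining) to derive the phylogeny without any alignment step.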
Abstract:
Information Retrieval is an important albeit imperfect component of information technologies. A problem of insufficient diversity of retrieved documents is one of the primary issues studied in this research. This study shows that this problem leads to a decrease in precision and recall, the traditional measures of information retrieval effectiveness. This thesis presents an adaptive IR system based on the theory of adaptive dual control. The aim of the approach is the optimization of retrieval precision after all feedback has been issued. This is done by increasing the diversity of retrieved documents. This study shows that the value of recall reflects this diversity. The Probability Ranking Principle is viewed in the literature as the “bedrock” of current probabilistic Information Retrieval theory. Neither the proposed approach nor other methods of diversification of retrieved documents from the literature conform to this principle. This study shows by counterexample that the Probability Ranking Principle does not in general lead to optimal precision in a search session with feedback (for which it may not have been designed but is actively used). To accomplish the aim, retrieval precision of the search session should be optimized with a multistage stochastic programming model. However, such models are computationally intractable. Therefore, approximate linear multistage stochastic programming models are derived in this study, where the multistage improvement of the probability distribution is modelled using the proposed feedback correctness method. The proposed optimization models are based on several assumptions, starting with the assumption that Information Retrieval is conducted in units of topics. The use of clusters is the primary reason why a new method of probability estimation is proposed. The adaptive dual control, topic-based IR (ADTIR) system was evaluated in a series of experiments conducted on the Reuters, Wikipedia and TREC collections of documents.
The Wikipedia experiment revealed that the dual control feedback mechanism improves precision and S-recall when all the underlying assumptions are satisfied. In the TREC experiment, this feedback mechanism was compared to a state-of-the-art adaptive IR system based on BM25 term weighting and the Rocchio relevance feedback algorithm. The baseline system exhibited better effectiveness than the cluster-based optimization model of ADTIR. The main reason for this was the insufficient quality of the clusters generated from the TREC collection, which violated the underlying assumption.
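The baseline in the TREC experiment couples BM25 term weighting with Rocchio relevance feedback. A minimal sketch of the classic Rocchio update follows; the term-weight dictionaries and the textbook default coefficients (alpha, beta, gamma) are illustrative assumptions, not the thesis's implementation.

```python
def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Classic Rocchio update: move the query vector toward the centroid
    of relevant documents and away from the centroid of non-relevant
    ones.  Vectors are {term: weight} dicts; negative weights are
    clipped to zero, as is conventional."""
    terms = set(query)
    for d in relevant + nonrelevant:
        terms |= set(d)
    new_query = {}
    for t in terms:
        pos = sum(d.get(t, 0.0) for d in relevant) / len(relevant) if relevant else 0.0
        neg = sum(d.get(t, 0.0) for d in nonrelevant) / len(nonrelevant) if nonrelevant else 0.0
        w = alpha * query.get(t, 0.0) + beta * pos - gamma * neg
        if w > 0:
            new_query[t] = w
    return new_query

# One feedback round: "feedback" enters the query from the relevant
# document, while the non-relevant term is suppressed entirely.
new_q = rocchio({"retrieval": 1.0},
                [{"retrieval": 0.5, "feedback": 0.8}],
                [{"encryption": 0.9}])
```

In a session with repeated feedback, this update is applied after each round of relevance judgements, which is exactly the setting in which the thesis argues the Probability Ranking Principle need not yield optimal session precision.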
Abstract:
While my PhD is practice-led research, it is my contention that such an inquiry cannot develop as long as it tries to emulate other models of research. I assert that practice-led research needs to account for an epistemological unknown or uncertainty central to the practice of art. By focusing on what I call the artist's 'voice,' I will show how this 'voice' is comprised of a dual motivation—'articulate' representation and 'inarticulate' affect—which do not even necessarily derive from the artist. Through an analysis of art-historical precedents, critical literature (the work of Jean-François Lyotard and Andrew Benjamin, the critical methods of philosophy, phenomenology and psychoanalysis) as well as of my own painting and digital arts practice, I aim to demonstrate how this unknown or uncertain aspect of artistic inquiry can be mapped. It is my contention that practice-led research needs to address and account for this dualistic 'voice' in order to more comprehensively articulate its unique contribution to research culture.
Abstract:
This work describes how many of the navigation techniques developed by the robotics research community over the last decade may be applied to a class of underground mining vehicles (LHDs and haul trucks). We review the current state of the art in this area and conclude that there are essentially two basic methods of navigation that are applicable. We describe an implementation of a reactive navigation system on a 30-tonne LHD which has achieved full-speed operation at a production mine.
Abstract:
The high morbidity and mortality associated with atherosclerotic coronary vascular disease (CVD) and its complications are being lessened by increased knowledge of risk factors, effective preventative measures and proven therapeutic interventions. However, significant CVD morbidity remains, and sudden cardiac death continues to be a presenting feature for some patients subsequently diagnosed with CVD. Coronary vascular disease is also the leading cause of anaesthesia-related complications. Stress electrocardiography/exercise testing is predictive of 10-year risk of CVD events, and the cardiovascular variables used to score this test are monitored peri-operatively. Similar physiological time-series datasets are being subjected to data mining methods for the prediction of medical diagnoses and outcomes. This study aims to find predictors of CVD using anaesthesia time-series data and patient risk factor data. Several pre-processing and predictive data mining methods are applied to these data. Physiological time-series data related to anaesthetic procedures are subjected to pre-processing methods for removal of outliers, calculation of moving averages, and data summarisation and data abstraction. Feature selection methods of both wrapper and filter types are applied to derived physiological time-series variable sets alone and to the same variables combined with risk factor variables. The ability of these methods to identify subsets of highly correlated but non-redundant variables is assessed. The major dataset is derived from the entire anaesthesia population, and subsets of this population are considered to be at increased anaesthesia risk based on their need for more intensive monitoring (invasive haemodynamic monitoring and additional ECG leads).
Because of the unbalanced class distribution in the data, majority-class under-sampling and the Kappa statistic, together with misclassification rate and area under the ROC curve (AUC), are used for evaluation of models generated using different prediction algorithms. The performance of models derived from feature-reduced datasets reveals the filter method, Cfs subset evaluation, to be most consistently effective, although Consistency-derived subsets tended to slightly increase accuracy at markedly increased complexity. The use of misclassification rate (MR) for model performance evaluation is influenced by class distribution. This could be eliminated by consideration of the AUC or Kappa statistic, as well as by evaluation of subsets with an under-sampled majority class. The noise and outlier removal pre-processing methods produced models with MR ranging from 10.69 to 12.62, with the lowest value being for data from which both outliers and noise were removed (MR 10.69). For the raw time-series dataset, MR is 12.34. Feature selection reduces MR to between 9.8 and 10.16, with time-segmented summary data (dataset F) MR being 9.8 and raw time-series summary data (dataset A) being 9.92. However, for all datasets based on time-series data alone, the complexity is high. For most pre-processing methods, Cfs could identify a subset of correlated and non-redundant variables from the time-series-only datasets, but models derived from these subsets are of one leaf only. MR values are consistent with class distribution in the subset folds evaluated in the n-fold cross-validation method. For models based on Cfs-selected time-series-derived and risk factor (RF) variables, the MR ranges from 8.83 to 10.36, with dataset RF_A (raw time-series data and RF) being 8.85 and dataset RF_F (time-segmented time-series variables and RF) being 9.09.
The models based on counts of outliers and counts of data points outside the normal range (dataset RF_E), and on derived variables based on time series transformed using Symbolic Aggregate Approximation (SAX) with associated time-series pattern cluster membership (dataset RF_G), perform the least well, with MR of 10.25 and 10.36 respectively. For coronary vascular disease prediction, nearest neighbour (NNge) and the support vector machine based method, SMO, have the highest MR, of 10.1 and 10.28, while logistic regression (LR) and the decision tree (DT) method, J48, have MR of 8.85 and 9.0 respectively. DT rules are the most comprehensible and clinically relevant. The predictive accuracy increase achieved by the addition of risk factor variables to models based on time-series variables is significant. The addition of time-series-derived variables to models based on risk factor variables alone is associated with a trend to improved performance. Data mining of feature-reduced anaesthesia time-series variables together with risk factor variables can produce compact and moderately accurate models able to predict coronary vascular disease. Decision tree analysis of time-series data combined with risk factor variables yields rules which are more accurate than models based on time-series data alone. The limited additional value provided by electrocardiographic variables, when compared to the use of risk factors alone, is similar to recent suggestions that exercise electrocardiography (exECG) under standardised conditions has limited additional diagnostic value over risk factor analysis and symptom pattern. The pre-processing used in this study had limited effect when time-series variables and risk factor variables are used as model input.
In the absence of risk factor input, the use of time-series variables after outlier removal, and of time-series variables based on physiological variable values being outside the accepted normal range, is associated with some improvement in model performance.
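Why the Kappa statistic complements the misclassification rate under an unbalanced class distribution can be seen in a small sketch. The confusion-matrix counts below are illustrative numbers, not results from the thesis: a trivial "always predict the majority class" model attains a low MR yet a Kappa of zero, while a genuinely discriminating model shows a high Kappa.

```python
def confusion_stats(tp, fp, fn, tn):
    """Misclassification rate and Cohen's Kappa from a 2x2 confusion
    matrix (tp, fp, fn, tn are counts)."""
    n = tp + fp + fn + tn
    mr = (fp + fn) / n                       # misclassification rate
    p_obs = (tp + tn) / n                    # observed agreement
    # expected agreement by chance, from the marginal frequencies
    p_yes = ((tp + fp) / n) * ((tp + fn) / n)
    p_no = ((fn + tn) / n) * ((fp + tn) / n)
    p_exp = p_yes + p_no
    kappa = (p_obs - p_exp) / (1 - p_exp)
    return mr, kappa

# 10% positive class: predicting "negative" every time looks good on
# MR (0.10) but Kappa exposes it as chance-level (0.0).
mr_trivial, kappa_trivial = confusion_stats(tp=0, fp=0, fn=10, tn=90)
# A model that actually finds most positives: slightly better MR,
# but a much higher Kappa.
mr_good, kappa_good = confusion_stats(tp=9, fp=5, fn=1, tn=85)
```

Under-sampling the majority class has a similar motivation: it rebalances the folds so that MR itself becomes a less misleading summary.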
Abstract:
Patent systems around the world are being pressed to recognise and protect challengingly new and exciting subject matter in order to keep pace with the rapid technological advancement of our age and the fact we are moving into the era of the ‘knowledge economy’. This rapid development and pressure to expand the bounds of what has traditionally been recognised as patentable subject matter has created uncertainty regarding what it is that the patent system is actually supposed to protect. Among other things, the patent system has had to contend with uncertainty surrounding claims to horticultural and agricultural methods, artificial living micro-organisms, methods of treating the human body, computer software and business methods. The contentious issue of the moment is one at whose heart lies the important distinction between what is a mere abstract idea and what is properly an invention deserving of the monopoly protection afforded by a patent. That question is whether purely intangible inventions, being methods that do not involve a physical aspect or effect or cause a physical transformation of matter, constitute patentable subject matter. This paper goes some way to addressing these uncertainties by considering how the Australian approach to the question can be informed by developments arising in the United States of America, and canvassing some of the possible lessons we in Australia might learn from the approaches taken thus far in the United States.
Abstract:
This thesis is devoted to the study of linear relationships in symmetric block ciphers. A block cipher is designed so that the ciphertext is produced as a nonlinear function of the plaintext and secret master key. However, linear relationships within the cipher can still exist if the texts and components of the cipher are manipulated in a number of ways, as shown in this thesis. There are four main contributions of this thesis. The first contribution is the extension of the applicability of integral attacks from word-based to bit-based block ciphers. Integral attacks exploit the linear relationship between texts at intermediate stages of encryption. This relationship can be used to recover subkey bits in a key recovery attack. In principle, integral attacks can be applied to bit-based block ciphers. However, specific tools to define the attack on these ciphers are not available. This problem is addressed in this thesis by introducing a refined set of notations to describe the attack. The bit-pattern-based integral attack is successfully demonstrated on reduced-round variants of the block ciphers Noekeon, Present and Serpent. The second contribution is the discovery of a very small system of equations that describes the LEX-AES stream cipher. LEX-AES is based heavily on the 128-bit-key (16-byte) Advanced Encryption Standard (AES) block cipher. In one instance, the system contains 21 equations and 17 unknown bytes. This is very close to the upper limit for an exhaustive key search, which is 16 bytes. One only needs to acquire 36 bytes of keystream to generate the equations. Therefore, the security of this cipher depends on the difficulty of solving this small system of equations. The third contribution is the proposal of an alternative method to measure diffusion in the linear transformation of Substitution-Permutation-Network (SPN) block ciphers. Currently, the branch number is widely used for this purpose.
It is useful for estimating the possible success of differential and linear attacks on a particular SPN cipher. However, the measure does not give information on the number of input bits that are left unchanged by the transformation when producing the output bits. The new measure introduced in this thesis is intended to complement the current branch number technique. The measure is based on fixed points and simple linear relationships between the input and output words of the linear transformation. It represents the average fraction of input words to a linear diffusion transformation that are not effectively changed by the transformation. This measure is applied to the block ciphers AES, ARIA, Serpent and Present. It is shown that, except for Serpent, the linear transformations used in the block ciphers examined do not behave as expected for a random linear transformation. The fourth contribution is the identification of linear paths in the nonlinear round function of the SMS4 block cipher. The SMS4 block cipher is used as a standard in the Chinese WLAN Authentication and Privacy Infrastructure (WAPI), and hence the round function should exhibit a high level of nonlinearity. However, the findings in this thesis on the existence of linear relationships show that this is not the case. It is shown that in some exceptional cases the first four rounds of SMS4 are effectively linear. In these cases, the effective number of rounds for SMS4 is reduced by four, from 32 to 28. The findings raise questions about the security provided by SMS4, and might provide clues on the existence of a flaw in the design of the cipher.
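The two diffusion measures discussed here, the differential branch number and the fraction of input words left unchanged, can be sketched on a toy XOR-linear layer. The 4-word map below is an illustrative assumption for demonstration only; it is not a transformation from any of the ciphers examined in the thesis, and the fixed-word count is a simplified stand-in for the thesis's measure.

```python
from itertools import product

def word_weight(words):
    """Number of nonzero words in a tuple."""
    return sum(1 for w in words if w != 0)

def branch_number(transform, nwords, wordvals):
    """Differential branch number: the minimum, over all nonzero
    inputs, of (nonzero input words + nonzero output words)."""
    best = None
    for x in product(range(wordvals), repeat=nwords):
        if all(w == 0 for w in x):
            continue
        s = word_weight(x) + word_weight(transform(x))
        best = s if best is None else min(best, s)
    return best

def unchanged_fraction(transform, nwords, wordvals):
    """Average fraction of input words passed through unchanged."""
    hits = total = 0
    for x in product(range(wordvals), repeat=nwords):
        y = transform(x)
        hits += sum(1 for xi, yi in zip(x, y) if xi == yi)
        total += nwords
    return hits / total

def mix(x):
    """Toy XOR-based linear layer on 4 words: each output word is the
    XOR of the other three input words (illustrative only)."""
    a, b, c, d = x
    return (b ^ c ^ d, a ^ c ^ d, a ^ b ^ d, a ^ b ^ c)

# Exhaustive evaluation over 4-bit words (16**4 inputs).
bn = branch_number(mix, nwords=4, wordvals=16)
uf = unchanged_fraction(mix, nwords=4, wordvals=16)
```

For this toy map the branch number is 4 (a single active input word always activates three output words), and on average 1/16 of input words pass through unchanged, which is exactly the chance rate for 4-bit words, so this particular map would look "random" under the fixed-word measure.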
Abstract:
During the past decade, a significant amount of research has been conducted internationally with the aim of developing, implementing, and verifying "advanced analysis" methods suitable for non-linear analysis and design of steel frame structures. Application of these methods permits comprehensive assessment of the actual failure modes and ultimate strengths of structural systems in practical design situations, without resort to simplified elastic methods of analysis and semi-empirical specification equations. Advanced analysis has the potential to extend the creativity of structural engineers and simplify the design process, while ensuring greater economy and more uniform safety with respect to the ultimate limit state. The application of advanced analysis methods has previously been restricted to steel frames comprising only members with compact cross-sections that are not subject to the effects of local buckling. This precluded the use of advanced analysis from the design of steel frames comprising a significant proportion of the most commonly used Australian sections, which are non-compact and subject to the effects of local buckling. This thesis contains a detailed description of research conducted over the past three years in an attempt to extend the scope of advanced analysis by developing methods that include the effects of local buckling in a non-linear analysis formulation, suitable for practical design of steel frames comprising non-compact sections. Two alternative concentrated plasticity formulations are presented in this thesis: the refined plastic hinge method and the pseudo plastic zone method. Both methods implicitly account for the effects of gradual cross-sectional yielding, longitudinal spread of plasticity, initial geometric imperfections, residual stresses, and local buckling. 
The accuracy and precision of the methods for the analysis of steel frames comprising non-compact sections have been established by comparison with a comprehensive range of analytical benchmark frame solutions. Both the refined plastic hinge and pseudo plastic zone methods are more accurate and precise than the conventional individual member design methods based on elastic analysis and specification equations. For example, the pseudo plastic zone method predicts the ultimate strength of the analytical benchmark frames with an average conservative error of less than one percent, and has an acceptable maximum unconservative error of less than five percent. The pseudo plastic zone model can allow the design capacity to be increased by up to 30 percent for simple frames, mainly due to the consideration of inelastic redistribution. The benefits may be even more significant for complex frames with significant redundancy, which provides greater scope for inelastic redistribution. The analytical benchmark frame solutions were obtained using a distributed plasticity shell finite element model. A detailed description of this model and the results of all 120 benchmark analyses are provided. The model explicitly accounts for the effects of gradual cross-sectional yielding, longitudinal spread of plasticity, initial geometric imperfections, residual stresses, and local buckling. Its accuracy was verified by comparison with a variety of analytical solutions and the results of three large-scale experimental tests of steel frames comprising non-compact sections. A description of the experimental method and test results is also provided.
Abstract:
This project set out to investigate the behaviour of a pole-frame house subjected to a lateral wind load. The behaviour of poles embedded in the ground was examined. The existing theoretical methods for determining the lateral load capacity of an embedded pole were reviewed, and three common methods of pole embedment were tested at different depths to gauge the response of poles and types of pole embedment to a lateral load. The most suitable embedment method was used in the foundation for a full-scale model pole house, which was constructed and tested at various stages during construction to examine the response of a pole house to lateral wind load. The full-scale testing was also used to monitor the effect of the various structural components on the overall stiffness of the house. The results from the full-scale tests were used to calibrate a computer model of a pole house, which could then be used to predict the behaviour of different configurations of pole house construction without the need for further expensive full-scale tests.
Abstract:
The Inflatable Rescue Boat (IRB) is arguably the most effective rescue tool used by the Australian surf lifesavers. The exceptional features of high mobility and rapid response have enabled it to become an icon on Australia's popular beaches. However, the IRB's extensive use within an environment that is as rugged as it is spectacular, has led it to become a danger to those who risk their lives to save others. Epidemiological research revealed lower limb injuries to be predominant, particularly the right leg. The common types of injuries were fractures and dislocations, as well as muscle or ligament strains and tears. The concern expressed by Surf Life Saving Queensland (SLSQ) and Surf Life Saving Australia (SLSA) led to a biomechanical investigation into this unique and relatively unresearched field. The aim of the research was to identify the causes of injury and propose processes that may reduce the instances and severity of injury to surf lifesavers during IRB operation. Following a review of related research, a design analysis of the craft was undertaken as an introduction to the craft, its design and uses. The mechanical characteristics of the vessel were then evaluated and the accelerations applied to the crew in the IRB were established through field tests. The data were then combined and modelled in the 3-D mathematical modelling and simulation package, MADYMO. A tool was created to compare various scenarios of boat design and methods of operation to determine possible mechanisms to reduce injuries. The results of this study showed that under simulated wave loading the boats flex around a pivot point determined by the position of the hinge in the floorboard. It was also found that the accelerations experienced by the crew exhibited similar characteristics to road vehicle accidents. Staged simulations indicated the attributes of an optimum foam in terms of thickness and density. 
Likewise, modelling of the boat and crew produced simulations that predicted realistic crew response to tested variables. Unfortunately, the observed lack of adherence to the SLSA footstrap Standard has impeded successful epidemiological and modelling outcomes. If uniformity of boat setup can be assured then epidemiological studies will be able to highlight the influence of implementing changes to the boat design. In conclusion, the research provided a tool to successfully link the epidemiology and injury diagnosis to the mechanical engineering design through the use of biomechanics. This was a novel application of the mathematical modelling software MADYMO. Other craft can also be investigated in this manner to provide solutions to the problem identified and therefore reduce risk of injury for the operators.
Abstract:
Fourier transform (FT) Raman, Raman microspectroscopy and Fourier transform infrared (FTIR) spectroscopy have been used for the structural analysis and characterisation of untreated and chemically treated wool fibres. For FT-Raman spectroscopy, novel methods of sample presentation have been developed and optimised for the analysis of wool. No significant fluorescence was observed and the spectra could be obtained routinely. The stability of wool keratin to the laser source was investigated and the visual and spectroscopic signs of sample damage were established. Wool keratin was found to be extremely robust, with no signs of sample degradation observed for laser powers of up to 600 mW and for exposure times of up to seven and a half hours. Due to improvements in band resolution and signal-to-noise ratio, several previously unobserved spectral features have become apparent. The assignments of the Raman-active vibrational modes of wool have been reviewed and updated to include these features. The infrared spectroscopic techniques of attenuated total reflectance (ATR) and photoacoustic spectroscopy (PAS) have been used to examine shrinkproofed and mothproofed wool samples. Shrinkproofing is an oxidative chemical treatment used to selectively modify the surface of a wool fibre. Mothproofing is a chemical treatment applied to wool for the prevention of insect attack. The ability of PAS and ATR to vary the penetration depth by varying certain instrumental parameters was used to obtain spectra of the near-surface regions of these chemically treated samples. These spectra were compared with those taken with a greater penetration depth, which therefore represent more of the bulk wool sample. The PAS and ATR spectra demonstrated that oxidation was restricted to the near-surface layer of wool. Extensive curve fitting of ATR spectra of untreated wool indicated that the cuticle was composed of a mixed protein conformation, but predominantly that of an α-helix.
The cortex was proposed to be a mixture of both α-helical and β-pleated sheet protein conformations. These findings were supported by PAS depth-profiling results. Raman microspectroscopy was used in an extensive investigation of the molecular structure of the wool fibre. This included determining the orientation of certain functional groups within the wool fibre and the symmetry of particular vibrations. The orientation of bonds within the wool fibre was investigated by orienting the wool fibre axis parallel and then perpendicular to the plane of polarisation of the electric vector of the incident radiation. It was experimentally determined that the majority of C=O and N-H bonds of the peptide bond of wool lie parallel to the fibre axis. Additionally, a number of the important vibrations associated with the α-helix were also found to lie parallel to the fibre axis. Further investigation into the molecular structure of wool involved determining what effect stretching the wool fibre had on bond orientation. Raman spectra of stretched and unstretched wool fibres indicated that extension altered the orientation of the aromatic rings and the CH2 and CH3 groups of the amino acids. Curve-fitting results revealed that extension resulted in significant destruction of the α-helix structure and a substantial increase in the β-pleated sheet structure. Finally, depolarisation ratios were calculated for the Raman spectra. The vibrations associated with the aromatic rings of amino acids had very low ratios, which indicated that the vibrations were highly symmetrical.
Abstract:
Hydrocarbon spills on roads are a major safety concern for the driving public and can have severe cost impacts both on pavement maintenance and to the economy through disruption to services. The time taken to clean-up spills and re-open roads in a safe driving condition is an issue of increasing concern given traffic levels on major urban arterials. Thus, the primary aim of the research was to develop a sorbent material that facilitates rapid clean-up of road spills. The methodology involved extensive research into a range of materials (organic, inorganic and synthetic sorbents), comprehensive testing in the laboratory, scale-up and field, and product design (i.e. concept to prototype). The study also applied chemometrics to provide consistent, comparative methods of sorbent evaluation and performance. In addition, sorbent materials at every stage were compared against a commercial benchmark. For the first time, the impact of diesel on asphalt pavement has been quantified and assessed in a systematic way. Contrary to conventional thinking and anecdotal observations, the study determined that the action of diesel on asphalt was quite rapid (i.e. hours rather than weeks or months). This significant finding demonstrates the need to minimise the impact of hydrocarbon spills and the potential application of the sorbent option. To better understand the adsorption phenomenon, surface characterisation techniques were applied to selected sorbent materials (i.e. sand, organo-clay and cotton fibre). Brunauer Emmett Teller (BET) and thermal analysis indicated that the main adsorption mechanism for the sorbents occurred on the external surface of the material in the diffusion region (sand and organo-clay) and/or capillaries (cotton fibre). Using environmental scanning electron microscopy (ESEM), it was observed that adsorption by the interfibre capillaries contributed to the high uptake of hydrocarbons by the cotton fibre. 
Understanding the adsorption mechanism for these sorbents provided some guidance and a scientific basis for the selection of materials. The study determined that non-woven cotton mats were ideal sorbent materials for clean-up of hydrocarbon spills. The prototype sorbent was found to perform significantly better than the commercial benchmark, displaying the following key properties:
• superior hydrocarbon pick-up from the road pavement;
• high hydrocarbon retention capacity under an applied load;
• adequate field skid resistance post treatment;
• functional and easy to use in the field (e.g. routine handling, transportation, application and recovery);
• relatively inexpensive to produce due to the use of raw cotton fibre and a simple production process;
• environmentally friendly (e.g. renewable materials, non-toxic to the environment and operators, and biodegradable); and
• rapid response time (e.g. two minutes total clean-up time compared with thirty minutes for reference sorbents).
The major outcomes of the research project include: a) development of a specifically designed sorbent material suitable for cleaning up hydrocarbon spills on roads; b) submission of a patent application (serial number AU2005905850) for the prototype product; and c) preparation of a Commercialisation Strategy to advance the sorbent product to the next phase (i.e. R&D to product commercialisation).
Abstract:
This thesis applies Monte Carlo techniques to the study of X-ray absorptiometric methods of bone mineral measurement. These studies seek to obtain information that can be used in efforts to improve the accuracy of the bone mineral measurements. A Monte Carlo computer code for X-ray photon transport at diagnostic energies has been developed from first principles. This development was undertaken as there was no readily available code which included electron binding energy corrections for incoherent scattering and one of the objectives of the project was to study the effects of inclusion of these corrections in Monte Carlo models. The code includes the main Monte Carlo program plus utilities for dealing with input data. A number of geometrical subroutines which can be used to construct complex geometries have also been written. The accuracy of the Monte Carlo code has been evaluated against the predictions of theory and the results of experiments. The results show a high correlation with theoretical predictions. In comparisons of model results with those of direct experimental measurements, agreement to within the model and experimental variances is obtained. The code is an accurate and valid modelling tool. A study of the significance of inclusion of electron binding energy corrections for incoherent scatter in the Monte Carlo code has been made. The results show this significance to be very dependent upon the type of application. The most significant effect is a reduction of low angle scatter flux for high atomic number scatterers. To effectively apply the Monte Carlo code to the study of bone mineral density measurement by photon absorptiometry the results must be considered in the context of a theoretical framework for the extraction of energy dependent information from planar X-ray beams. Such a theoretical framework is developed and the two-dimensional nature of tissue decomposition based on attenuation measurements alone is explained. 
This theoretical framework forms the basis for analytical models of bone mineral measurement by dual energy X-ray photon absorptiometry techniques. Monte Carlo models of dual energy X-ray absorptiometry (DEXA) have been established. These models have been used to study the contribution of scattered radiation to the measurements. It has been demonstrated that the measurement geometry has a significant effect upon the scatter contribution to the detected signal. For the geometry of the models studied in this work the scatter has no significant effect upon the results of the measurements. The model has also been used to study a proposed technique which involves dual energy X-ray transmission measurements plus a linear measurement of the distance along the ray path. This is designated as the DPA(+) technique. The addition of the linear measurement enables the tissue decomposition to be extended to three components. Bone mineral, fat and lean soft tissue are the components considered here. The results of the model demonstrate that the measurement of bone mineral using this technique is stable over a wide range of soft tissue compositions and hence would indicate the potential to overcome a major problem of the two-component DEXA technique. However, the results also show that the accuracy of the DPA(+) technique is highly dependent upon the composition of the non-mineral components of bone and has poorer precision (approximately twice the coefficient of variation) than the standard DEXA measurements. These factors may limit the usefulness of the technique. These studies illustrate the value of Monte Carlo computer modelling of quantitative X-ray measurement techniques. The Monte Carlo models of bone densitometry measurement have:
1. demonstrated the significant effects of the measurement geometry upon the contribution of scattered radiation to the measurements;
2. demonstrated that the statistical precision of the proposed DPA(+) three-tissue-component technique is poorer than that of the standard DEXA two-tissue-component technique;
3. demonstrated that the proposed DPA(+) technique has difficulty providing accurate simultaneous measurement of body composition in terms of a three-component model of fat, lean soft tissue and bone mineral; and
4. provided a knowledge base for input to decisions about development (or otherwise) of a physical prototype DPA(+) imaging system.
The Monte Carlo computer code, data, utilities and associated models represent a set of significant, accurate and valid modelling tools for quantitative studies of physical problems in the fields of diagnostic radiology and radiography.
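The two-component decomposition that underlies standard DEXA can be written as a 2×2 linear system in the areal densities of bone mineral and soft tissue, given log-attenuations measured at two beam energies. The sketch below is a generic illustration of that textbook solve, with made-up attenuation coefficients, not the models developed in the thesis:

```python
def dexa_decompose(a_low, a_high, mu_bone, mu_soft):
    """Solve the two-component dual-energy system for areal densities
    (g/cm^2) of bone mineral (t_b) and soft tissue (t_s):

        a_low  = mu_bone[0]*t_b + mu_soft[0]*t_s
        a_high = mu_bone[1]*t_b + mu_soft[1]*t_s

    a_low/a_high are the measured log-attenuations ln(I0/I) at the low
    and high beam energies; mu_* are mass attenuation coefficients
    (cm^2/g) of each component at those energies (illustrative values).
    """
    det = mu_bone[0] * mu_soft[1] - mu_bone[1] * mu_soft[0]
    t_b = (a_low * mu_soft[1] - a_high * mu_soft[0]) / det   # Cramer's rule
    t_s = (mu_bone[0] * a_high - mu_bone[1] * a_low) / det
    return t_b, t_s
```

The DPA(+) extension described above adds a measured path length as a third equation, allowing fat and lean soft tissue to be separated; the price, as the abstract notes, is roughly doubled coefficient of variation.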
Resumo:
In 1984, the International Agency for Research on Cancer determined that working in the primary aluminium production process was associated with exposure to certain polycyclic aromatic hydrocarbons (PAHs) that are probably carcinogenic to humans. Key sources of PAH exposure within the occupational environment of a prebake aluminium smelter are processes associated with the use of coal-tar pitch. Despite the potential for exposure via inhalation, ingestion and dermal absorption, to date occupational exposure limits exist only for airborne contaminants. This study, based at a prebake aluminium smelter in Queensland, Australia, compares exposures of workers who came into contact with PAHs from coal-tar pitch in the smelter’s anode plant (n = 69) and cell-reconstruction area (n = 28), and a non-production control group (n = 17). Literature relevant to PAH exposures in industry and methods of monitoring and assessing occupational hazards associated with these compounds is reviewed, and methods relevant to PAH exposure are discussed in the context of the study site. The study uses air monitoring of PAHs to quantify exposure via the inhalation route and biological monitoring of 1-hydroxypyrene (1-OHP) in workers’ urine to assess total body burden from all routes of entry. Exposures determined for similar exposure groups, sampled over three years, are compared with published occupational PAH exposure limits and/or guidelines. Results of paired personal air monitoring samples and samples collected for 1-OHP urine monitoring do not correlate: the predictive ability of the benzene-soluble fraction (BSF) in personal air monitoring in relation to 1-OHP levels in urine is poor (adjusted R² < 1%), even after adjustment for the potential confounders of smoking status and use of personal protective equipment.
For static air BSF levels in the anode plant, the median was 0.023 mg/m3 (range 0.002–0.250), almost twice as high as in the cell-reconstruction area (median = 0.013 mg/m3, range 0.003–0.154). In contrast, median BSF personal exposure in the anode plant was 0.036 mg/m3 (range 0.003–0.563), significantly lower than the median measured in the reconstruction area (0.054 mg/m3, range 0.003–0.371) (p = 0.041). The observation that median 1-OHP levels in urine were significantly higher in the anode plant than in the reconstruction area (6.62 µmol/mol creatinine, range 0.09–33.44 and 0.17 µmol/mol creatinine, range 0.001–2.47, respectively) parallels the static air measurements of BSF rather than the personal air monitoring results (p < 0.001). Results of air measurements and biological monitoring show that tasks associated with paste mixing and anode forming in the forming area of the anode plant resulted in higher PAH exposure than tasks in the non-forming areas; median 1-OHP levels in urine from workers in the forming area (14.20 µmol/mol creatinine, range 2.02–33.44) were almost four times higher than those obtained from workers in the non-forming area (4.11 µmol/mol creatinine, range 0.09–26.99; p < 0.001). Results justify use of biological monitoring as an important adjunct to existing measures of PAH exposure in the aluminium industry. Although monitoring of 1-OHP in urine may not be an accurate measure of biological effect on an individual, it is a better indicator of total PAH exposure than BSF in air. In January 2005, interim study results prompted a plant management decision to modify control measures to reduce skin exposure. Comparison of 1-OHP in urine from workers pre- and post-modifications showed substantial downward trends. Exposure via the dermal route was identified as a contributor to overall dose. Reduction in 1-OHP urine concentrations achieved by reducing skin exposure demonstrate the importance of exposure via this alternative pathway. 
Finally, control measures are recommended to ameliorate risk associated with PAH exposure in the primary aluminium production process, and suggestions for future research include development of methods capable of more specifically monitoring carcinogenic constituents of PAH mixtures, such as benzo[a]pyrene.
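The adjusted R² statistic cited above (BSF in air predicting urinary 1-OHP) penalises the ordinary R² for the number of predictors, so a near-zero value indicates that the air measurement explains essentially none of the biological variation. A minimal illustration for a one-predictor least-squares fit, using synthetic data rather than the study's measurements:

```python
def adjusted_r2(x, y, n_predictors=1):
    """Adjusted R^2 for a simple least-squares fit of y on x:
    R2_adj = 1 - (1 - R2) * (n - 1) / (n - p - 1)."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    ss_res = sum((yi - (slope * xi + intercept)) ** 2
                 for xi, yi in zip(x, y))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 - (1.0 - r2) * (n - 1) / (n - n_predictors - 1)
```

Unlike plain R², the adjusted form can go negative when the predictor carries no information, which is why it is the more honest statistic for the weak BSF-to-1-OHP relationship reported here.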
Resumo:
Gel dosimeters are of increasing interest in the field of radiation oncology as the only truly three-dimensional integrating radiation dosimeters. A range of ferrous-sulphate and polymer gel dosimeters exists. To be of use, they must be water-equivalent. For the gel itself, water equivalence is governed by its radiological properties, which are determined by its composition. In the context of calibration of gel dosimeters, there is the added complexity of the calibration geometry: the presence of containment vessels may influence the absorbed dose. Five methods of calibration are modelled here using the Monte Carlo method. It is found that the Fricke gel best matches water for most of the calibration methods, and that the best calibration method involves the use of a large tub into which multiple fields of different dose are directed. The least accurate calibration method involves the use of a long test tube along which a depth-dose curve yields multiple calibration points.
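Whichever geometry is used, each calibration method ultimately reduces to a set of (dose, response) points fitted to a calibration curve that maps a measured gel response (e.g. an MRI relaxation rate) back to absorbed dose. A minimal sketch assuming a linear dose-response, which real gels only approximate over a limited dose range; the numbers are illustrative:

```python
def fit_calibration(doses, responses):
    """Least-squares straight-line fit response = a*dose + b, returning
    a function that inverts the fit: measured response -> absorbed dose."""
    n = len(doses)
    mx = sum(doses) / n
    my = sum(responses) / n
    a = (sum((d - mx) * (r - my) for d, r in zip(doses, responses))
         / sum((d - mx) ** 2 for d in doses))
    b = my - a * mx
    return lambda response: (response - b) / a
```

The accuracy comparison in the abstract then comes down to how faithfully each geometry delivers the nominal dose to each calibration point: a depth-dose curve along a test tube compounds any vessel perturbation along the beam path, whereas separate fields into a large water-like tub keep each point closer to its reference condition.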