964 results for Probable Number Technique


Relevance:

20.00%

Publisher:

Abstract:

There is significant toxicological evidence of the effects of ultrafine particles (< 100 nm) on human health (WHO 2005). Studies show that the number concentration of particles has been associated with adverse human health effects (Englert 2004). This work is part of a major study called 'Ultrafine Particles from Traffic Emissions and Children's Health' (UPTECH), which seeks to determine the effect of exposure to traffic-related ultrafine particles on children's health in schools (http://www.ilaqh.qut.edu.au/Misc/UPTECH%20Home.htm). The main aims of this analysis are to quantify the spatial variation of particle number concentration (PNC) in a microscale environment and to identify the main affecting parameters and their contribution levels.


The technique of femoral cement-in-cement revision is well established, but there are no previous series reporting its use on the acetabular side at the time of revision total hip arthroplasty. We describe the surgical technique and report the outcome of 60 consecutive cement-in-cement revisions of the acetabular component at a mean follow-up of 8.5 years (range 5-12 years). All had a radiologically and clinically well-fixed acetabular cement mantle at the time of revision. 29 patients died. No case was lost to follow-up. The 2 most common indications for acetabular revision were recurrent dislocation (77%) and to complement a femoral revision (20%). There were 2 cases of aseptic cup loosening (3.3%) requiring re-revision. No other hip was clinically or radiologically loose (96.7%) at latest follow-up. One case was re-revised for infection, 4 for recurrent dislocation and 1 for disarticulation of a constrained component. At 5 years, the Kaplan-Meier survival rate was 100% for aseptic loosening and 92.2% (95% CI: 84.8-99.6%) with revision for all causes as the endpoint. These results support the use of the cement-in-cement revision technique in appropriate cases on the acetabular side. Theoretical advantages include preservation of bone stock, reduced operating time, reduced risk of complications and durable fixation.


Bananas are one of the world's most important crops, serving as a staple food and an important source of income for millions of people in the subtropics. Pests and diseases are a major constraint to banana production. To prevent the spread of pests and disease, farmers are encouraged to use disease- and insect-free planting material obtained by micropropagation. This option, however, does not always exclude viruses, and concern remains over the quality of planting material. Therefore, there is a demand for effective and reliable virus indexing procedures for tissue culture (TC) material. Reliable diagnostic tests are currently available for all of the economically important viruses of bananas with the exception of Banana streak viruses (BSV, Caulimoviridae, Badnavirus). Development of a reliable diagnostic test for BSV is complicated by the significant serological and genetic variation reported for BSV isolates, and the presence of endogenous BSV (eBSV). Current PCR- and serological-based diagnostic methods for BSV may not detect all species of BSV, and PCR-based methods may give false positives because of the presence of eBSV. Rolling circle amplification (RCA) has been reported as a technique to detect BSV which can also discriminate between episomal and endogenous BSV sequences. However, the method is too expensive for large-scale screening of samples in developing countries, and little information is available regarding its sensitivity. Therefore, the development of reliable PCR-based assays is still considered the most appropriate option for large-scale screening of banana plants for BSV. 
This MSc project aimed to refine and optimise the protocols for BSV detection, with a particular focus on developing reliable PCR-based diagnostics. Initially, the appropriateness and reliability of PCR and RCA as diagnostic tests for BSV detection were assessed by testing 45 field samples of banana collected from nine districts in the Eastern region of Uganda in February 2010. This research also aimed to investigate the diversity of BSV in eastern Uganda, identifying the BSV species present and characterising any new BSV species. Out of the 45 samples tested, 38 and 40 samples were considered positive by PCR and RCA, respectively. Six different species of BSV, namely Banana streak IM virus (BSIMV), Banana streak MY virus (BSMYV), Banana streak OL virus (BSOLV), Banana streak UA virus (BSUAV), Banana streak UL virus (BSULV) and Banana streak UM virus (BSUMV), were detected by PCR and confirmed by RCA and sequencing. No new species were detected, but this was the first report of BSMYV in Uganda. Although RCA was demonstrated to be suitable for broad-range detection of BSV, it proved time-consuming and laborious for identification in field samples. Due to the disadvantages associated with RCA, attempts were made to develop a reliable PCR-based assay for the specific detection of episomal BSOLV, Banana streak GF virus (BSGFV), BSMYV and BSIMV. For BSOLV and BSGFV, the integrated sequences exist in rearranged, repeated and partially inverted portions at their site of integration. Therefore, for these two viruses, primer sets were designed by mapping previously published sequences of their endogenous counterparts onto published sequences of the episomal genomes. For BSOLV, two primer sets were designed, while for BSGFV a single primer set was designed. 
The episomal specificity of these primer sets was assessed by testing 106 plant samples collected during surveys in Kenya and Uganda, and 33 leaf samples from a wide range of banana cultivars maintained in TC at the Maroochy Research Station of the Department of Employment, Economic Development and Innovation (DEEDI), Queensland. All of these samples had previously been tested for episomal BSV by RCA, and for both BSOLV and BSGFV by PCR using published primer sets. The outcome from these analyses was that the newly designed primer sets for BSOLV and BSGFV were able to distinguish between episomal BSV and eBSV in most cultivars with some B-genome component. In some samples, however, amplification was observed using the putative episomal-specific primer sets where episomal BSV was not identified using RCA. This may reflect a difference in the sensitivity of PCR compared to RCA, or possibly the presence of an eBSV sequence of different conformation. Since the sequences of the respective eBSVs for BSMYV and BSIMV in the M. balbisiana genome are not available, a series of random primer combinations was tested in an attempt to find potential episomal-specific primer sets for BSMYV and BSIMV. Of an initial 20 primer combinations screened for BSMYV detection on a small number of control samples, 11 primer sets appeared to be episomal-specific. However, subsequent testing of two of these primer combinations on a larger number of control samples produced some inconsistent results which will require further investigation. Testing of the 25 primer combinations for episomal-specific detection of BSIMV on a number of control samples showed that none were able to discriminate between episomal and endogenous BSIMV. The final component of this research project was the development of an infectious clone of a BSV endemic in Australia, namely BSMYV. This was considered important to enable the generation of large amounts of diseased plant material needed for further research. 
A terminally redundant fragment (≈1.3 × the BSMYV genome) was cloned and transformed into Agrobacterium tumefaciens strain AGL1, and used to inoculate 12 healthy banana plants of the cultivar Cavendish (Williams) by three different methods. At 12 weeks post-inoculation, (i) four of the five banana plants inoculated by corm injection showed characteristic BSV symptoms while the remaining plant was wilting/dying, (ii) three of the five banana plants inoculated by needle-pricking of the stem showed BSV symptoms, one plant was symptomless and the remaining plant had died, and (iii) both banana plants inoculated by leaf infiltration were symptomless. When banana leaf samples were tested for BSMYV by PCR and RCA, BSMYV was confirmed in all banana plants showing symptoms, including those that were wilting and/or dying. The results from this research have provided several avenues for further research. By completely sequencing all variants of eBSOLV and eBSGFV and fully sequencing the eBSIMV and eBSMYV regions, episomal BSV-specific primer sets for all eBSVs could potentially be designed that avoid all integrants of that particular BSV species. Furthermore, the development of an infectious BSV clone will enable large numbers of BSV-infected plants to be generated for further testing of the sensitivity of RCA compared to other more established assays such as PCR. The development of infectious clones also opens the possibility of virus-induced gene silencing studies in banana.


Motor unit number estimation (MUNE) is a method which aims to provide a quantitative indicator of progression of diseases that lead to loss of motor units, such as motor neurone disease. However, the development of a reliable, repeatable and fast real-time MUNE method has hitherto proved elusive. Ridall et al. (2007) implemented a reversible jump Markov chain Monte Carlo (RJMCMC) algorithm to produce a posterior distribution for the number of motor units, using a Bayesian hierarchical model that takes into account biological information about motor unit activation. However, we find that the approach can be unreliable for some datasets, since it can suffer from poor cross-dimensional mixing. Here we focus on improved inference by marginalising over latent variables to create the likelihood. In particular, we explore how this can improve the RJMCMC mixing and investigate alternative approaches that utilise the likelihood (e.g. DIC (Spiegelhalter et al., 2002)). For this model, the marginalisation is over latent variables which, for a larger number of motor units, involves an intractable summation over all combinations of a set of latent binary variables whose joint sample space increases exponentially with the number of motor units. We provide a tractable and accurate approximation for this quantity and also investigate simulation approaches incorporated into RJMCMC using results of Andrieu and Roberts (2009).
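As a rough sketch of the marginalisation described above (not the thesis code), the sum over latent binary firing vectors can be computed by brute force when the number of motor units is small; the unit amplitudes `amps`, firing probabilities `probs` and noise scale `sigma` are hypothetical model quantities:

```python
import itertools
import math

def marginal_likelihood(y, amps, probs, sigma):
    """Brute-force marginalisation over all 2**N binary firing
    vectors: P(y) = sum_z P(z) * Normal(y | sum_i z_i*a_i, sigma^2).
    Tractable only for small N; the thesis approximates this sum."""
    total = 0.0
    for z in itertools.product([0, 1], repeat=len(amps)):
        prior = 1.0
        mean = 0.0
        for zi, a, p in zip(z, amps, probs):
            prior *= p if zi else (1.0 - p)   # independent firing prior
            mean += zi * a                    # summed response of active units
        total += prior * math.exp(-0.5 * ((y - mean) / sigma) ** 2) / (
            sigma * math.sqrt(2 * math.pi))
    return total
```

The number of terms is 2**N, which is exactly why a tractable approximation is needed for larger numbers of motor units.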


Exponential growth of genomic data in the last two decades has made manual analyses impractical for all but trial studies. As genomic analyses have become more sophisticated, and move toward comparisons across large datasets, computational approaches have become essential. One of the most important biological questions is to understand the mechanisms underlying gene regulation. Genetic regulation is commonly investigated and modelled through the use of transcriptional regulatory network (TRN) structures. These model the regulatory interactions between two key components: transcription factors (TFs) and the target genes (TGs) they regulate. Transcriptional regulatory networks have proven to be invaluable scientific tools in bioinformatics. When used in conjunction with comparative genomics, they have provided substantial insights into the evolution of regulatory interactions. Current approaches to regulatory network inference, however, omit two additional key entities: promoters and transcription factor binding sites (TFBSs). In this study, we attempted to explore the relationships among these regulatory components in bacteria. Our primary goal was to identify relationships that can assist in reducing the high false positive rates associated with transcription factor binding site predictions and thereby enhance the reliability of the inferred transcription regulatory networks. In our preliminary exploration of relationships between the key regulatory components in Escherichia coli transcription, we discovered a number of potentially useful features. The combination of location score and sequence dissimilarity scores increased de novo binding site prediction accuracy by 13.6%. Another important observation was made with regard to the relationship between transcription factors, grouped by their regulatory role, and corresponding promoter strength. 
Our study of E. coli σ70 promoters found support at the 0.1 significance level for our hypothesis that weak promoters are preferentially associated with activator binding sites to enhance gene expression, whilst strong promoters have more repressor binding sites to repress or inhibit gene transcription. Although the observations were specific to σ70, they nevertheless strongly encourage additional investigations when more experimentally confirmed data are available. In our preliminary exploration of relationships between the key regulatory components in E. coli transcription, we discovered a number of potentially useful features, some of which proved successful in reducing the number of false positives when applied to re-evaluate binding site predictions. Of chief interest was the relationship observed between promoter strength and TFs with respect to their regulatory role. Based on the common assumption that promoter homology positively correlates with transcription rate, we hypothesised that weak promoters would have more transcription factors that enhance gene expression, whilst strong promoters would have more repressor binding sites. The t-tests assessed for E. coli σ70 promoters returned a p-value of 0.072, which at the 0.1 significance level suggested support for our (alternative) hypothesis, albeit this trend may only be present for promoters whose corresponding TFBSs are either all repressors or all activators. Nevertheless, such suggestive results strongly encourage additional investigations when more experimentally confirmed data become available. Much of the remainder of the thesis concerns a machine learning study of binding site prediction, using SVM and kernel methods, principally the spectrum kernel. 
Spectrum kernels have been successfully applied in previous studies of protein classification [91, 92], as well as the related problem of promoter prediction [59], and we have here successfully applied the technique to refining TFBS predictions. The advantages provided by the SVM classifier were best seen in 'moderately' conserved transcription factor binding sites, as represented by our E. coli CRP case study. Inclusion of additional position feature attributes further increased accuracy by 9.1%, but more notable was the considerable decrease in false positive rate from 0.8 to 0.5 while retaining 0.9 sensitivity. Improved prediction of transcription factor binding sites is in turn extremely valuable in improving inference of regulatory relationships, a problem notoriously prone to false positive predictions. Here, the number of false regulatory interactions inferred using the conventional two-component model was substantially reduced when we integrated de novo transcription factor binding site predictions as an additional criterion for acceptance in a case study of inference in the Fur regulon. This initial work was extended to a comparative study of the iron regulatory system across 20 Yersinia strains. This work revealed interesting, strain-specific differences, especially between pathogenic and non-pathogenic strains. Such differences were made clear through interactive visualisations using the TRNDifi software developed as part of this work, and would have remained undetected using conventional methods. This approach led to the nomination of the Yfe iron-uptake system as a candidate for further wet-lab experimentation, due to its potential active functionality in non-pathogens and its known participation in full virulence of the bubonic plague strain. Building on this work, we introduced novel structures we have labelled 'regulatory trees', inspired by the phylogenetic tree concept. 
Instead of using gene or protein sequence similarity, the regulatory trees were constructed based on the number of similar regulatory interactions. While the common phylogenetic trees convey information regarding changes in gene repertoire, which we might regard as analogous to 'hardware', the regulatory tree informs us of the changes in regulatory circuitry, in some respects analogous to 'software'. In this context, we explored the 'pan-regulatory network' for the Fur system, the entire set of regulatory interactions found for the Fur transcription factor across a group of genomes. In the pan-regulatory network, emphasis is placed on how the regulatory network for each target genome is inferred from multiple sources instead of a single source, as is the common approach. The benefit of using multiple reference networks is a more comprehensive survey of the relationships and increased confidence in the regulatory interactions predicted. In the present study, we distinguish between relationships found across the full set of genomes, the 'core-regulatory-set', and interactions found only in a subset of the genomes explored, the 'sub-regulatory-set'. We found nine Fur target gene clusters present across the four genomes studied, this core set potentially identifying basic regulatory processes essential for survival. Species-level differences are seen at the sub-regulatory-set level; for example, the known virulence factors YbtA and PchR were found in Y. pestis and P. aeruginosa respectively, but were absent from E. coli and B. subtilis. Such factors, and the iron-uptake systems they regulate, are ideal candidates for wet-lab investigation to determine whether or not they are pathogen-specific. In this study, we employed a broad range of approaches to address our goals and assessed these methods using the Fur regulon as our initial case study. 
We identified a set of promising feature attributes, demonstrated their success in increasing transcription factor binding site prediction specificity while retaining sensitivity, and showed the importance of binding site predictions in enhancing the reliability of regulatory interaction inferences. Most importantly, these outcomes led to the introduction of a range of visualisations and techniques which are applicable across the entire bacterial spectrum and can be utilised in studies beyond the understanding of transcriptional regulatory networks.
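As a generic illustration of the kernel named above (not the thesis implementation), the k-spectrum kernel between two sequences is simply the inner product of their k-mer count vectors:

```python
from collections import Counter

def spectrum(seq, k):
    """Count every length-k substring (k-mer) of a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def spectrum_kernel(s, t, k):
    """Spectrum kernel: inner product of the two k-mer count vectors."""
    cs, ct = spectrum(s, k), spectrum(t, k)
    return sum(n * ct[kmer] for kmer, n in cs.items())
```

An SVM trained with this kernel never needs the explicit (exponentially large) k-mer feature space, only the pairwise kernel values.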


Reliable ambiguity resolution (AR) is essential to Real-Time Kinematic (RTK) positioning and its applications, since incorrect ambiguity fixing can lead to largely biased positioning solutions. A partial ambiguity fixing technique is developed to improve the reliability of AR, involving partial ambiguity decorrelation (PAD) and partial ambiguity resolution (PAR). Decorrelation transformation can substantially amplify the biases in the phase measurements, so the purpose of PAD is to find the optimum trade-off between decorrelation and worst-case bias amplification. The concept of PAR refers to the case where only a subset of the ambiguities can be fixed correctly to their integers in the integer least-squares (ILS) estimation system at high success rates. As a result, RTK solutions can be derived from these integer-fixed phase measurements. This is meaningful provided that the number of reliably resolved phase measurements is sufficiently large for least-squares estimation of the RTK solutions as well. Considering the GPS constellation alone, partially fixed measurements are often insufficient for positioning. The AR reliability is usually characterised by the AR success rate. In this contribution, an AR validation decision matrix is first introduced to understand the impact of the success rate. Moreover, the AR risk probability is included for a more complete evaluation of the AR reliability. We use 16 ambiguity variance-covariance matrices with different levels of success rate to analyse the relation between success rate and AR risk probability. Next, the paper examines how, during the PAD process, a bias in one measurement is propagated and amplified onto many others, leading to more than one wrong integer and affecting the success probability. Furthermore, the paper proposes a partial ambiguity fixing procedure with a predefined success rate criterion and a ratio-test in the ambiguity validation process. 
In this paper, the Galileo constellation is tested with simulated observations. Numerical results from our experiment clearly demonstrate that only when the computed success rate is very high can the AR validation provide decisions about the correctness of AR that are close to the real world, with both low AR risk and low false alarm probabilities. The results also indicate that the PAR procedure can automatically choose an adequate number of ambiguities to fix, at a given high success rate, from the multiple constellations instead of fixing all the ambiguities. This is a benefit that multiple GNSS constellations can offer.
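A common way to characterise the AR success rate mentioned above is the bootstrapped lower bound computed from the conditional standard deviations of the (decorrelated) ambiguities. The sketch below is an illustration of that bound and of a PAR-style subset selection against a success-rate criterion, not the paper's actual algorithm:

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bootstrap_success_rate(cond_stds):
    """Bootstrapped ILS success-rate lower bound:
    P_s = prod_i (2*Phi(1/(2*sigma_i)) - 1)."""
    p = 1.0
    for s in cond_stds:
        p *= 2.0 * phi(1.0 / (2.0 * s)) - 1.0
    return p

def partial_subset(cond_stds, p0):
    """PAR sketch: add ambiguities in order of increasing conditional
    std until the success rate would drop below the criterion p0."""
    kept = []
    for s in sorted(cond_stds):
        if bootstrap_success_rate(kept + [s]) < p0:
            break
        kept.append(s)
    return kept
```

With multiple constellations there are more precise ambiguities to draw from, so the subset satisfying a high `p0` is more often large enough for a positioning solution.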


An iterative strategy is proposed for finding the optimal rating and location of fixed and switched capacitors in distribution networks. The substation Load Tap Changer tap is also set during this procedure. A Modified Discrete Particle Swarm Optimization is employed in the proposed strategy. The objective function is composed of the distribution line loss cost and the capacitor investment cost. The line loss is calculated by approximating the load duration curve with multiple load levels. The constraints are the bus voltages and the feeder currents, which should be maintained within their standard ranges. For validation of the proposed method, two case studies are tested. The first case study is the semi-urban 37-bus distribution system connected at bus 2 of the Roy Billinton Test System, located on the secondary side of a 33/11 kV distribution substation. The second case is a 33 kV distribution network based on a modification of the 18-bus IEEE distribution system. The results are compared with prior publications to illustrate the accuracy of the proposed strategy.
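As a toy illustration of the discrete swarm idea (a simplified sketch, not the authors' Modified Discrete PSO, and with a caller-supplied `cost` standing in for the loss-plus-investment objective and its power-flow constraints), candidate solutions can be kept on the discrete grid of allowed capacitor ratings while being pulled toward personal and global bests:

```python
import random

def discrete_pso(cost, sizes, n_bus, n_particles=20, iters=50, seed=1):
    """Simplified discrete-swarm search: each particle assigns one of
    the allowed capacitor `sizes` (e.g. kvar, 0 = none) to every bus;
    genes are nudged toward personal/global bests or randomly mutated,
    so positions always stay on the discrete grid."""
    rng = random.Random(seed)
    pos = [[rng.choice(sizes) for _ in range(n_bus)]
           for _ in range(n_particles)]
    best = [p[:] for p in pos]               # personal bests
    gbest = min(best, key=cost)[:]           # global best
    for _ in range(iters):
        for i, p in enumerate(pos):
            for j in range(n_bus):
                # move toward one of the bests, or mutate to explore
                target = rng.choice((best[i][j], gbest[j]))
                p[j] = target if rng.random() < 0.7 else rng.choice(sizes)
            if cost(p) < cost(best[i]):
                best[i] = p[:]
                if cost(p) < cost(gbest):
                    gbest = p[:]
    return gbest
```

In the paper's setting, `cost` would run a load-flow over the multi-level load duration curve and penalise voltage and current limit violations.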


Deciding the appropriate population size and number of islands for distributed island-model genetic algorithms is often critical to the algorithm's success. This paper outlines a method that automatically searches for good combinations of island population sizes and the number of islands. The method is based on a race between competing parameter sets, and collaborative seeding of new parameter sets. This method is applicable to any problem, and makes distributed genetic algorithms easier to use by reducing the number of user-set parameters. The experimental results show that the proposed method robustly and reliably finds population and island settings that are comparable to those found with traditional trial-and-error approaches.
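The racing idea can be sketched as follows (illustrative only; here `evaluate` is a stand-in for a short GA run that returns a score to minimise for a given (population size, number of islands) pair):

```python
import random

def race(param_sets, evaluate, rounds=5, keep_frac=0.5, seed=0):
    """Racing sketch: repeatedly score every surviving parameter set
    on a short run and drop the worse fraction, so the evaluation
    budget concentrates on the most promising settings."""
    rng = random.Random(seed)
    survivors = list(param_sets)
    for _ in range(rounds):
        if len(survivors) == 1:
            break
        scored = sorted(survivors, key=lambda p: evaluate(p, rng))
        survivors = scored[:max(1, int(len(scored) * keep_frac))]
    return survivors[0]
```

The paper's method additionally seeds new parameter sets from the survivors, which this sketch omits.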


Use of focus groups as a technique of inquiry is gaining attention in the area of health-care research. This paper will report on the technique of focus group interviewing to investigate the role of the infection control practitioner. Infection control is examined as a specialty area of health-care practice that has received little research attention to date. Additionally, it is an area of practice that is expanding in response to social, economic and microbiological forces. The focus group technique in this study helped a group of infection control practitioners from urban, regional and rural areas throughout Queensland identify and categorise their daily work activities. The outcomes of this process were then analysed to identify the growth in breadth and complexity of the role of the infection control practitioner in the contemporary health-care environment. Findings indicate that the role of the infection control practitioner in Australia has undergone changes consistent with and reflecting changing models of health-care delivery.


The use of Bayesian methodologies for solving optimal experimental design problems has increased. Many of these methods have been found to be computationally intensive for design problems that require a large number of design points. A simulation-based approach that can be used to solve optimal design problems in which one is interested in finding a large number of (near) optimal design points for a small number of design variables is presented. The approach involves the use of lower dimensional parameterisations that consist of a few design variables, which generate multiple design points. Using this approach, one simply has to search over a few design variables, rather than searching over a large number of optimal design points, thus providing substantial computational savings. The methodologies are demonstrated on four applications, including the selection of sampling times for pharmacokinetic and heat transfer studies, and involve nonlinear models. Several Bayesian design criteria are also compared and contrasted, as well as several different lower dimensional parameterisation schemes for generating the many design points.
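The lower-dimensional parameterisation idea can be illustrated with a hypothetical two-variable scheme: a first sampling time `t1` and a spacing ratio `r` generate all n design points, so the optimisation searches over (t1, r) rather than n separate times:

```python
def geometric_times(t1, r, n):
    """Generate n sampling times from just two design variables:
    t_i = t1 * r**(i-1)."""
    return [t1 * r ** i for i in range(n)]

def best_design(utility, t1_grid, r_grid, n):
    """Coarse grid search over the two design variables, maximising a
    caller-supplied design utility of the generated time points."""
    return max(((t1, r) for t1 in t1_grid for r in r_grid),
               key=lambda d: utility(geometric_times(d[0], d[1], n)))
```

In a real Bayesian design problem, `utility` would be a simulation-based estimate of an expected utility (e.g. expected information gain), which is exactly where the computational savings of the two-variable search matter.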


Systematic studies that evaluate the quality of decision-making processes are relatively rare. Using the literature on decision quality, this research develops a framework to assess the quality of decision-making processes for resolving boundary conflicts in the Philippines. The evaluation framework breaks down the decision-making process into three components (the decision procedure, the decision method, and the decision unit) and is applied to two ex-post cases (one resolved and one unresolved) and one ex-ante case. The evaluation results from the resolved and unresolved cases show that the choice of decision method plays a minor role in resolving boundary conflicts, whereas the choice of decision procedure is more influential. In the end, a decision unit can choose a simple method to resolve the conflict. The ex-ante case presents a follow-up intended to resolve the unresolved case through a changed decision-making process in which the associated decision unit plans to apply the spatial multi-criteria evaluation (SMCE) tool as a decision method. The evaluation results from the ex-ante case confirm that SMCE has the potential to enhance decision quality because: a) it provides high quality as a decision method in this changed process, and b) the weaknesses associated with the decision unit and the decision procedure of the unresolved case were found to be eliminated in this process.
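At its core, a multi-criteria decision method of the kind mentioned above combines standardised criterion scores with weights. The sketch below is a generic weighted-sum illustration, not the actual SMCE tool:

```python
def rank_alternatives(scores, weights):
    """Weighted-sum sketch of multi-criteria evaluation: scores[alt]
    is a list of standardised criterion scores in [0, 1]; returns the
    alternatives ranked by weighted total, best first."""
    totals = {alt: sum(w * s for w, s in zip(weights, vals))
              for alt, vals in scores.items()}
    return sorted(totals, key=totals.get, reverse=True)
```

In a spatial setting, each "alternative" would be a candidate boundary option and each criterion a standardised map layer.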


Here, mixed convection boundary layer flow of a viscous fluid along a heated vertical semi-infinite plate is investigated in a non-absorbing medium. The relationship between convection and thermal radiation is established via a boundary condition of the second kind on the thermally radiating vertical surface. The governing boundary layer equations are transformed into dimensionless parabolic partial differential equations with the help of appropriate transformations, and the resulting system is solved numerically by applying a straightforward finite difference method along with the Gaussian elimination technique. It is worth noting that the Prandtl number, Pr, is taken to be small (<< 1), which is appropriate for liquid metals. Moreover, the numerical results are demonstrated graphically by showing the effects of the important physical parameters, namely the modified Richardson number (or mixed convection parameter), Ri*, and the surface radiation parameter, R, in terms of the local skin friction and local Nusselt number coefficients.
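Finite-difference discretisation of parabolic equations like these typically produces a tridiagonal linear system at each marching step, for which Gaussian elimination reduces to the Thomas algorithm. The sketch below is a generic solver, not the authors' code:

```python
def thomas(a, b, c, d):
    """Thomas algorithm: Gaussian elimination specialised to
    tridiagonal systems (a: sub-diagonal, b: main diagonal,
    c: super-diagonal, d: right-hand side; a[0] and c[-1] unused)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                     # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):            # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The solve is O(n) per step, which is what makes marching the boundary layer equations downstream cheap.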


Advances in algorithms for approximate sampling from a multivariable target function have led to solutions to challenging statistical inference problems that would otherwise not be considered by the applied scientist. Such sampling algorithms are particularly relevant to Bayesian statistics, since the target function is the posterior distribution of the unobservables given the observables. In this thesis we develop, adapt and apply Bayesian algorithms, whilst addressing substantive applied problems in biology and medicine as well as other applications. For an increasing number of high-impact research problems, the primary models of interest are often sufficiently complex that the likelihood function is computationally intractable. Rather than discard these models in favour of inferior alternatives, a class of Bayesian "likelihood-free" techniques (often termed approximate Bayesian computation (ABC)) has emerged in the last few years, which avoids direct likelihood computation through repeated sampling of data from the model and comparison of observed and simulated summary statistics. In Part I of this thesis we utilise sequential Monte Carlo (SMC) methodology to develop new algorithms for ABC that are more efficient in terms of the number of model simulations required and are almost black-box, since very little algorithmic tuning is required. In addition, we address the issue of deriving appropriate summary statistics to use within ABC via a goodness-of-fit statistic and indirect inference. Another important problem in statistics is the design of experiments: that is, how one should select the values of the controllable variables in order to achieve some design goal. The presence of parameter and/or model uncertainty is a computational obstacle when designing experiments, but can lead to inefficient designs if not accounted for correctly. The Bayesian framework accommodates such uncertainties in a coherent way. 
If the amount of uncertainty is substantial, it can be of interest to perform adaptive designs in order to accrue information to make better decisions about future design points. This is of particular interest if the data can be collected sequentially. In a sense, the current posterior distribution becomes the new prior distribution for the next design decision. Part II of this thesis creates new algorithms for Bayesian sequential design that accommodate parameter and model uncertainty using SMC. The algorithms are substantially faster than previous approaches, allowing the simulation properties of various design utilities to be investigated in a more timely manner. Furthermore, the approach offers convenient estimation of Bayesian utilities and other quantities that are particularly relevant in the presence of model uncertainty. Finally, Part III of this thesis tackles a substantive medical problem. A neurological disorder known as motor neurone disease (MND) progressively causes motor neurons to lose the ability to innervate the muscle fibres, causing the muscles to eventually waste away. When this occurs, the motor unit effectively 'dies'. There is no cure for MND, and fatality often results from a lack of muscle strength to breathe. The prognosis for many forms of MND (particularly amyotrophic lateral sclerosis (ALS)) is particularly poor, with patients usually surviving only a small number of years after the initial onset of disease. Measuring the progress of diseases of the motor units, such as ALS, is a challenge for clinical neurologists. Motor unit number estimation (MUNE) is an attempt to directly assess underlying motor unit loss, rather than using indirect techniques such as muscle strength assessment, which is generally unable to detect progression due to the body's natural attempts at compensation. 
Part III of this thesis builds upon a previous Bayesian technique, which develops a sophisticated statistical model that takes into account physiological information about motor unit activation and various sources of uncertainties. More specifically, we develop a more reliable MUNE method by applying marginalisation over latent variables in order to improve the performance of a previously developed reversible jump Markov chain Monte Carlo sampler. We make other subtle changes to the model and algorithm to improve the robustness of the approach.
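The likelihood-free (ABC) idea described in Part I can be sketched with its simplest rejection variant; the thesis develops far more efficient SMC-based versions, and the binomial example in the test is purely illustrative:

```python
import random

def abc_rejection(observed_stat, simulate, prior_sample, eps, n=10000, seed=0):
    """ABC rejection sketch: draw theta from the prior, simulate data
    from the model, and keep theta whenever the simulated summary
    statistic falls within eps of the observed one. The kept draws
    approximate the posterior without evaluating the likelihood."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n):
        theta = prior_sample(rng)
        if abs(simulate(theta, rng) - observed_stat) <= eps:
            accepted.append(theta)
    return accepted
```

The cost is the number of model simulations, which is exactly the quantity the SMC-based ABC algorithms in Part I are designed to reduce.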


Thin-sectioned samples mounted on glass slides with common petrographic epoxies cannot be easily removed (for subsequent ion-milling) by standard methods such as heating or dissolution in solvents. A method for the removal of such samples using a radio frequency (RF) generated oxygen plasma has been investigated for a number of typical petrographic and ceramic thin sections. Sample integrity and thickness were critical factors that determined the etching rate of adhesive and the survivability of the sample. Several tests were performed on a variety of materials in order to estimate possible heating or oxidation damage from the plasma. Temperatures in the plasma chamber remained below 138°C and weight changes in mineral powders etched for 76 hr were less than ±4%. A crystal of optical grade calcite showed no apparent surface damage after 48 hr of etching. Any damage from the oxygen plasma is apparently confined to the surface of the sample, and is removed during the ion-milling stage of transmission electron microscopy (TEM) sample preparation.