110 results for Papanicolaou staining method
at University of Queensland eSpace - Australia
Abstract:
An Escherichia coli cell-free transcription/translation system was used to explore the high-level incorporation of L-3,4-dihydroxyphenylalanine (DOPA) into proteins by replacing tyrosine with DOPA in the reaction mixtures. ESI-MS showed specific incorporation of DOPA in place of tyrosine. More than 90% DOPA incorporation at each tyrosine site was achieved, allowing the recording of clean 15N-HSQC NMR spectra. A redox-staining method specific for DOPA was shown to provide a sensitive and generally applicable method for assessing the cell-free production of proteins. Of four proteins produced in soluble form in the presence of tyrosine, two resulted in insoluble aggregates in the presence of high levels of DOPA. DOPA has been found in human proteins, often in association with various disease states that implicate protein aggregation and/or misfolding. Our results suggest that misfolded and aggregated proteins may result, in principle, from ribosome-mediated misincorporation of intracellular DOPA accumulated due to oxidative stress. High-yield cell-free protein expression systems are uniquely suited to obtain rapid information on solubility and aggregation of nascent polypeptide chains.
Abstract:
Hereditary nonpolyposis colorectal cancer syndrome (HNPCC) is an autosomal dominant condition accounting for 2–5% of all colorectal carcinomas as well as a small subset of endometrial, upper urinary tract and other gastrointestinal cancers. An assay to detect the underlying defect in HNPCC, inactivation of a DNA mismatch repair enzyme, would be useful in identifying HNPCC probands. Monoclonal antibodies against hMLH1 and hMSH2, two DNA mismatch repair proteins which account for most HNPCC cancers, are commercially available. This study sought to investigate the potential utility of these antibodies in determining the expression status of these proteins in paraffin-embedded, formalin-fixed tissue and to identify key technical protocol components associated with successful staining. A set of 20 colorectal carcinoma cases of known hMLH1 and hMSH2 mutation and expression status underwent immunoperoxidase staining at multiple institutions, each of which used its own technical protocol. Staining for hMSH2 was successful in most laboratories while staining for hMLH1 proved problematic in multiple labs. However, a significant minority of laboratories demonstrated excellent results including high discriminatory power with both monoclonal antibodies. These laboratories appropriately identified hMLH1 or hMSH2 inactivation with high sensitivity and specificity. The key protocol point associated with successful staining was an antigen retrieval step involving heat treatment and either EDTA or citrate buffer. This study demonstrates the potential utility of immunohistochemistry in detecting HNPCC probands and identifies key technical components for successful staining.
Abstract:
Pseudo-ternary phase diagrams of the polar lipids Quil A, cholesterol (Chol) and phosphatidylcholine (PC) in aqueous mixtures prepared by the lipid film hydration method (where a dried lipid film of phospholipid and cholesterol is hydrated by an aqueous solution of Quil A) were investigated in terms of the types of particulate structures formed therein. Negative staining transmission electron microscopy and polarized light microscopy were used to characterize the colloidal and coarse dispersed particles present in the systems. Pseudo-ternary phase diagrams were established for lipid mixtures hydrated in water and in Tris buffer (pH 7.4). The effect of equilibration time was also studied with respect to systems hydrated in water, where the samples were stored for 2 months at 4 °C. Depending on the mass ratio of Quil A, Chol and PC in the systems, various colloidal particles including ISCOM matrices, liposomes, ring-like micelles and worm-like micelles were observed. Other colloidal particles were also observed as minor structures in the presence of these predominant colloids, including helices, layered structures and lamellae (hexagonal pattern of ring-like micelles). In terms of the conditions which appeared to promote the formation of ISCOM matrices, the area of the phase diagrams associated with systems containing these structures increased in the order: hydrated in water/short equilibration period < hydrated in buffer/short equilibration period < hydrated in water/prolonged equilibration period. ISCOM matrices appeared to form over time from samples which initially contained a high concentration of ring-like micelles, suggesting that these colloidal structures may be precursors to ISCOM matrix formation. Helices were also frequently found as a minor colloidal structure in samples containing ISCOM matrices. Equilibration time and the presence of buffer salts also promoted the formation of liposomes in systems not containing Quil A.
These parameters, however, did not appear to significantly affect the occurrence and predominance of other structures present in the pseudo-binary systems containing Quil A. Pseudo-ternary phase diagrams of PC, Chol and Quil A are important to identify combinations which will produce different colloidal structures, particularly ISCOM matrices, by the method of lipid film hydration. Colloidal structures comprising these three components are readily prepared by hydration of dried lipid films and may have application in vaccine delivery, where the functionality of ISCOMs has clearly been demonstrated. (C) 2003 Elsevier B.V. All rights reserved.
Abstract:
BACKGROUND: Intervention time series analysis (ITSA) is an important method for analysing the effect of sudden events on time series data. ITSA methods are quasi-experimental in nature and the validity of modelling with these methods depends upon assumptions about the timing of the intervention and the response of the process to it. METHOD: This paper describes how to apply ITSA to analyse the impact of unplanned events on time series when the timing of the event is not accurately known, and so the problems of ITSA methods are magnified by uncertainty in the point of onset of the unplanned intervention. RESULTS: The methods are illustrated using the example of the Australian Heroin Shortage of 2001, which provided an opportunity to study the health and social consequences of an abrupt change in heroin availability in an environment of widespread harm reduction measures. CONCLUSION: Application of these methods yields valuable insights into the consequences of unplanned and poorly identified interventions while minimising the risk of spurious results.
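The core difficulty the abstract describes — fitting an intervention model when the onset time is uncertain — can be sketched numerically. The following is a minimal illustration, not the paper's actual model: it profiles an ordinary least-squares step-intervention regression over a grid of candidate onset times and keeps the best-fitting one. All data, the onset, and the effect size are synthetic assumptions.

```python
import numpy as np

def fit_step_model(y, t0):
    """OLS fit of intercept + linear trend + step intervention at candidate
    onset t0. Returns (sum of squared residuals, step-effect estimate)."""
    n = len(y)
    t = np.arange(n)
    step = (t >= t0).astype(float)              # 0 before onset, 1 after
    X = np.column_stack([np.ones(n), t, step])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid, beta[2]

def locate_onset(y, candidates):
    """Profile over candidate onset times; best fit = smallest SSE."""
    fits = {t0: fit_step_model(y, t0) for t0 in candidates}
    best = min(fits, key=lambda t0: fits[t0][0])
    return best, fits[best][1]

# Synthetic series: level 10, slight trend, abrupt drop of 5 at t = 70.
rng = np.random.default_rng(0)
n, true_onset, effect = 120, 70, -5.0
y = 10 + 0.02 * np.arange(n) + rng.normal(0.0, 1.0, n)
y[true_onset:] += effect
onset, est = locate_onset(y, range(40, 100))
```

Profiling the onset in this way is one simple response to the timing uncertainty the abstract highlights; the located onset and effect estimate can then be reported together with their sensitivity to the candidate window.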
Abstract:
The Equilibrium Flux Method [1] is a kinetic-theory-based finite-volume method for calculating the flow of a compressible ideal gas. It is shown here that, in effect, the method solves the Euler equations with added pseudo-dissipative terms and that it is a natural upwinding scheme. The method can be easily modified so that the flow of a chemically reacting gas mixture can be calculated. Results from the method for a one-dimensional non-equilibrium reacting flow are shown to agree well with a conventional continuum solution. Results are also presented for the calculation of a plane two-dimensional flow, at hypersonic speed, of a dissociating gas around a blunt-nosed body.
Abstract:
The level set method has been implemented in a computational volcanology context. New techniques are presented to solve the advection equation and the reinitialisation equation. These techniques are based upon an algorithm developed in the finite difference context, but are modified to take advantage of the robustness of the finite element method. The resulting algorithm is tested on a well-documented Rayleigh–Taylor instability benchmark [19], and on an axisymmetric problem where the analytical solution is known. Finally, the algorithm is applied to a basic study of lava dome growth.
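The two building blocks the abstract names — advecting the level-set field and reinitialising it to a signed-distance function — can be sketched in one dimension. This is an illustrative stand-in, not the paper's finite element scheme: it uses a first-order upwind finite difference for the advection step and a direct signed-distance reconstruction (rather than the PDE-based reinitialisation used in practice), with a synthetic interface and velocity.

```python
import numpy as np

def advect(phi, u, dx, dt):
    """First-order upwind update for phi_t + u * phi_x = 0, assuming u > 0."""
    dphi = np.empty_like(phi)
    dphi[1:] = (phi[1:] - phi[:-1]) / dx    # backward difference
    dphi[0] = dphi[1]                       # one-sided at the inflow boundary
    return phi - dt * u * dphi

def reinitialise(phi, x):
    """Rebuild a signed-distance function from the zero crossings of phi.
    (Direct reconstruction; PDE-based reinitialisation is used in practice.)"""
    s = np.where(np.diff(np.sign(phi)))[0]  # cells containing the interface
    # Linear interpolation of the crossing location in each flagged cell.
    xc = x[s] - phi[s] * (x[s + 1] - x[s]) / (phi[s + 1] - phi[s])
    dist = np.min(np.abs(x[:, None] - xc[None, :]), axis=1)
    return np.sign(phi) * dist

x = np.linspace(0.0, 1.0, 201)
dx = x[1] - x[0]
phi = x - 0.3                               # interface initially at x = 0.3
u, dt, steps = 1.0, 0.4 * dx, 100           # CFL number 0.4
for _ in range(steps):
    phi = advect(phi, u, dx, dt)
phi = reinitialise(phi, x)                  # interface has moved to x = 0.5
```

Reinitialisation matters because repeated advection distorts the level-set field away from a signed distance; restoring the unit-gradient property keeps the interface location well conditioned, which is the role it plays in the paper's algorithm as well.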
Abstract:
Inaccurate species identification confounds insect ecological studies. Examining aspects of Trichogramma ecology pertinent to the novel insect resistance management strategy for future transgenic cotton, Gossypium hirsutum L., production in the Ord River Irrigation Area (ORIA) of Western Australia required accurate differentiation between morphologically similar Trichogramma species. Established molecular diagnostic methods for Trichogramma identification use species-specific sequence difference in the internal transcribed spacer (ITS)-2 chromosomal region; yet, difficulties arise discerning polymerase chain reaction (PCR) fragments of similar base pair length by gel electrophoresis. This necessitates the restriction enzyme digestion of PCR-amplified ITS-2 fragments to readily differentiate Trichogramma australicum Girault and Trichogramma pretiosum Riley. To overcome the time and expense associated with a two-step diagnostic procedure, we developed a “one-step” multiplex PCR technique using species-specific primers designed to the ITS-2 region. This approach allowed for a high-throughput analysis of samples as part of ongoing ecological studies examining Trichogramma biological control potential in the ORIA where these two species occur in sympatry.
Abstract:
A narrow absorption feature in an atomic or molecular gas (such as iodine or methane) is used as the frequency reference in many stabilized lasers. As part of the stabilization scheme, an optical frequency dither is applied to the laser. In optical heterodyne experiments, this dither is transferred to the RF beat signal, reducing the spectral power density and hence the signal-to-noise ratio over that in the absence of dither. We removed the dither by mixing the raw beat signal with a dithered local oscillator signal. When the dither waveform is matched to that of the reference laser the output signal from the mixer is rendered dither free. Application of this method to a Winters iodine-stabilized helium-neon laser reduced the bandwidth of the beat signal from 6 MHz to 390 kHz, thereby lowering the detection threshold from 5 pW of laser power to 3 pW. In addition, a simple signal detection model is developed which predicts similar threshold reductions.
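The cancellation mechanism can be checked with a small numerical sketch, not drawn from the paper: a frequency-dithered beat note is multiplied by a local oscillator carrying the same dither waveform, so the phase dither cancels in the difference-frequency term. All frequencies, the modulation index, and the `peak_fraction` diagnostic below are illustrative assumptions.

```python
import numpy as np

fs = 1.0e6                       # sample rate (Hz)
t = np.arange(2**18) / fs
f_beat, f_lo = 120e3, 100e3      # beat-note and local-oscillator carriers
f_dither, beta = 1.0e3, 40.0     # dither rate and FM modulation index

dither = beta * np.sin(2 * np.pi * f_dither * t)
beat = np.cos(2 * np.pi * f_beat * t + dither)   # dither-broadened beat
lo   = np.cos(2 * np.pi * f_lo * t + dither)     # matched dithered LO

# Difference-frequency term at 20 kHz is dither-free; the phase dither
# survives only in the (doubled) sum-frequency term at 220 kHz.
mixed = beat * lo

def peak_fraction(x, f, bw):
    """Fraction of total signal power within +-bw of frequency f."""
    X = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return X[np.abs(freqs - f) < bw].sum() / X.sum()
```

With these parameters, `peak_fraction(beat, f_beat, 2e3)` is small because the dither spreads the beat power over roughly the Carson bandwidth, while `peak_fraction(mixed, f_beat - f_lo, 2e3)` approaches one half: the clean difference term recovers its full share of the power, which is the spectral-density gain the abstract describes.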
Abstract:
Clifford Geertz was best known for his pioneering excursions into symbolic or interpretive anthropology, especially in relation to Indonesia. Less well recognised are his stimulating explorations of the modern economic history of Indonesia. His thinking on the interplay of economics and culture was most fully and vigorously expounded in Agricultural Involution. That book deployed a succinctly packaged past in order to solve a pressing contemporary puzzle, Java's enduring rural poverty and apparent social immobility. Initially greeted with acclaim, later and ironically the book stimulated the deep and multi-layered research that in fact led to the eventual rejection of Geertz's central contentions. But the veracity or otherwise of Geertz's inventive characterisation of Indonesian economic development now seems irrelevant; what is profoundly important is the extraordinary stimulus he gave to a generation of scholars to explore Indonesia's modern economic history with a depth and intensity previously unimaginable.
Abstract:
In this review we demonstrate how the algebraic Bethe ansatz is used for the calculation of the energy spectra and form factors (operator matrix elements in the basis of Hamiltonian eigenstates) in exactly solvable quantum systems. As examples we apply the theory to several models of current interest in the study of Bose-Einstein condensates, which have been successfully created using ultracold dilute atomic gases. The first model we introduce describes Josephson tunnelling between two coupled Bose-Einstein condensates. It can be used not only for the study of tunnelling between condensates of atomic gases, but also for solid-state Josephson junctions and coupled Cooper pair boxes. The theory is also applicable to models of atomic-molecular Bose-Einstein condensates, with two examples given and analysed. Additionally, these same two models are relevant to studies in quantum optics. Finally, we discuss the model of Bardeen, Cooper and Schrieffer in this framework, which is appropriate for systems of ultracold fermionic atomic gases, as well as being applicable to the description of superconducting correlations in metallic grains with nanoscale dimensions. In applying all the above models to physical situations, the need for an exact analysis of small-scale systems is established due to large quantum fluctuations which render mean-field approaches inaccurate.
Abstract:
In this paper, we propose a fast adaptive importance sampling method for the efficient simulation of buffer overflow probabilities in queueing networks. The method comprises three stages. First, we estimate the minimum cross-entropy tilting parameter for a small buffer level; next, we use this as a starting value for the estimation of the optimal tilting parameter for the actual (large) buffer level. Finally, the tilting parameter just found is used to estimate the overflow probability of interest. We study various properties of the method in more detail for the M/M/1 queue and conjecture that similar properties also hold for quite general queueing networks. Numerical results support this conjecture and demonstrate the high efficiency of the proposed algorithm.
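The abstract's final stage — estimating a rare overflow probability under an exponentially tilted measure — can be illustrated for the M/M/1 queue. This sketch skips the two adaptive cross-entropy stages and simply uses the tilt known to be asymptotically optimal for this event (swapping the arrival and service rates); the rates, level, and sample size are illustrative assumptions.

```python
import numpy as np

def overflow_prob_is(lam, mu, level, n_paths, rng):
    """Importance-sampling estimate of the probability that a stable M/M/1
    queue reaches `level` customers before emptying, starting from 1.
    Paths are simulated under the tilted measure with arrival and service
    rates swapped, and reweighted by the likelihood ratio."""
    p  = lam / (lam + mu)        # original prob. of an up-step
    pt = mu / (lam + mu)         # tilted prob. of an up-step (rates swapped)
    est = np.zeros(n_paths)
    for i in range(n_paths):
        n, lr = 1, 1.0
        while 0 < n < level:
            if rng.random() < pt:            # up-step under tilted measure
                n += 1
                lr *= p / pt                 # likelihood-ratio correction
            else:
                n -= 1
                lr *= (1 - p) / (1 - pt)
        if n == level:
            est[i] = lr                      # paths that empty contribute 0
    return est.mean()

rng = np.random.default_rng(1)
lam, mu, level = 1.0, 2.0, 15
estimate = overflow_prob_is(lam, mu, level, 20_000, rng)
# Gambler's-ruin formula for the embedded random walk, for comparison.
exact = (1 - mu / lam) / (1 - (mu / lam) ** level)
```

For this event the swapped-rate tilt makes the likelihood ratio of every overflowing path identical, so the estimator has very low variance; a crude Monte Carlo run of the same size would rarely see even one overflow at this level (the exact probability is about 3e-5).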
Abstract:
A modified formula for the integral transform of a nonlinear function is proposed for a class of nonlinear boundary value problems. The technique presented in this paper results in analytical solutions. Iterations and initial guess, which are needed in other techniques, are not required in this novel technique. The analytical solutions are found to agree surprisingly well with the numerically exact solutions for two examples of power law reaction and Langmuir-Hinshelwood reaction in a catalyst pellet.
Abstract:
The Direct Simulation Monte Carlo (DSMC) method is used to simulate the flow of rarefied gases. In the Macroscopic Chemistry Method (MCM) for DSMC, chemical reaction rates calculated from local macroscopic flow properties are enforced in each cell. Unlike the standard total collision energy (TCE) chemistry model for DSMC, the new method is not restricted to an Arrhenius form of the reaction rate coefficient, nor is it restricted to a collision cross-section which yields a simple power-law viscosity. For reaction rates of interest in aerospace applications, chemically reacting collisions are generally infrequent events and, as such, local equilibrium conditions are established before a significant number of chemical reactions occur. Hence, the reaction rates which have been used in MCM have been calculated from the reaction rate data which are expected to be correct only for conditions of thermal equilibrium. Here we consider artificially high reaction rates so that the fraction of reacting collisions is not small and propose a simple method of estimating the rates of chemical reactions which can be used in the Macroscopic Chemistry Method in both equilibrium and non-equilibrium conditions. Two tests are presented: (1) The dissociation rates under conditions of thermal non-equilibrium are determined from a zero-dimensional Monte Carlo sampling procedure which simulates ‘intra-modal’ non-equilibrium; that is, equilibrium distributions in each of the translational, rotational and vibrational modes but with different temperatures for each mode; (2) The 2-D hypersonic flow of molecular oxygen over a vertical plate at Mach 30 is calculated. In both cases the new method produces results in close agreement with those given by the standard TCE model in the same highly non-equilibrium conditions.
We conclude that the general method of estimating the non-equilibrium reaction rate is a simple means by which information contained within non-equilibrium distribution functions predicted by the DSMC method can be included in the Macroscopic Chemistry Method.
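The macroscopic-enforcement idea can be reduced to a schematic sketch, not the paper's implementation: sample a translational temperature from the particle velocities in a cell, evaluate a rate coefficient of any functional form (not necessarily Arrhenius), and realise the implied number of dissociations in that cell for the current timestep. The molecule mass, number density, timestep, and the constant rate coefficient below are illustrative assumptions.

```python
import numpy as np

K_B = 1.380649e-23   # Boltzmann constant (J/K)

def cell_temperature(vel, mass):
    """Translational temperature from the particle velocities in one cell."""
    c = vel - vel.mean(axis=0)                  # thermal (peculiar) velocities
    return mass * np.mean(np.sum(c * c, axis=1)) / (3 * K_B)

def dissociations_in_cell(n_particles, number_density, temp, k, dt, rng):
    """Number of simulator particles to dissociate in this cell this step.
    k(T) is a rate coefficient (m^3/s) of arbitrary functional form."""
    p = min(1.0, k(temp) * number_density * dt)  # per-molecule probability
    return rng.binomial(n_particles, p)

rng = np.random.default_rng(2)
mass = 5.31e-26                                  # O2 molecule (kg)
vel = rng.normal(0.0, 500.0, size=(2000, 3))     # one cell's velocities (m/s)
T = cell_temperature(vel, mass)                  # ~960 K for this spread
# Artificially high constant rate coefficient, echoing the abstract's test.
n_react = dissociations_in_cell(len(vel), 1.0e21, T, lambda T: 1.0e-16,
                                1.0e-7, rng)
```

The point of the enforcement step is that the rate coefficient is evaluated from macroscopic cell properties rather than per-collision energies, which is what frees the method from the Arrhenius and power-law-viscosity restrictions of the TCE model.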
Abstract:
The restructuring of the power industry has brought fundamental changes to both power system operation and planning. This paper presents a new planning method that uses a multi-objective optimization (MOOP) technique, together with human knowledge, to expand the transmission network in open-access schemes. The method starts with a candidate pool of feasible expansion plans. Subsequent selection of the best candidates is carried out through a MOOP approach, in which multiple objectives are tackled simultaneously, aiming at integrating market operation and planning as one unified process in the context of a deregulated system. Human knowledge is applied in both stages to ensure that selection reflects practical engineering and management concerns. The expansion plan from MOOP is assessed against reliability criteria before it is finalized. The proposed method has been tested on the IEEE 14-bus system, and relevant analyses and discussions are presented.
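The candidate-selection stage of such a method can be illustrated with plain non-dominated (Pareto) filtering. This is a generic sketch, not the paper's objective set: the two attributes used here, capital cost and expected energy not served, and all plan names and numbers are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Plan:
    name: str
    cost: float   # capital cost of the expansion plan (M$), lower is better
    eens: float   # expected energy not served (MWh/yr), lower is better

def dominates(a: Plan, b: Plan) -> bool:
    """a dominates b if it is no worse in both objectives and better in one."""
    return (a.cost <= b.cost and a.eens <= b.eens
            and (a.cost < b.cost or a.eens < b.eens))

def pareto_front(plans):
    """Keep the plans not dominated by any other candidate."""
    return [p for p in plans
            if not any(dominates(q, p) for q in plans if q is not p)]

candidates = [
    Plan("A", cost=120, eens=35),
    Plan("B", cost=150, eens=20),
    Plan("C", cost=130, eens=40),   # dominated by A (costlier and less reliable)
    Plan("D", cost=200, eens=18),
]
front = pareto_front(candidates)    # A, B and D survive
```

Tackling the objectives simultaneously, as the abstract describes, means the planner is handed this whole front rather than a single weighted-sum winner; human knowledge and reliability assessment then discriminate among the surviving trade-offs.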
Abstract:
Modeling volcanic phenomena is complicated by free surfaces that often support large rheological gradients. Analytical solutions and analogue models provide explanations for fundamental characteristics of lava flows, but more sophisticated models are needed, incorporating improved physics and rheology, to capture realistic events. To advance our understanding of the flow dynamics of highly viscous lava in Peléan lava dome formation, axisymmetric finite element method (FEM) models of generic endogenous dome growth have been developed. We use a novel technique, the level-set method, which tracks a moving interface while leaving the mesh unaltered. The model equations are formulated in an Eulerian framework. In this paper we test the quality of this technique in our numerical scheme by considering existing analytical and experimental models of lava dome growth which assume a constant Newtonian viscosity. We then compare our model against analytical solutions for real lava domes extruded on Soufrière, St. Vincent, W.I. in 1979 and Mount St. Helens, USA in October 1980, using an effective viscosity. The level-set method is found to be computationally light and robust enough to model the free surface of a growing lava dome. Moreover, modeling the extruded lava with a constant pressure head naturally produces a drop in extrusion rate with increasing dome height, which explains lava dome growth observables more readily than a fixed extrusion rate. From the modeling point of view, the level-set method will ultimately provide an opportunity to capture more of the physics while benefiting from the numerical robustness of regular grids.