911 results for QUANTITATIVE PROTEOMICS
Abstract:
Metabolic labeling techniques have recently become popular tools for the quantitative profiling of proteomes. Classical stable isotope labeling by amino acids in cell culture (SILAC) uses pairs of heavy/light isotopic forms of amino acids to introduce predictable mass differences in the protein samples to be compared. After proteolysis, pairs of cognate precursor peptides can be correlated, and their intensities can be used for mass spectrometry-based relative protein quantification. We present an alternative SILAC approach in which two cell cultures are grown in media containing isobaric forms of amino acids, labeled either with 13C on the carbonyl (C-1) carbon or with 15N on the backbone nitrogen. Labeled peptides from both samples have the same nominal mass and nearly identical MS/MS spectra but, upon fragmentation, generate distinct immonium ions separated by 1 amu. When labeled protein samples are mixed, the intensities of these immonium ions can be used for the relative quantification of the parent proteins. We validated the labeling of cellular proteins with valine, isoleucine, and leucine, with coverage of 97% of all tryptic peptides. We improved the sensitivity for the detection of the quantification ions on a pulsing instrument by using a specific fast scan event. The analysis of a protein mixture with a known heavy/light ratio showed reliable quantification. Finally, the application of the technique to the analysis of two melanoma cell lines yielded quantitative data consistent with those obtained by a classical two-dimensional DIGE analysis of the same samples. Our method combines the features of the SILAC technique with the advantages of isobaric labeling schemes such as iTRAQ. We discuss advantages and disadvantages of isobaric SILAC with immonium ion splitting as well as possible ways to improve it.
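As a rough, hypothetical illustration of the quantification step described in this abstract (not the authors' actual pipeline), the sketch below derives a protein-level ratio from the intensities of the two 1-amu-separated immonium ion variants, aggregating per-peptide log ratios with a median. The peptide sequences and intensity values are invented placeholders.

```python
import math
from statistics import median

# Hypothetical per-peptide intensities of the two immonium ion variants
# (separated by 1 amu) extracted from MS/MS spectra of one protein.
# Keys are peptide sequences; values are (channel A, channel B) ion intensities,
# e.g. the 13C- and 15N-labeled samples.
peptide_ion_intensities = {
    "LVNELTEFAK":     (5.2e5, 2.7e5),
    "VPQVSTPTLVEVSR": (8.9e4, 4.3e4),
    "YLYEIAR":        (1.6e5, 8.8e4),
}

def protein_ratio(ion_pairs):
    """Relative quantification: median of per-peptide log2 ratios,
    reported back on the linear scale (channel A / channel B)."""
    log_ratios = [math.log2(a / b) for a, b in ion_pairs.values() if a > 0 and b > 0]
    return 2 ** median(log_ratios)

print(f"estimated A/B protein ratio: {protein_ratio(peptide_ion_intensities):.2f}")
```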
Abstract:
BACKGROUND: The value of adenovirus plasma DNA detection as an indicator for adenovirus disease is unknown in the context of T cell-replete hematopoietic cell transplantation, of which adenovirus disease is an uncommon but serious complication. METHODS: Three groups comprising a total of 62 T cell-replete hematopoietic cell transplant recipients were selected and tested for adenovirus in plasma by polymerase chain reaction. RESULTS: Adenovirus was detected in 21 (87.5%) of 24 patients with proven adenovirus disease (group 1), in 4 (21%) of 19 patients who shed adenovirus (group 2), and in 1 (10.5%) of 19 uninfected control patients (group 3). The maximum viral load was significantly higher in group 1 (median maximum viral load, 6.3 × 10⁶ copies/mL; range, 0 to 1.0 × 10⁹ copies/mL) than in group 2 (median maximum viral load, 0 copies/mL; range, 0 to 1.7 × 10⁸ copies/mL; P<.001) and in group 3 (median maximum viral load, 0 copies/mL; range, 0 to 40 copies/mL; P<.001). All patients in group 2 who developed adenoviremia had symptoms compatible with adenovirus disease (i.e., possible disease). A minimal plasma viral load of 10³ copies/mL was detected in all patients with proven or possible disease. Adenoviremia was detectable at a median of 19.5 days (range, 8-48 days) and 24 days (range, 9-41 days) before death for patients with proven and possible adenovirus disease, respectively. CONCLUSION: Sustained or high-level adenoviremia appears to be a specific and sensitive indicator of adenovirus disease after T cell-replete hematopoietic cell transplantation. In a context of low prevalence of adenovirus disease, polymerase chain reaction testing of plasma specimens to detect virus might be a valuable tool to identify and treat patients at risk for invasive viral disease.
Abstract:
Purpose: To evaluate the sensitivity of the perfusion parameters derived from Intravoxel Incoherent Motion (IVIM) MR imaging to hypercapnia-induced vasodilatation and hyperoxygenation-induced vasoconstriction in the human brain. Materials and Methods: This study was approved by the local ethics committee and informed consent was obtained from all participants. Images were acquired with a standard pulsed-gradient spin-echo sequence (Stejskal-Tanner) in a clinical 3-T system by using 16 b values ranging from 0 to 900 sec/mm². Seven healthy volunteers were examined while they inhaled four different gas mixtures known to modify brain perfusion (pure oxygen, ambient air, 5% CO₂ in ambient air, and 8% CO₂ in ambient air). Diffusion coefficient (D), pseudodiffusion coefficient (D*), perfusion fraction (f), and blood flow-related parameter (fD*) maps were calculated on the basis of the IVIM biexponential model, and the parametric maps were compared among the four different gas mixtures. Paired, one-tailed Student t tests were performed to assess for statistically significant differences. Results: Signal decay curves were biexponential in the brain parenchyma of all volunteers. When compared with inhaled ambient air, the IVIM perfusion parameters D*, f, and fD* increased as the concentration of inhaled CO₂ was increased (for the entire brain, P = .01 for f, D*, and fD* for CO₂ 5%; P = .02 for f, and P = .01 for D* and fD* for CO₂ 8%), and a trend toward a reduction was observed when participants inhaled pure oxygen (although P > .05). D remained globally stable. Conclusion: The IVIM perfusion parameters were reactive to hyperoxygenation-induced vasoconstriction and hypercapnia-induced vasodilatation. Accordingly, IVIM imaging was found to be a valid and promising method to quantify brain perfusion in humans. © RSNA, 2012.
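For readers unfamiliar with the model used here, one common form of the biexponential IVIM signal equation is S(b)/S0 = f·exp(-b·D*) + (1 - f)·exp(-b·D). The sketch below fits that form to synthetic data with SciPy; the 16 b values mirror the 0-900 sec/mm² range in the study, but the signal values, noise level and starting guesses are illustrative assumptions, not the authors' data or code.

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, f, D_star, D):
    """Biexponential IVIM model: perfusion compartment (f, D*) + diffusion (1-f, D)."""
    return f * np.exp(-b * D_star) + (1.0 - f) * np.exp(-b * D)

# 16 b-values (sec/mm^2) spanning the 0-900 range used in the study.
b = np.array([0, 10, 20, 40, 80, 110, 140, 170, 200,
              300, 400, 500, 600, 700, 800, 900], float)

# Hypothetical normalized signal for a voxel with f = 0.10, D* = 0.010, D = 0.0008 mm^2/s.
rng = np.random.default_rng(0)
signal = ivim(b, 0.10, 0.010, 0.0008) + rng.normal(0, 0.005, b.size)

# Fit with loose physical bounds; p0 is an illustrative starting guess.
popt, _ = curve_fit(ivim, b, signal,
                    p0=[0.1, 0.01, 0.001],
                    bounds=([0, 1e-4, 1e-5], [0.5, 0.1, 3e-3]))
f, D_star, D = popt
print(f"f = {f:.3f}, D* = {D_star:.4f} mm^2/s, D = {D:.5f} mm^2/s, fD* = {f * D_star:.5f}")
```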
Abstract:
Meta-analysis of prospective studies shows that quantitative ultrasound of the heel using validated devices predicts risk of different types of fracture with similar performance across different devices and in elderly men and women. These predictions are independent of the risk estimates from hip DXA measures. Introduction: Clinical utilisation of heel quantitative ultrasound (QUS) depends on its power to predict clinical fractures. This is particularly important in settings that have no access to DXA-derived bone density measurements. We aimed to assess the predictive power of heel QUS for fractures using a meta-analysis approach. Methods: We conducted an inverse variance random effects meta-analysis of prospective studies with heel QUS measures at baseline and fracture outcomes in their follow-up. Relative risks (RR) per standard deviation (SD) of different QUS parameters (broadband ultrasound attenuation [BUA], speed of sound [SOS], stiffness index [SI], and quantitative ultrasound index [QUI]) for various fracture outcomes (hip, vertebral, any clinical, any osteoporotic and major osteoporotic fractures) were reported based on study questions. Results: Twenty-one studies, comprising 55,164 women and 13,742 men, were included in the meta-analysis with a total follow-up of 279,124 person-years. All four QUS parameters were associated with risk of different fracture outcomes. For instance, the RR of hip fracture per 1 SD decrease was 1.69 (95% CI 1.43-2.00) for BUA, 1.96 (95% CI 1.64-2.34) for SOS, 2.26 (95% CI 1.71-2.99) for SI and 1.99 (95% CI 1.49-2.67) for QUI. There was marked heterogeneity among studies on hip and any clinical fractures but no evidence of publication bias amongst them. Validated devices from different manufacturers predicted fracture risks with similar performance (meta-regression p values > 0.05 for difference of devices). QUS measures predicted fracture with similar performance in men and women. Meta-analysis of studies with QUS measures adjusted for hip BMD showed a significant and independent association with fracture risk (RR/SD for BUA = 1.34 [95% CI 1.22-1.49]). Conclusions: This study confirms that heel QUS, using validated devices, predicts risk of different fracture outcomes in elderly men and women. Further research is needed for more widespread utilisation of heel QUS in clinical settings across the world.
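To make the pooling arithmetic concrete, here is a minimal DerSimonian-Laird inverse-variance random-effects sketch for per-SD relative risks, the kind of calculation this abstract describes. It is a generic illustration, not the authors' code, and the four study-level RRs and confidence intervals are invented placeholders rather than values from the 21 included studies.

```python
import math

# Hypothetical study-level estimates: (RR per SD decrease, lower 95% CI, upper 95% CI).
studies = [(1.55, 1.20, 2.00), (1.80, 1.35, 2.40), (1.62, 1.10, 2.39), (2.05, 1.50, 2.80)]

# Work on the log scale; the SE is recovered from the CI width (CI = RR * exp(+/-1.96*SE)).
y  = [math.log(rr) for rr, lo, hi in studies]
se = [(math.log(hi) - math.log(lo)) / (2 * 1.96) for rr, lo, hi in studies]
w  = [1 / s ** 2 for s in se]                      # fixed-effect (inverse-variance) weights

# DerSimonian-Laird estimate of the between-study variance tau^2.
y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
Q    = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))
c    = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - (len(studies) - 1)) / c)

# Random-effects weights and the pooled RR per SD with its 95% CI.
w_re  = [1 / (s ** 2 + tau2) for s in se]
y_re  = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))
print(f"pooled RR/SD = {math.exp(y_re):.2f} "
      f"(95% CI {math.exp(y_re - 1.96 * se_re):.2f}-{math.exp(y_re + 1.96 * se_re):.2f}), "
      f"tau^2 = {tau2:.3f}")
```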
Abstract:
Phagocytosis, whether of food particles in protozoa or bacteria and cell remnants in the metazoan immune system, is a conserved process. The particles are taken up into phagosomes, which then undergo complex remodeling of their components, called maturation. By using two-dimensional gel electrophoresis and mass spectrometry combined with genomic data, we identified 179 phagosomal proteins in the amoeba Dictyostelium, including components of signal transduction, membrane traffic, and the cytoskeleton. By carrying out this proteomics analysis over the course of maturation, we obtained time profiles for 1,388 spots and thus generated a dynamic record of phagosomal protein composition. Clustering of the time profiles revealed five clusters and 24 functional groups that were mapped onto a flow chart of maturation. Two heterotrimeric G protein subunits, Galpha4 and Gbeta, appeared at the earliest times. We showed that mutations in the genes encoding these two proteins produce a phagocytic uptake defect in Dictyostelium. This analysis of phagosome protein dynamics provides a reference point for future genetic and functional investigations.
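As an illustrative sketch only (the abstract does not specify which clustering algorithm was used), the snippet below shows how 1,388 spot time profiles could be z-scored across maturation time points and grouped into five clusters, matching the number of clusters reported; the profile matrix here is random placeholder data, not the Dictyostelium dataset.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Hypothetical intensity profiles: rows = 2D-gel spots, columns = maturation time points.
profiles = rng.lognormal(mean=10, sigma=1, size=(1388, 6))

# Normalize each profile (z-score across time) so clustering reflects shape, not abundance.
z = (profiles - profiles.mean(axis=1, keepdims=True)) / profiles.std(axis=1, keepdims=True)

# Group the time profiles into five clusters, as in the abstract.
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(z)

for k in range(5):
    print(f"cluster {k}: {np.sum(labels == k)} spots")
```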
Abstract:
The epithelial amiloride-sensitive sodium channel (ENaC) controls transepithelial Na+ movement in Na+-transporting epithelia and is associated with Liddle syndrome, an autosomal dominant form of salt-sensitive hypertension. Detailed analysis of ENaC channel properties and the functional consequences of mutations causing Liddle syndrome has been, so far, limited by lack of a method allowing specific and quantitative detection of cell-surface-expressed ENaC. We have developed a quantitative assay based on the binding of 125I-labeled M2 anti-FLAG monoclonal antibody (M2Ab*) directed against a FLAG reporter epitope introduced in the extracellular loop of each of the alpha, beta, and gamma ENaC subunits. Insertion of the FLAG epitope into ENaC sequences did not change its functional and pharmacological properties. The binding specificity and affinity (Kd = 3 nM) allowed us to correlate in individual Xenopus oocytes the macroscopic amiloride-sensitive sodium current (INa) with the number of ENaC wild-type and mutant subunits expressed at the cell surface. These experiments demonstrate that: (i) only heteromultimeric channels made of alpha, beta, and gamma ENaC subunits are maximally and efficiently expressed at the cell surface; (ii) the overall ENaC open probability is one order of magnitude lower than previously observed in single-channel recordings; (iii) the mutation causing Liddle syndrome (beta R564stop) enhances channel activity by two mechanisms, i.e., by increasing ENaC cell surface expression and by changing channel open probability. This quantitative approach provides new insights on the molecular mechanisms underlying one form of salt-sensitive hypertension.
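For context on point (ii), the macroscopic current, the surface channel count from the binding assay, and the unitary current are linked by the standard relation INa = N × Po × i, so an overall open probability can be estimated as Po = INa / (N × i). The numbers in the sketch below are purely illustrative assumptions, not values from the study.

```python
# Illustrative values only (assumed, not the paper's data): estimate the ENaC open
# probability from the macroscopic amiloride-sensitive current and the surface channel count.
N    = 1.0e9     # channels at the oocyte surface, e.g. from specific 125I-antibody binding
i    = 0.4e-12   # assumed unitary Na+ current per open channel (A)
I_Na = 8.0e-6    # assumed macroscopic amiloride-sensitive current (A)

# INa = N * Po * i  =>  Po = INa / (N * i)
P_o = I_Na / (N * i)
print(f"estimated overall open probability Po = {P_o:.3f}")
```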
Abstract:
Elucidating the evolution of Phlebotominae is important not only to revise their taxonomy, but also to help understand the origin of the genus Leishmania and its relationship with humans. Our study is a phenetic portrayal of this history based on the genetic relationships among some New World and Old World taxa. We used both multilocus enzyme electrophoresis and morphometry on 24 male specimens of the Old World genus Phlebotomus (with three of its subgenera: Phlebotomus, Spelaeophlebotomus and Australophlebotomus), and on 67 male specimens of the three New World genera, Warileya, Brumptomyia and Lutzomyia (with three subgenera of Lutzomyia: Lutzomyia, Oligodontomyia and Psychodopygus). Phenetic trees derived from both techniques were similar, but disclosed relationships that disagree with the present classification of sand flies. The need for a true evolutionary approach is stressed.
Abstract:
Morphological variation among geographic populations of the New World sand fly Lutzomyia quinquefer (Diptera, Phlebotominae) was analyzed, and patterns were detected that are probably associated with species emergence. This was achieved by examining the relationships of the size and shape components of morphological attributes, and their correlation with geographic parameters. Quantitative and qualitative morphological characters are described, showing in both sexes differences among local populations from four Departments of Bolivia. Four arguments are then developed to reject the hypothesis of environment as the unique source of morphological variation: (1) the persistence of differences after removing the allometric consequences of size variation, (2) the association of local metric properties with meristic and qualitative attributes, rather than with altitude, (3) the positive and significant correlation between metric and geographic distances, and (4) the absence of a significant correlation between altitude and the general size of the insects.
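Argument (3), the correlation between metric and geographic distance matrices, is the kind of comparison typically assessed with a Mantel-type permutation test. The sketch below is a generic illustration of such a test on made-up 4 × 4 distance matrices, not the Bolivian data or the analysis actually used in the study.

```python
import numpy as np

def mantel(d1, d2, n_perm=999, seed=0):
    """Correlation between two distance matrices with a one-sided permutation p-value."""
    iu = np.triu_indices_from(d1, k=1)          # upper-triangle pairwise distances
    r_obs = np.corrcoef(d1[iu], d2[iu])[0, 1]
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(d1.shape[0])        # permute population labels of one matrix
        if np.corrcoef(d1[p][:, p][iu], d2[iu])[0, 1] >= r_obs:
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)

# Hypothetical pairwise morphometric and geographic distances among four populations.
metric = np.array([[0.0, 1.2, 2.5, 3.1],
                   [1.2, 0.0, 1.9, 2.4],
                   [2.5, 1.9, 0.0, 1.1],
                   [3.1, 2.4, 1.1, 0.0]])
geo = np.array([[0.0, 100, 240, 300],
                [100, 0.0, 180, 230],
                [240, 180, 0.0,  90],
                [300, 230,  90, 0.0]])

r, p = mantel(metric, geo)
print(f"Mantel r = {r:.2f}, one-sided p = {p:.3f}")
```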
Abstract:
Independent research jointly commissioned by the Department of Health, Social Services and Public Safety (DHSSPS) and the HSC R&D Division.
Abstract:
This article describes the composition of fingermark residue as a complex system with numerous compounds coming from different sources and evolving over time from the initial composition (corresponding to the composition right after deposition) to the aged composition (corresponding to the evolution of the initial composition over time). This complex system additionally varies with numerous influence factors grouped into five classes: the donor characteristics, the deposition conditions, the substrate nature, the environmental conditions and the applied enhancement techniques. The initial and aged compositions as well as the influence factors are thus considered in this article to provide a qualitative and quantitative review of all compounds identified in fingermark residue to date. The analytical techniques used to obtain these data are also enumerated. This review highlights the fact that, despite the numerous analytical processes that have already been proposed and tested to elucidate fingermark composition, advanced knowledge is still lacking. There is thus a real need for future research on the composition of fingermark residue, focusing particularly on quantitative measurements, aging kinetics and the effects of influence factors. The results of such research are particularly important for advances in the development of fingermark enhancement and dating techniques.
Abstract:
With the availability of new generation sequencing technologies, bacterial genome projects have undergone a major boost. Still, chromosome completion requires a costly and time-consuming gap-closure stage, especially when the genome contains highly repetitive elements. However, incomplete genome data may be sufficiently informative to derive the pursued information. For emerging pathogens, i.e. newly identified pathogens, withholding genome data during the gap-closure stage is clearly medically counterproductive. We thus investigated the feasibility of a dirty genome approach, i.e. the release of unfinished genome sequences to develop serological diagnostic tools. We showed that almost the whole genome sequence of the emerging pathogen Parachlamydia acanthamoebae was retrieved even with relatively short reads from the Genome Sequencer 20 and Solexa platforms. The bacterial proteome was analyzed to select immunogenic proteins, which were then expressed and used to develop the first steps of an ELISA. This work constitutes a proof of principle for the dirty genome approach, i.e. the use of unfinished genome sequences of pathogenic bacteria, coupled with proteomics, to rapidly identify new immunogenic proteins useful for developing specific diagnostic tests such as ELISA, immunohistochemistry and direct antigen detection. Although applied here to an emerging pathogen, this combined dirty genome sequencing/proteomic approach may be used for any pathogen for which better diagnostics are needed. These genome sequences may also be very useful for developing DNA-based diagnostic tests. All these diagnostic tools will allow further evaluation of the pathogenic potential of this obligate intracellular bacterium.
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field. 
The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov Chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems in order to allow for a more accurate and realistic quantification of the uncertainty associated with the thus inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigated two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach was proven to be highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
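The gradual-deformation perturbation mentioned in the second part of this abstract can be pictured as proposing m_new = m_cur·cos(θ) + u·sin(θ), with u an independent prior realization and θ controlling the perturbation strength; for a standardized multi-Gaussian prior this proposal preserves the prior statistics (it is of the preconditioned Crank-Nicolson type), so the Metropolis acceptance reduces to the likelihood ratio. The sketch below is a generic illustration under that assumption, with a placeholder likelihood rather than the thesis's georadar forward model.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500                      # number of model parameters (e.g. standardized slowness cells)

def log_likelihood(m):
    # Placeholder data misfit; a real application would forward-model georadar travel times.
    return -0.5 * np.sum((m - 0.3) ** 2) / 0.5 ** 2

def gradual_deformation_step(m_cur, theta):
    """Propose a correlated update that preserves an N(0, I) prior:
    m_new = m_cur*cos(theta) + u*sin(theta), with u ~ N(0, I) independent."""
    u = rng.standard_normal(m_cur.shape)
    return m_cur * np.cos(theta) + u * np.sin(theta)

m = rng.standard_normal(n)   # start from a prior realization (standardized field)
ll = log_likelihood(m)
theta = 0.1                  # perturbation strength: smaller values give higher acceptance
accepted = 0

for it in range(5000):
    m_prop = gradual_deformation_step(m, theta)
    ll_prop = log_likelihood(m_prop)
    # Prior and proposal densities cancel for this prior-preserving (pCN-type) proposal,
    # leaving only the likelihood ratio in the Metropolis acceptance test.
    if np.log(rng.uniform()) < ll_prop - ll:
        m, ll = m_prop, ll_prop
        accepted += 1

print(f"acceptance rate: {accepted / 5000:.2f}")
```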
Abstract:
The tourism consumer’s purchase decision process is, to a great extent, conditioned by the image the tourist has of the different destinations that make up his or her choice set. In a highly competitive international tourist market, those responsible for destinations’ promotion and development policies seek differentiation strategies so that they may position the destinations in the market segments most suitable for their product, in order to improve their attractiveness to visitors and to increase or consolidate the economic benefits that tourism activity generates in their territory. To this end, the main objective we set ourselves in this paper is the empirical analysis of the factors that determine the image formation of Tarragona city as a cultural heritage destination. Without a doubt, UNESCO’s declaration of Tarragona’s artistic and monumental legacy as a World Heritage Site in 2000 constituted important international recognition of the quality of the cultural and patrimonial elements the city offers the visitors who choose it as a tourist destination. It also represents a strategic opportunity to boost the city’s tourism promotion and its consolidation as a unique destination given its cultural and patrimonial characteristics. Our work is based on the use of structured and unstructured techniques to identify the factors that determine Tarragona’s tourist destination image and that have a decisive influence on visitors’ choice of destination. In addition to ascertaining Tarragona’s global tourist image, we consider that the heterogeneity of its visitors calls for a more detailed study that enables us to segment visitor typologies. We consider that the information provided by these results may prove of great interest to those responsible for local tourism policy, both when designing products and when promoting the destination.