966 results for High Resolution Mass Spectrometry
Abstract:
Saffaj et al. recently criticized our method for monitoring carbon dioxide in human postmortem cardiac gas samples using Headspace-Gas Chromatography-Mass Spectrometry. According to the authors, their demonstration, based on the latest SFSTP guidelines (established after 2007 [1,2]) and intended for the validation of bioanalytical drug-monitoring methods, revealed potential errors. However, our validation approach was built on the SFSTP guidelines established before 2007 [3-6]. We justify the use of these guidelines by the post-mortem (rather than clinical) context of the study and the gaseous (rather than solid or liquid) state of the sample. Under these guidelines, our validation remains correct.
Abstract:
To perform a climatic analysis of the annual UV index (UVI) variations in Catalonia, Spain (northeast of the Iberian Peninsula), a new simple parameterization scheme is presented based on a multilayer radiative transfer model. The parameterization performs fast UVI calculations for a wide range of cloudless and snow-free situations and can be applied anywhere. The following parameters are considered: solar zenith angle, total ozone column, altitude, aerosol optical depth, and single-scattering albedo. A sensitivity analysis is presented to justify this choice, with special attention to aerosol information. Comparisons with the base model show good agreement, especially for the most common cases, giving an absolute error within 0.2 UVI units for the wide range of cases considered. Two tests are performed to assess the performance of the parameterization against UVI measurements. One uses data from a high-quality spectroradiometer at Lauder, New Zealand [45.04°S, 169.684°E, 370 m above mean sea level (MSL)], where the aerosol load is low. The other uses data from a Robertson-Berger-type meter at Girona, Spain (41.97°N, 2.82°E, 100 m MSL), where the aerosol load is higher and where it has been possible to study the effect of aerosol information on the model versus measurement comparison. The parameterization is applied to a climatic analysis of the annual UVI variation in Catalonia, showing the contributions of solar zenith angle, ozone, and aerosols. High-resolution seasonal maps of typical UV index values in Catalonia are presented.
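As a rough illustration only (the abstract lists the input parameters but not the fitted functional form), a multiplicative-correction scheme of this kind can be organized as in the sketch below; the function name, coefficients, and exponents are invented placeholders, not the published parameterization.

```python
import numpy as np

def uvi_estimate(sza_deg, ozone_du, altitude_km, aod, ssa,
                 a=12.5, b=1.2, c=0.42, alt_gain=0.06, k_aer=0.35):
    """Toy cloud-free, snow-free UVI estimate: a zenith-angle/ozone baseline
    scaled by altitude and aerosol corrections. All coefficients are
    illustrative assumptions, not values from the paper."""
    mu = np.cos(np.radians(sza_deg))               # cosine of solar zenith angle
    base = a * mu**b * (300.0 / ozone_du)**c       # power-law SZA and ozone dependence
    alt_corr = 1.0 + alt_gain * altitude_km        # assumed ~6% UVI increase per km
    aer_corr = np.exp(-k_aer * aod * (2.0 - ssa))  # absorbing aerosols (low SSA) damp more
    return base * alt_corr * aer_corr

# Example: low-altitude site at summer noon with a moderate aerosol load
print(round(uvi_estimate(sza_deg=25, ozone_du=320, altitude_km=0.1, aod=0.2, ssa=0.90), 1))
```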
Abstract:
The most frequently used method to demonstrate testosterone abuse is the determination of the testosterone to epitestosterone concentration ratio (T/E ratio) in urine. Nevertheless, it is known that factors other than testosterone administration may increase the T/E ratio. In recent years, the determination of the carbon isotope ratio has proven to be the most promising method to help discriminate between naturally elevated T/E ratios and those reflecting T use. In this paper, an excretion study following oral administration of 40 mg testosterone undecanoate initially and again 13 h later is presented. Four testosterone metabolites (androsterone, etiocholanolone, 5α-androstanediol, and 5β-androstanediol) together with an endogenous reference (5β-pregnanediol) were extracted from the urines, and the 13C/12C ratio of each compound was analyzed by gas chromatography-combustion-isotope ratio mass spectrometry. The results show similar maximum δ13C-value variations (the per mil difference of the 13C/12C ratio from the isotope ratio standard) for the T metabolites and concomitant changes of the T/E ratios after administration of the first and the second dose of T. Whereas the T/E ratios as well as the androsterone, etiocholanolone, and 5α-androstanediol δ13C values returned to baseline 15 h after the second T administration, a decrease of the 5β-androstanediol δ values could be detected for over 40 h. This suggests that measurements of 5β-androstanediol δ values allow the detection of testosterone ingestion over a longer post-administration period than the δ13C values of the other T metabolites or the usual T/E ratio approach.
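For reference, the δ13C notation used above expresses the 13C/12C ratio of a compound as a per mil deviation from an international isotope standard (conventionally VPDB for carbon):

\[ \delta^{13}\mathrm{C} = \left( \frac{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\text{sample}}}{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\text{standard}}} - 1 \right) \times 1000\ \text{‰} \]

More negative δ13C values after administration reflect the 13C-depleted carbon of synthetic, phytosterol-derived testosterone relative to the endogenous reference steroid.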
Abstract:
Research project carried out during a stay at the Max Planck Institute for Human Cognitive and Brain Sciences, Germany, between 2010 and 2012. The main objective of this project was to study the subcortical structures in detail, specifically the role of the basal ganglia in cognitive control during linguistic and non-linguistic processing. To achieve a fine-grained differentiation of the different basal ganglia nuclei, ultra-high-field, high-resolution magnetic resonance imaging (7T MRI) was used. The lateral prefrontal cortex and the basal ganglia work together to mediate working memory and the top-down regulation of cognition. This circuit regulates the balance between automatic and higher-order cognitive responses. Three main experimental conditions were created: unambiguous, ungrammatical, and ambiguous sentences/sequences. Unambiguous sentences/sequences should elicit an automatic response, whereas ambiguous and ungrammatical sentences/sequences produce a conflict with the automatic response and therefore require a higher-order cognitive response. Within the domain of the controlled response, ambiguity and ungrammaticality represent two different dimensions of conflict resolution: whereas for a temporarily ambiguous sentence/sequence a correct interpretation exists, this is not the case for ungrammatical sentences/sequences. In addition, the experimental design included a linguistic and a non-linguistic manipulation, which tested the hypothesis that the effects are domain-general, as well as a semantic and a syntactic manipulation that assessed the differences between the processing of "intrinsic" vs. "structural" ambiguity/errors. The results of the first experiment (linguistic syntax) showed a rostroventral-caudodorsal gradient of cognitive control within the caudate nucleus, that is, the more rostral regions supporting the highest levels of cognitive processing.
Abstract:
A generic LC-MS approach for the absolute quantification of undigested peptides in plasma at mid-picomolar levels is described. Nine human peptides, namely brain natriuretic peptide (BNP), substance P (SubP), parathyroid hormone 1-34 (PTH), C-peptide, orexins A and B (Orex-A and -B), oxytocin (Oxy), gonadoliberin-1 (gonadotropin-releasing hormone or luteinizing hormone-releasing hormone, LHRH), and α-melanotropin (α-MSH), were targeted. Plasma samples were extracted via a two-step procedure: protein precipitation using 1 volume of acetonitrile followed by ultrafiltration of the supernatants on membranes with a MW cut-off of 30 kDa. Using a specific LC-MS setup, large volumes of filtrate (e.g., 2 × 750 μL) were injected and the peptides were trapped on a 1 mm i.d. × 10 mm C8 column using a 10× on-line dilution. The peptides were then back-flushed, and a second on-line dilution (2×) was applied during the transfer step. The refocused peptides were resolved on a 0.3 mm i.d. C18 analytical column. Extraction recovery, matrix effect, and limits of detection were evaluated. Our comprehensive protocol demonstrates a simple and efficient sample preparation procedure followed by the analysis of peptides with limits of detection in the mid-picomolar range. This generic approach can be applied to the determination of most therapeutic peptides and possibly to endogenous peptides with the latest state-of-the-art instruments.
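The abstract does not state how extraction recovery and matrix effect were calculated; a common post-extraction-spike comparison (Matuszewski-style) uses peak areas of neat standards (A), blank plasma extracts spiked after extraction (B), and plasma spiked before extraction (C). A minimal sketch under that assumption:

```python
def matrix_effect_pct(area_B, area_A):
    """ME% = B / A x 100; values below 100% indicate ion suppression."""
    return 100.0 * area_B / area_A

def extraction_recovery_pct(area_C, area_B):
    """RE% = C / B x 100; analyte fraction surviving precipitation + ultrafiltration."""
    return 100.0 * area_C / area_B

# Hypothetical peak areas for one peptide (arbitrary units)
A, B, C = 1.00e6, 8.2e5, 5.4e5
print(f"matrix effect: {matrix_effect_pct(B, A):.0f}%, recovery: {extraction_recovery_pct(C, B):.0f}%")
```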
Abstract:
Annotation of protein-coding genes is a key goal of genome sequencing projects. In spite of tremendous recent advances in computational gene finding, comprehensive annotation remains a challenge. Peptide mass spectrometry is a powerful tool for researching the dynamic proteome and suggests an attractive approach to discover and validate protein-coding genes. We present algorithms to construct and efficiently search spectra against a genomic database, with no prior knowledge of encoded proteins. By searching a corpus of 18.5 million tandem mass spectra (MS/MS) from human proteomic samples, we validate 39,000 exons and 11,000 introns at the level of translation. We present translation-level evidence for novel or extended exons in 16 genes, confirm translation of 224 hypothetical proteins, and discover or confirm over 40 alternative splicing events. Polymorphisms are efficiently encoded in our database, allowing us to observe variant alleles for 308 coding SNPs. Finally, we demonstrate the use of mass spectrometry to improve automated gene prediction, adding 800 correct exons to our predictions using a simple rescoring strategy. Our results demonstrate that proteomic profiling should play a role in any genome sequencing project.
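Searching MS/MS spectra directly against genomic sequence implies translating the DNA in all six reading frames before any peptide matching; the paper's indexing and scoring are far more sophisticated, but a minimal sketch of that preprocessing step (assuming Biopython is available) could look like this:

```python
from Bio.Seq import Seq

def six_frame_translation(dna):
    """Translate a genomic DNA string in all six reading frames.
    Returns a dict mapping frame labels (+1..+3, -1..-3) to protein strings;
    '*' marks stop codons, on which a spectral search would split candidate ORFs."""
    seq = Seq(dna)
    frames = {}
    for strand_label, strand in (("+", seq), ("-", seq.reverse_complement())):
        for offset in range(3):
            sub = strand[offset:]
            sub = sub[: len(sub) - len(sub) % 3]   # trim to a whole number of codons
            frames[f"{strand_label}{offset + 1}"] = str(sub.translate())
    return frames

print(six_frame_translation("ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG"))
```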
Abstract:
The transpressional boundary between the Australian and Pacific plates in the central South Island of New Zealand comprises the Alpine Fault and a broad region of distributed strain concentrated in the Southern Alps but encompassing regions further to the east, including the northwest Canterbury Plains. Low to moderate levels of seismicity (e.g., two M > 5 events since 1974 and two M > 4.0 events in 2009) and Holocene sediments offset or disrupted along rare exposed active fault segments are evidence for ongoing tectonism in the northwest plains, the surface topography of which is remarkably flat and even. Because the geology underlying the late Quaternary alluvial fan deposits that carpet most of the plains is not established, the detailed tectonic evolution of this region and the potential for larger earthquakes are only poorly understood. To address these issues, we have processed and interpreted high-resolution (2.5 m subsurface sampling interval) seismic data acquired along lines strategically located relative to extensive rock exposures to the north, west, and southwest and rare exposures to the east. Geological information provided by these rock exposures offers important constraints on the interpretation of the seismic data. The processed seismic reflection sections image a variably thick layer of generally undisturbed younger (i.e., < 24 ka) Quaternary alluvial sediments unconformably overlying an older (> 59 ka) Quaternary sedimentary sequence that shows evidence of moderate faulting and folding during and subsequent to deposition. These Quaternary units are in unconformable contact with Late Cretaceous-Tertiary interbedded sedimentary and volcanic rocks that are highly faulted, folded, and tilted. The lowest imaged unit is largely reflection-free Permian-Triassic basement rock. Quaternary-age deformation has affected all the rocks underlying the younger alluvial sediments, and there is evidence for ongoing deformation. Eight primary and numerous secondary faults, as well as a major anticlinal fold, are revealed on the seismic sections. Folded sedimentary and volcanic units are observed in the hanging walls and footwalls of most faults. Five of the primary faults represent plausible extensions of mapped faults, three of which are active. The major anticlinal fold is the probable continuation of a known active structure. A magnitude 7.1 earthquake occurred on 4 September 2010 near the southeastern edge of our study area. This predominantly right-lateral strike-slip event and its numerous aftershocks (ten with magnitudes >= 5 within one week of the main event) highlight the primary message of our paper: the generally flat and topographically featureless Canterbury Plains are underlain by a network of active faults that have the potential to generate significant earthquakes.
Abstract:
A 41-year-old male presented with severe frostbite that was monitored clinically and with a new laser Doppler imaging (LDI) camera that records arbitrary microcirculatory perfusion units (1-256 arbitrary perfusion units, APUs). LDI monitoring detected perfusion differences in the hand and foot that were not seen visually. On days 4-5 after injury, LDI showed that while the fingers did not experience any significant perfusion change (average of 31±25 APUs on day 5), the patient's left big toe did (from 17±29 APUs on day 4 to 103±55 APUs on day 5). These changes in regional perfusion were not detectable by visual examination. On day 53 post-injury, all fingers with reduced perfusion by LDI were amputated, while the toe could be salvaged. This case clearly demonstrates that insufficient microcirculatory perfusion can be identified using LDI in ways that visual examination alone does not permit, allowing prognosis of clinical outcomes. Such information may also be used to develop improved treatment approaches.
Abstract:
A gas chromatographic-mass spectrometric method is presented that allows the simultaneous determination of the plasma concentrations of fluvoxamine and of the enantiomers of fluoxetine and norfluoxetine after derivatization with the chiral reagent (S)-(-)-N-trifluoroacetylprolyl chloride. No interference was observed from endogenous compounds following the extraction of plasma samples from six different human subjects. The standard curves were linear over a working range of 10 to 750 ng/ml for racemic fluoxetine and norfluoxetine and of 50 to 500 ng/ml for fluvoxamine. Recoveries ranged from 50 to 66% for the three compounds. Intra- and inter-day coefficients of variation ranged from 4 to 10% for fluvoxamine and from 4 to 13% for fluoxetine and norfluoxetine. The limits of quantitation of the method were found to be 2 ng/ml for fluvoxamine and 1 ng/ml for the (R)- and (S)-enantiomers of fluoxetine and norfluoxetine, thus allowing its use for single-dose pharmacokinetics. Finally, by using a steeper temperature gradient, much shorter analysis times can be obtained if only the concentration of fluvoxamine is of interest.
Abstract:
OBJECT: To study a scan protocol for coronary magnetic resonance angiography based on multiple breath-holds featuring 1D motion compensation, and to compare the resulting image quality to that of a navigator-gated free-breathing acquisition. Image reconstruction was performed using L1-regularized iterative SENSE. MATERIALS AND METHODS: The effects of respiratory motion on the Cartesian sampling scheme were minimized by performing data acquisition in multiple breath-holds. During the scan, repetitive readouts through the k-space center were used to detect and correct the respiratory displacement of the heart by exploiting the self-navigation principle in image reconstruction. In vivo experiments were performed in nine healthy volunteers, and the resulting image quality was compared to a navigator-gated reference in terms of vessel length and sharpness. RESULTS: Acquisition in breath-holds is an effective method to reduce the scan time by more than 30% compared to the navigator-gated reference. Although an equivalent mean image quality with respect to the reference was achieved with the proposed method, the 1D motion compensation did not work equally well in all cases. CONCLUSION: In general, the image quality scaled with the robustness of the motion compensation. Nevertheless, the presented setup provides a positive basis for future extension with more advanced motion compensation methods.
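For context, L1-regularized iterative SENSE reconstructions of this kind are typically posed as a compressed-sensing problem of the form

\[ \hat{x} = \arg\min_{x} \; \lVert E x - y \rVert_2^2 + \lambda \lVert \Psi x \rVert_1, \]

where \(E\) is the SENSE encoding operator (coil sensitivities plus the undersampled Fourier transform), \(y\) the acquired k-space data, \(\Psi\) a sparsifying transform, and \(\lambda\) the regularization weight; the specific transform and solver used in this study are not given in the abstract.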
Abstract:
Little is known about how human amnesia affects the activation of cortical networks during memory processing. In this study, we recorded high-density evoked potentials in 12 healthy control subjects and 11 amnesic patients with various types of brain damage affecting the medial temporal lobes, diencephalic structures, or both. Subjects performed a continuous recognition task composed of meaningful designs. Using whole-scalp spatiotemporal mapping techniques, we found that, during the first 200 ms following picture presentation, the map configurations of amnesic patients and controls were indistinguishable. Beyond this period, processing significantly differed. Between 200 and 350 ms, amnesic patients expressed different topographical maps than controls in response to new and repeated pictures. From 350 to 550 ms, healthy subjects showed modulation of the same maps in response to new and repeated items. In amnesic patients, by contrast, presentation of repeated items induced different maps, indicating distinct cortical processing of new and old information. The study indicates that the cortical mechanisms underlying memory formation and re-activation in amnesia fundamentally differ from normal memory processing.
Abstract:
Accurate characterization of the spatial distribution of hydrological properties in heterogeneous aquifers at a range of scales is a key prerequisite for reliable modeling of subsurface contaminant transport, and is essential for designing effective and cost-efficient groundwater management and remediation strategies. To this end, high-resolution geophysical methods have shown significant potential to bridge a critical gap in subsurface resolution and coverage between traditional hydrological measurement techniques such as borehole log/core analyses and tracer or pumping tests. An important and still largely unresolved issue, however, is how to best quantitatively integrate geophysical data into a characterization study in order to estimate the spatial distribution of one or more pertinent hydrological parameters, thus improving hydrological predictions. Recognizing the importance of this issue, the aim of the research presented in this thesis was to first develop a strategy for the assimilation of several types of hydrogeophysical data having varying degrees of resolution, subsurface coverage, and sensitivity to the hydrologic parameter of interest. In this regard, a novel simulated annealing (SA)-based conditional simulation approach was developed and then tested in its ability to generate realizations of porosity given crosshole ground-penetrating radar (GPR) and neutron porosity log data. This was done successfully for both synthetic and field data sets. A subsequent issue that needed to be addressed involved assessing the potential benefits and implications of the resulting porosity realizations in terms of groundwater flow and contaminant transport. This was investigated synthetically, assuming first that the relationship between porosity and hydraulic conductivity was well-defined. Then, the relationship was itself investigated in the context of a calibration procedure using hypothetical tracer test data. Essentially, the relationship best predicting the observed tracer test measurements was determined given the geophysically derived porosity structure. Both of these investigations showed that the SA-based approach, in general, allows much more reliable hydrological predictions than other more elementary techniques considered. Further, the developed calibration procedure was seen to be very effective, even at the scale of tomographic resolution, for predictions of transport. This also held true at locations within the aquifer where only geophysical data were available. This is significant because the acquisition of hydrological tracer test measurements is clearly more complicated and expensive than the acquisition of geophysical measurements. Although the above methodologies were tested using porosity logs and GPR data, the findings are expected to remain valid for a large number of pertinent combinations of geophysical and borehole log data of comparable resolution and sensitivity to the hydrological target parameter. Moreover, the obtained results give us confidence in future developments in integration methodologies for geophysical and hydrological data to improve the 3-D estimation of hydrological properties.
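As a very rough illustration of the simulated-annealing conditional simulation idea described above: random swaps between unconditioned cells are accepted or rejected according to how they change a mismatch objective, while cells carrying conditioning data (e.g., neutron porosity log values) are never perturbed. The objective, cooling schedule, target statistic, and conditioning values below are placeholders, not those used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(field, target_lag1_corr=0.8):
    """Toy spatial-structure mismatch: squared error in lag-1 correlation
    (a crude stand-in for matching a variogram or GPR-derived structure)."""
    lag1 = np.corrcoef(field[:-1], field[1:])[0, 1]
    return (lag1 - target_lag1_corr) ** 2

def sa_conditional_simulation(n=200, n_iter=20000, t0=1.0, cooling=0.9995,
                              conditioning=None):
    """Anneal a 1-D porosity realization toward a target spatial statistic
    while honoring hard conditioning data (indices held fixed).
    Swapping cell values preserves the histogram of the initial field."""
    conditioning = conditioning or {10: 0.31, 120: 0.22}   # hypothetical log values
    field = rng.uniform(0.1, 0.4, n)                       # initial porosity guess
    for idx, val in conditioning.items():
        field[idx] = val                                   # impose hard data
    free = np.array([i for i in range(n) if i not in conditioning])
    cost, temp = objective(field), t0
    for _ in range(n_iter):
        i, j = rng.choice(free, size=2, replace=False)
        field[i], field[j] = field[j], field[i]            # propose a swap
        new_cost = objective(field)
        if new_cost < cost or rng.random() < np.exp((cost - new_cost) / temp):
            cost = new_cost                                 # accept the move
        else:
            field[i], field[j] = field[j], field[i]         # reject: undo the swap
        temp *= cooling
    return field, cost

realization, final_cost = sa_conditional_simulation()
print(f"final objective: {final_cost:.3g}")
```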