172 results for quantification of aggregates


Relevance:

90.00%

Publisher:

Abstract:

The aim of the present study was to retrospectively estimate the absorbed dose to the kidneys in 17 patients treated in clinical practice with 90Y-ibritumomab tiuxetan for non-Hodgkin's lymphoma, using appropriate available dosimetric approaches. METHODS: The single-view effective point source method, including background subtraction, is used for planar quantification of renal activity. Since high uptake in the liver affects the activity estimate for the right kidney, the dose to the left kidney serves as a surrogate for the dose to both kidneys. Calculation of the absorbed dose is based on the Medical Internal Radiation Dose (MIRD) methodology, with adjustment for the patient's kidney mass. RESULTS: The median dose to the kidneys, based on the left kidney only, is 2.1 mGy/MBq (range, 0.92-4.4), whereas a value of 2.5 mGy/MBq (range, 1.5-4.7) is obtained when the activity in both kidneys is considered. CONCLUSIONS: Irrespective of the method, the kidney doses obtained in the present study were about 10 times higher than the median dose of 0.22 mGy/MBq (range, 0.00-0.95) originally reported in the study leading to Food and Drug Administration approval. Our results are in good agreement with kidney-dose estimates recently reported for high-dose myeloablative therapy with 90Y-ibritumomab tiuxetan.
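
As a rough illustration of the mass-adjusted MIRD-style calculation described above, the sketch below scales a reference S value by the reference-to-patient kidney mass ratio and multiplies it by the time-integrated activity. All numbers and helper names are hypothetical, not values taken from the study.

```python
# Minimal sketch of a mass-adjusted MIRD-style kidney self-dose estimate.
# All numbers and names are illustrative placeholders, not study data.

def absorbed_dose_mgy(time_integrated_activity_mbq_h: float,
                      s_value_ref_mgy_per_mbq_h: float,
                      kidney_mass_ref_g: float,
                      kidney_mass_patient_g: float) -> float:
    """Self-dose D = A_tilde * S_ref * (m_ref / m_patient).

    For a pure beta emitter such as 90Y, the kidney self-dose S value
    scales approximately with the inverse of the organ mass.
    """
    s_value_patient = s_value_ref_mgy_per_mbq_h * (kidney_mass_ref_g / kidney_mass_patient_g)
    return time_integrated_activity_mbq_h * s_value_patient

# Hypothetical example: A_tilde = 500 MBq*h in the left kidney,
# reference S = 3.6 mGy/(MBq*h) for a 150 g kidney, patient kidney mass 120 g.
dose = absorbed_dose_mgy(500.0, 3.6, 150.0, 120.0)
injected_activity_mbq = 1100.0               # hypothetical administered activity
print(f"{dose:.0f} mGy total, {dose / injected_activity_mbq:.2f} mGy/MBq")  # ~2 mGy/MBq
```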

Relevance:

90.00%

Publisher:

Abstract:

The identification and quantification of proteins and lipids is of major importance for the diagnosis, prognosis and understanding of the molecular mechanisms involved in disease development. Owing to its selectivity and sensitivity, mass spectrometry has become a key technique in analytical platforms for proteomic and lipidomic investigations. Using this technique, many strategies have been developed, based on unbiased or targeted approaches, to highlight or monitor molecules of interest from biomatrices. Although these approaches have largely been employed in cancer research, this type of investigation has met with growing interest in the field of cardiovascular disorders, potentially leading to the discovery of novel biomarkers and the development of new therapies. In this paper, we will review the different mass spectrometry-based proteomic and lipidomic strategies applied to cardiovascular diseases, especially atherosclerosis. Particular attention will be given to recent developments and to the role of bioinformatics in data treatment. This review will be of broad interest to the medical community by providing a tutorial on how mass spectrometric strategies can support clinical trials.

Relevance:

90.00%

Publisher:

Abstract:

Conformational changes upon channel activation: five enhanced green fluorescent protein (EGFP) molecules (green cylinders) were integrated into the intracellular part of the homopentameric ionotropic 5-HT3 receptor. This allowed the extracellular binding of fluorescent ligands to be detected by FRET to EGFP, and also enabled the quantification of agonist-induced conformational changes in the intracellular region of the receptor by homo-FRET between the EGFPs. The approach opens novel ways of probing receptor activation and of functionally screening therapeutic compounds.
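
The FRET readout alluded to above can be expressed with the standard donor-quenching relation; the snippet below is a generic sketch, not code or data from the study, that turns donor intensities measured with and without an acceptor-labeled ligand into an apparent FRET efficiency.

```python
# Generic FRET efficiency from donor quenching: E = 1 - F_DA / F_D.
# Intensities below are hypothetical, background-corrected values.

def fret_efficiency(donor_with_acceptor: float, donor_alone: float) -> float:
    """Apparent FRET efficiency from donor fluorescence with (F_DA)
    and without (F_D) the acceptor present."""
    if donor_alone <= 0:
        raise ValueError("donor-only intensity must be positive")
    return 1.0 - donor_with_acceptor / donor_alone

print(fret_efficiency(donor_with_acceptor=620.0, donor_alone=1000.0))  # -> 0.38
```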

Relevance:

90.00%

Publisher:

Abstract:

Tobacco consumption is a global epidemic responsible for a vast burden of disease. With pharmacological properties sought after by consumers and responsible for addiction, nicotine is the main driver of this phenomenon. Accordingly, smokeless tobacco products are growing in popularity in sport, owing to potential performance-enhancing properties and the absence of adverse effects on the respiratory system. Nevertheless, nicotine does not appear on the 2011 World Anti-Doping Agency (WADA) Prohibited List or Monitoring Program, for lack of a comprehensive large-scale prevalence survey. This work therefore describes a one-year monitoring study of urine specimens from professional athletes of different disciplines, covering 2010 and 2011. A method for the detection and quantification of nicotine, its major metabolites (cotinine, trans-3-hydroxycotinine, nicotine-N'-oxide and cotinine-N-oxide) and minor tobacco alkaloids (anabasine, anatabine and nornicotine) was developed, relying on ultra-high pressure liquid chromatography coupled to triple quadrupole mass spectrometry (UHPLC-TQ-MS/MS). A simple and fast dilute-and-shoot sample treatment was performed, followed by hydrophilic interaction chromatography-tandem mass spectrometry (HILIC-MS/MS) operated in positive electrospray ionization (ESI) mode with multiple reaction monitoring (MRM) data acquisition. After method validation, assessing the prevalence of nicotine consumption in sport involved the analysis of 2185 urine samples, covering 43 different sports. Concentration distributions of major nicotine metabolites, minor nicotine metabolites and tobacco alkaloids ranged from 10 ng/mL (LLOQ) up to 32,223, 6670 and 538 ng/mL, respectively. Compounds of interest were detected at trace levels in 23.0% of the urine specimens, with concentrations corresponding to an exposure within the last three days in 18.3% of samples. Likewise, hypothesizing conservative concentration limits for active nicotine consumption prior to and/or during sport practice (50 ng/mL for nicotine, cotinine and trans-3-hydroxycotinine; 25 ng/mL for nicotine-N'-oxide, cotinine-N-oxide, anabasine, anatabine and nornicotine) revealed a prevalence of 15.3% among athletes. While this number may appear lower than the worldwide smoking prevalence of around 25%, focusing the study on selected sports revealed more alarming findings. Indeed, active nicotine consumption in ice hockey, skiing, biathlon, bobsleigh, skating, football, basketball, volleyball, rugby, American football, wrestling and gymnastics was found to range between 19.0% and 55.6%. Therefore, considering the adverse effects of smoking on the respiratory tract and the numerous health threats detrimental to top-level sport practice, these findings strongly support the likelihood of smokeless tobacco consumption for performance enhancement.
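
To make the classification rule concrete, the sketch below applies the conservative cut-offs quoted in the abstract (50 ng/mL for nicotine, cotinine and trans-3-hydroxycotinine; 25 ng/mL for the other analytes) to a list of per-sample concentrations and computes a prevalence. The sample data and function names are invented for illustration.

```python
# Sketch: flag "active nicotine consumption" using the cut-offs quoted above.
# Sample values are hypothetical; keys follow the analytes named in the abstract.

CUTOFFS_NG_ML = {
    "nicotine": 50.0, "cotinine": 50.0, "trans-3-hydroxycotinine": 50.0,
    "nicotine-N'-oxide": 25.0, "cotinine-N-oxide": 25.0,
    "anabasine": 25.0, "anatabine": 25.0, "nornicotine": 25.0,
}

def is_active_consumer(sample: dict) -> bool:
    """A sample is flagged if any analyte reaches its cut-off."""
    return any(sample.get(analyte, 0.0) >= cutoff
               for analyte, cutoff in CUTOFFS_NG_ML.items())

samples = [
    {"cotinine": 310.0, "trans-3-hydroxycotinine": 120.0},  # flagged
    {"cotinine": 12.0},                                      # trace only
    {"anabasine": 40.0},                                     # flagged
]
prevalence = sum(map(is_active_consumer, samples)) / len(samples)
print(f"prevalence = {prevalence:.1%}")  # -> 66.7% for this toy set
```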

Relevance:

90.00%

Publisher:

Abstract:

Diabetes mellitus (DM) is a major cause of peripheral neuropathy. More than 220 million people worldwide suffer from type 2 DM, and approximately half of them will develop diabetic peripheral neuropathy (DPN). While of significant medical importance, the pathophysiological changes underlying DPN are still poorly understood. To gain more insight into DPN associated with type 2 DM, we used a rodent model of this form of diabetes, the db/db mouse. During in-vivo conduction velocity studies on these animals, we observed multiple spiking following a single stimulation. This prompted us to evaluate the excitability properties of db/db peripheral nerves. Ex-vivo electrophysiological evaluation revealed a significant increase in the excitability of db/db sciatic nerves. While the shape and kinetics of the compound action potential of db/db nerves were the same as those of control nerves, we observed an increase in the after-hyperpolarization phase (AHP) under diabetic conditions. Using pharmacological inhibitors, we demonstrated that both the peripheral nerve hyperexcitability (PNH) and the increased AHP were mostly mediated by decreased activity of Kv1 channels. Importantly, we corroborated these data at the molecular level. We observed a strong reduction of Kv1.2 channel presence in the juxtaparanodal regions of teased fibers from db/db mice as compared to control mice. Quantification of both Kv1.2 isoforms in DRG neurons and in the endoneurial compartment of the peripheral nerve by Western blotting revealed that less of the mature Kv1.2 isoform was integrated into the axonal membranes at the juxtaparanodes. Our observation that the peripheral nerve hyperexcitability present in db/db mice is, at least in part, a consequence of changes in potassium channel distribution suggests that the same mechanism may also mediate PNH in diabetic patients.
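
As a simple illustration of how an after-hyperpolarization could be quantified from a recorded sweep, the sketch below takes a voltage trace and reports the peak amplitude and the maximal post-spike undershoot relative to the pre-stimulus baseline. It is a generic measurement recipe with a synthetic trace, not the analysis used in the study.

```python
import numpy as np

# Generic sketch: quantify peak and after-hyperpolarization (AHP) amplitude
# from a single sweep. The synthetic trace below stands in for real data.

def peak_and_ahp(trace_mv: np.ndarray, baseline_samples: int = 100):
    """Return (peak amplitude, AHP amplitude) relative to the pre-stimulus baseline."""
    baseline = trace_mv[:baseline_samples].mean()
    peak_idx = int(np.argmax(trace_mv))
    peak_amp = trace_mv[peak_idx] - baseline
    ahp_amp = baseline - trace_mv[peak_idx:].min()   # depth of the undershoot
    return peak_amp, ahp_amp

# Hypothetical sweep: flat baseline, brief spike at 5 ms, small undershoot at 8 ms.
t = np.arange(0, 20e-3, 1e-5)                        # 20 ms sampled at 100 kHz
trace = -70 + 80 * np.exp(-((t - 5e-3) / 2e-4) ** 2) - 3 * np.exp(-((t - 8e-3) / 1e-3) ** 2)
print(peak_and_ahp(trace))                           # -> (~80 mV, ~3 mV)
```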

Relevance:

90.00%

Publisher:

Abstract:

Metabolic labeling techniques have recently become popular tools for the quantitative profiling of proteomes. Classical stable isotope labeling with amino acids in cell culture (SILAC) uses pairs of heavy/light isotopic forms of amino acids to introduce predictable mass differences into the protein samples to be compared. After proteolysis, pairs of cognate precursor peptides can be correlated, and their intensities can be used for mass spectrometry-based relative protein quantification. We present an alternative SILAC approach in which two cell cultures are grown in media containing isobaric forms of amino acids, labeled either with 13C on the carbonyl (C-1) carbon or with 15N on the backbone nitrogen. Labeled peptides from both samples have the same nominal mass and nearly identical MS/MS spectra but, upon fragmentation, generate distinct immonium ions separated by 1 amu. When labeled protein samples are mixed, the intensities of these immonium ions can be used for the relative quantification of the parent proteins. We validated the labeling of cellular proteins with valine, isoleucine and leucine, with a coverage of 97% of all tryptic peptides. We improved the sensitivity for the detection of the quantification ions on a pulsing instrument by using a specific fast scan event. The analysis of a protein mixture with a known heavy/light ratio showed reliable quantification. Finally, the application of the technique to the analysis of two melanoma cell lines yielded quantitative data consistent with those obtained by a classical two-dimensional DIGE analysis of the same samples. Our method combines the features of the SILAC technique with the advantages of isobaric labeling schemes like iTRAQ. We discuss advantages and disadvantages of isobaric SILAC with immonium ion splitting as well as possible ways to improve it.
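
A minimal sketch of the quantification idea described above, assuming that for each labeled peptide the MS/MS spectrum yields the intensities of the two label-specific immonium ions (1 amu apart); the ratio of these intensities estimates the relative abundance of the parent protein in the two samples. The data and function names are illustrative.

```python
# Sketch: relative quantification from the pair of label-specific immonium ions.
# Intensities are hypothetical; each tuple is (I_13C_channel, I_15N_channel).

from statistics import median

def peptide_ratio(i_channel_a: float, i_channel_b: float) -> float:
    """Relative abundance of sample A vs. sample B for one peptide spectrum."""
    return i_channel_a / i_channel_b

def protein_ratio(peptide_ion_pairs: list[tuple[float, float]]) -> float:
    """Summarize a protein as the median of its peptide-level ratios."""
    return median(peptide_ratio(a, b) for a, b in peptide_ion_pairs)

spectra = [(1.8e4, 9.2e3), (2.1e4, 1.0e4), (1.6e4, 8.5e3)]
print(f"protein A/B ratio = {protein_ratio(spectra):.2f}")  # close to 2
```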

Relevance:

90.00%

Publisher:

Abstract:

Indirect calorimetry based on respiratory exchange measurements has been used successfully since the beginning of the century to obtain an estimate of heat production (energy expenditure) in human subjects and animals. The errors inherent in this classical technique can stem from various sources: 1) the model of calculation and its assumptions, 2) the calorimetric factors used, 3) technical factors and 4) human factors. The physiological and biochemical factors influencing the interpretation of calorimetric data include changes in the size of the bicarbonate and urea pools and the accumulation or loss (via breath, urine or sweat) of intermediary metabolites (gluconeogenesis, ketogenesis). More recently, respiratory gas exchange data have been used to estimate substrate utilization rates in various physiological and metabolic situations (fasting, post-prandial state, etc.). It should be recalled that indirect calorimetry provides an index of overall substrate disappearance rates, which is often incorrectly assumed to be equivalent to substrate "oxidation" rates. Unfortunately, there is no adequate gold standard with which to validate whole-body substrate "oxidation" rates; this contrasts with the "validation" of heat production by indirect calorimetry through the use of direct calorimetry under strict thermal equilibrium conditions. Tracer techniques using stable (or radioactive) isotopes represent an independent way of assessing substrate utilization rates. When carbohydrate metabolism is measured with both techniques, indirect calorimetry generally provides glucose "oxidation" rates consistent with those from isotopic tracers, but only when certain metabolic processes (such as gluconeogenesis and lipogenesis) are minimal and/or when the respiratory quotients are not at the extremes of the physiological range. However, it is believed that the tracer techniques underestimate true glucose "oxidation" rates because they fail to account for glycogenolysis within the glucose-storing tissues, since this glucose escapes the systemic circulation. A major advantage of isotopic techniques is that they can estimate (given certain assumptions) various metabolic processes (such as gluconeogenesis) in a noninvasive way. Furthermore, when a fourth substrate (such as ethanol) is administered in addition to the three macronutrients, isotopic quantification of substrate "oxidation" allows one to eliminate the inherent assumptions made by indirect calorimetry. In conclusion, isotopic tracer techniques and indirect calorimetry should be considered complementary techniques, in particular since the tracer techniques require the measurement of carbon dioxide production obtained by indirect calorimetry. However, it should be kept in mind that the assessment of substrate oxidation by indirect calorimetry may involve large errors, in particular over short periods of time. By indirect calorimetry, energy expenditure (heat production) is calculated with substantially less error than substrate oxidation rates.
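
The abstract stays qualitative, but the underlying arithmetic is usually carried out with published formulas of the Weir and Frayn type; the sketch below uses commonly cited coefficients (not taken from this paper) to turn VO2, VCO2 and urinary nitrogen into energy expenditure and net substrate disappearance rates. Inputs are illustrative.

```python
# Sketch of indirect-calorimetry arithmetic using commonly cited coefficients
# (Weir-type equation for energy expenditure; Frayn-type equations for net
# substrate disappearance). Coefficients and inputs are illustrative only.

def energy_expenditure_kcal_min(vo2_l_min: float, vco2_l_min: float) -> float:
    """Weir-type estimate of heat production (protein term omitted)."""
    return 3.941 * vo2_l_min + 1.106 * vco2_l_min

def net_substrate_rates_g_min(vo2_l_min: float, vco2_l_min: float, urinary_n_g_min: float):
    """Frayn-type net carbohydrate and fat 'oxidation' (disappearance) rates."""
    cho = 4.55 * vco2_l_min - 3.21 * vo2_l_min - 2.87 * urinary_n_g_min
    fat = 1.67 * (vo2_l_min - vco2_l_min) - 1.92 * urinary_n_g_min
    return cho, fat

# Hypothetical resting measurement.
vo2, vco2, n = 0.25, 0.21, 0.008          # L/min, L/min, g N/min
print(energy_expenditure_kcal_min(vo2, vco2))      # ~1.22 kcal/min
print(net_substrate_rates_g_min(vo2, vco2, n))     # (g CHO/min, g fat/min)
```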

Relevance:

90.00%

Publisher:

Abstract:

Activation of the mitogen-activated protein (MAP) kinase cascade by progesterone in Xenopus oocytes leads to a marked down-regulation of the activity of the amiloride-sensitive epithelial sodium channel (ENaC). Here we have studied the signaling pathways involved in the effect of progesterone on ENaC activity. We demonstrate that: (i) truncation of the C termini of the α, β and γ ENaC subunits results in the loss of the progesterone effect on ENaC; (ii) the effect of progesterone was also suppressed by mutating the conserved tyrosine residues in the Pro-X-X-Tyr (PY) motifs of the C termini of the β and γ ENaC subunits (βY618A and γY628A); (iii) the down-regulation of ENaC activity by progesterone was also suppressed by co-expression of the ENaC subunits with a catalytically inactive mutant of Nedd4-2, a ubiquitin ligase previously shown to decrease ENaC cell-surface expression via a ubiquitin-dependent internalization/degradation mechanism; (iv) the effect of progesterone was significantly reduced by mutation of the consensus sites (βT613A and γT623A) for ENaC phosphorylation by the extracellular signal-regulated kinase (ERK), a MAP kinase previously shown to facilitate the binding of Nedd4 ubiquitin ligases to ENaC; (v) quantification of cell-surface-expressed ENaC subunits revealed that progesterone decreases the ENaC open probability (whole-cell P(o), wcP(o)) and not its cell-surface expression. Collectively, these results demonstrate that the binding of active Nedd4-2 to ENaC is a crucial step in the mechanism of ENaC inhibition by progesterone. Upon activation of ERK, the effect of Nedd4-2 on ENaC open probability can become more important than its effect on ENaC cell-surface expression.
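
The logic of point (v) can be made explicit with a little arithmetic: if the whole-cell amiloride-sensitive current falls while the number of channels at the surface is unchanged, the whole-cell open probability must fall proportionally (I = N * i * Po). The sketch below, with invented numbers, simply rearranges that relation.

```python
# Sketch: relative change in whole-cell open probability (wcPo) from
# I = N * i * Po, assuming the single-channel current i is unchanged.
# All values are hypothetical.

def relative_wcpo(i_treated: float, i_control: float,
                  n_treated: float, n_control: float) -> float:
    """wcPo(treated)/wcPo(control) = (I_t/I_c) / (N_t/N_c)."""
    return (i_treated / i_control) / (n_treated / n_control)

# Hypothetical: current drops to 40% of control while surface expression
# (e.g., quantified surface-labeled subunits) stays at ~100%.
print(relative_wcpo(i_treated=0.4, i_control=1.0, n_treated=1.0, n_control=1.0))  # 0.4
```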

Relevance:

90.00%

Publisher:

Abstract:

Magnetic resonance imaging is a rapidly developing modality in cardiology. It offers excellent image definition and a large field of view, allowing a more accurate morphological assessment of cardiac malformations. Owing to its unique versatility and its ability to provide myocardial tissue characterization, cardiac magnetic resonance (CMR) is now recognized as a central imaging modality for a wide range of indications in congenital heart disease, including assessment of post-surgical cardiac anatomy, quantification of valvular disease and detection of myocardial ischemia. CMR provides useful diagnostic information without any radiation exposure and improves the overall management of patients with congenital heart disease.

Relevance:

90.00%

Publisher:

Abstract:

The retina is one of the most important human sensory tissues, since it detects and transmits all visual information from the outside world to the brain. Retinitis pigmentosa (RP) is the name given to a group of inherited diseases that specifically affect the photoreceptors of the retina and in many instances lead to blindness. Dominant mutations in PRPF31, a gene that encodes a pre-mRNA splicing factor, cause retinitis pigmentosa with reduced penetrance. We functionally investigated a novel mutation, identified in a large family with autosomal dominant RP, and 7 other mutations (substitutions and microdeletions) in 12 patients from 7 families with PRPF31-linked RP. Seven mutations lead to PRPF31 mRNA with premature stop codons and one to mRNA lacking the exon containing the initiation codon. Quantification of PRPF31 mRNA and protein levels revealed a significant reduction in cell lines derived from patients compared to non-carriers of mutations in PRPF31. Allelic quantification of PRPF31 mRNA indicated that the level of mutated mRNA is very low compared to wild-type mRNA. No mutant protein was detected, and the subnuclear localization of wild-type PRPF31 remains the same in cell lines from patients and controls. Blocking nonsense-mediated mRNA decay in cell lines derived from patients partially restored the mutated PRPF31 mRNA, but the corresponding proteins remained undetectable, even when protein degradation pathways were inhibited. Our results demonstrate that the vast majority of PRPF31 mutations result in null alleles, since they are subject to surveillance mechanisms that degrade the mutated mRNA and possibly block its translation. Altogether, these data indicate that the likely cause of PRPF31-linked RP is haploinsufficiency rather than a dominant negative effect. Penetrance of PRPF31 mutations has previously been shown to be inversely correlated with the level of PRPF31 mRNA, since high expression of wild-type PRPF31 mRNA protects from the disease. Consequently, we investigated the genetic modifiers that control the expression of PRPF31 by quantifying PRPF31 mRNA levels in cell lines derived from 200 individuals from 15 families representative of the general population. By linkage analysis we identified an 8.2-Mb region on chromosome 14q21-23 that contains a gene involved in the modulation of PRPF31 expression. We also assessed a previously mapped penetrance factor, invariably located on the wild-type allele and linked to the PRPF31 locus in asymptomatic patients from different families with RP. We demonstrated that this modifier increases the expression of both PRPF31 alleles already at the pre-mRNA level. Finally, our data suggest that PRPF31 mRNA expression, and consequently the penetrance of PRPF31 mutations, is modulated by at least 2 diffusible compounds, which act on both PRPF31 alleles during their transcription.
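
The allelic quantification mentioned above boils down to comparing the signal from the mutant and wild-type transcripts; the sketch below expresses this as a simple mutant-allele fraction computed from allele-specific counts (for example, sequencing reads or allele-specific qPCR signals). The names and numbers are illustrative and do not reproduce the assay used in the study.

```python
# Sketch: mutant-allele fraction of PRPF31 mRNA from allele-specific counts.
# Counts are hypothetical; in cDNA from a heterozygous carrier, a fraction far
# below 0.5 indicates that the mutant transcript is depleted (e.g., by NMD).

def mutant_fraction(mutant_counts: int, wildtype_counts: int) -> float:
    total = mutant_counts + wildtype_counts
    if total == 0:
        raise ValueError("no informative counts")
    return mutant_counts / total

genomic_dna = mutant_fraction(480, 520)   # ~0.48, both alleles present
cdna = mutant_fraction(60, 940)           # ~0.06, mutant mRNA strongly reduced
print(genomic_dna, cdna)
```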

Relevance:

90.00%

Publisher:

Abstract:

Recently, the spin-echo full-intensity acquired localized (SPECIAL) spectroscopy technique was proposed to unite the advantages of short TEs on the order of milliseconds with full sensitivity, and was applied to the rat brain in vivo. In the present study, SPECIAL was adapted and optimized for use on clinical platforms at 3T and 7T by combining interleaved water suppression (WS) and outer volume saturation (OVS) with optimized sequence timing and improved shimming using FASTMAP. High-quality single-voxel spectra of the human brain were acquired at TEs of 6 ms or less on clinical 3T and 7T systems in six volunteers. Narrow linewidths (6.6 +/- 0.6 Hz at 3T and 12.1 +/- 1.0 Hz at 7T for water) and the high signal-to-noise ratio (SNR) of the artifact-free spectra enabled the quantification of a neurochemical profile consisting of 18 metabolites with Cramér-Rao lower bounds (CRLBs) below 20% at both field strengths. The enhanced sensitivity and increased spectral resolution at 7T compared to 3T allowed a two-fold reduction in scan time, increased precision of quantification for 12 metabolites, and the additional quantification of lactate with a CRLB below 20%. The improved sensitivity at 7T was also demonstrated by a 1.7-fold increase in average SNR (defined as peak height divided by the root-mean-square noise) per unit time.
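
Using the SNR definition quoted above (peak height divided by the RMS noise), the small sketch below computes SNR from a spectrum array, estimating the noise from a signal-free region. The data are synthetic and purely illustrative.

```python
import numpy as np

# Sketch: SNR = peak height / RMS noise, following the definition quoted above.
# The spectrum below is synthetic; the noise is estimated from a signal-free region.

def snr(spectrum: np.ndarray, noise_region: slice) -> float:
    noise_rms = np.sqrt(np.mean(spectrum[noise_region] ** 2))
    return float(spectrum.max() / noise_rms)

rng = np.random.default_rng(0)
spec = rng.normal(0.0, 1.0, 4096)   # baseline noise
spec[2048] += 50.0                  # one synthetic peak of height ~50

print(snr(spec, noise_region=slice(0, 1000)))   # ~50
```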

Relevance:

90.00%

Publisher:

Abstract:

Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and of the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive, low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity and the larger-scale trend of the prevailing hydraulic conductivity field.
The results also indicate that this novel data integration approach is remarkably flexible and robust, and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems, in order to allow for a more accurate and realistic quantification of the uncertainty associated with the inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigate two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach proves highly effective in decreasing the number of iterations required to draw independent samples from the Bayesian posterior distribution.
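
To make the second part more concrete, here is a toy sketch of the kind of prior-preserving MCMC perturbation discussed above: a gradual-deformation-style proposal on a Gaussian random field, combined with a made-up linear forward problem. It only illustrates the mechanics; the thesis' actual sequential geostatistical resampling and crosshole georadar forward modeling are far more involved, and every number below is invented.

```python
import numpy as np

# Toy sketch of MCMC inversion with a prior-preserving, gradual-deformation-style
# proposal: m_prop = m * cos(theta) + z * sin(theta), with z drawn from the
# Gaussian prior. Because the proposal preserves the prior, the Metropolis
# acceptance ratio reduces to the likelihood ratio. Grid size, covariance,
# forward operator and noise level are all invented for illustration.

rng = np.random.default_rng(42)
n = 50                                                    # number of model cells
x = np.arange(n)
cov = np.exp(-np.abs(x[:, None] - x[None, :]) / 10.0)     # exponential covariance
L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))

def draw_prior():
    return L @ rng.standard_normal(n)

G = rng.standard_normal((10, n)) / np.sqrt(n)             # hypothetical linear forward operator
m_true = draw_prior()
sigma = 0.05
d_obs = G @ m_true + sigma * rng.standard_normal(10)

def log_like(m):
    r = d_obs - G @ m
    return -0.5 * np.sum(r ** 2) / sigma ** 2

theta = 0.2                                               # perturbation strength
m = draw_prior()
ll = log_like(m)
accepted = 0
for it in range(5000):
    z = draw_prior()
    m_prop = m * np.cos(theta) + z * np.sin(theta)
    ll_prop = log_like(m_prop)
    if np.log(rng.random()) < ll_prop - ll:               # likelihood-ratio Metropolis step
        m, ll, accepted = m_prop, ll_prop, accepted + 1

print(f"acceptance rate: {accepted / 5000:.2f}")
```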

Relevance:

90.00%

Publisher:

Abstract:

Gliomas are routinely graded according to histopathological criteria established by the World Health Organization. Although this classification can be used to explain some of the variance in the clinical outcome of patients, there is still substantial heterogeneity within and between lesions of the same grade. This study evaluated image-guided tissue samples acquired from a large cohort of patients presenting with either new or recurrent gliomas of grades II-IV using ex vivo proton high-resolution magic angle spinning spectroscopy. The quantification of metabolite levels revealed several discrete profiles associated with primary glioma subtypes, as well as with secondary subtypes that had undergone transformation to a higher grade at the time of recurrence. Statistical modeling further demonstrated that these metabolomic profiles could be differentially classified with respect to pathological grading and inter-grade conversions. Importantly, the myo-inositol to total choline index allowed a separation of recurrent low-grade gliomas on different pathological trajectories, a heightened ratio of phosphocholine to glycerophosphocholine uniformly characterized several forms of glioblastoma multiforme, and the onco-metabolite D-2-hydroxyglutarate was shown to help distinguish secondary from primary grade IV glioma, as well as grade II and III from grade IV glioma. These data provide evidence that metabolite levels are of interest in the assessment of both intra-grade and intra-lesional malignancy. Such information could be used to enhance the diagnostic specificity of in vivo spectroscopy and to aid in the selection of the most appropriate therapy for individual patients.
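
As a small illustration of the two indices highlighted above, the sketch below computes a myo-inositol to total-choline index and a phosphocholine to glycerophosphocholine ratio from a dictionary of quantified metabolite levels. The numbers are invented, and the definition of "total choline" as the sum of the three choline-containing species is an assumption on my part.

```python
# Sketch: metabolite indices mentioned in the abstract, computed from
# hypothetical quantified levels (arbitrary units). "Total choline" is assumed
# here to be free choline + phosphocholine + glycerophosphocholine.

def indices(m: dict) -> dict:
    total_choline = m["Cho"] + m["PC"] + m["GPC"]
    return {
        "mI/tCho": m["mI"] / total_choline,
        "PC/GPC": m["PC"] / m["GPC"],
    }

sample = {"mI": 6.2, "Cho": 0.9, "PC": 2.4, "GPC": 1.1}
print(indices(sample))
```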

Relevance:

90.00%

Publisher:

Abstract:

Despite the central role of quantitative PCR (qPCR) in the quantification of mRNA transcripts, most analyses of qPCR data are still delegated to the software that comes with the qPCR apparatus. This is especially true for the handling of the fluorescence baseline. This article shows that baseline estimation errors are directly reflected in the observed PCR efficiency values and are thus propagated exponentially in the estimated starting concentrations as well as in 'fold-difference' results. Because of the unknown origin and kinetics of the baseline fluorescence, the fluorescence values monitored in the initial cycles of the PCR reaction cannot be used to estimate a useful baseline value. An algorithm that estimates the baseline by reconstructing the log-linear phase downward from the early plateau phase of the PCR reaction was developed and shown to lead to highly reproducible PCR efficiency values. PCR efficiency values were determined per sample by fitting a regression line to a subset of data points in the log-linear phase. Both the variability and the bias in qPCR results were significantly reduced when the mean of these PCR efficiencies per amplicon was used to calculate the estimated starting concentration of each sample.
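
The calculation implied by the abstract can be sketched as follows: after baseline subtraction, a regression of log(fluorescence) against cycle number over the log-linear window yields the PCR efficiency, and the starting concentration is then estimated as N0 = F_q / E^Cq. The code below is a generic illustration with synthetic data; it does not reproduce the published baseline-reconstruction algorithm itself.

```python
import numpy as np

# Generic sketch of per-sample qPCR efficiency estimation and starting-
# concentration calculation (N0 = F_q / E**Cq). Synthetic data; the published
# baseline-reconstruction algorithm itself is not reproduced here.

def efficiency_and_n0(fluorescence, baseline, window, f_threshold):
    """Fit log-linear cycles after baseline subtraction; return (E, N0, Cq)."""
    cycles = np.arange(1, len(fluorescence) + 1)
    corrected = np.asarray(fluorescence) - baseline
    sel = slice(*window)
    slope, intercept = np.polyfit(cycles[sel], np.log10(corrected[sel]), 1)
    eff = 10.0 ** slope                       # amplification efficiency (between 1 and 2)
    cq = (np.log10(f_threshold) - intercept) / slope
    n0 = f_threshold / eff ** cq              # starting fluorescence-equivalent
    return eff, n0, cq

# Synthetic amplification curve: N0 = 1e-6, E = 1.9, baseline = 0.05, plateau at 5.
cycles = np.arange(1, 41)
signal = np.minimum(1e-6 * 1.9 ** cycles, 5.0) + 0.05
print(efficiency_and_n0(signal, baseline=0.05, window=(10, 22), f_threshold=0.1))
# -> recovers E = 1.9 and N0 = 1e-6 for this noise-free example
```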

Relevance:

90.00%

Publisher:

Abstract:

Objectives: Magnetic resonance (MR) imaging and spectroscopy (MRS) allow the anatomical evolution and neurochemical profile of ischemic lesions to be established. The aim of the present study was to identify markers of reversible and irreversible damage by comparing the effects of 10-min middle cerebral artery occlusion (MCAO), mimicking a transient ischemic attack, with the effects of 30-min MCAO, which induces a striatal lesion. Methods: ICR-CD1 mice were subjected to 10-min (n = 11) or 30-min (n = 9) endoluminal MCAO by the filament technique at 0 h. Regional cerebral blood flow (CBF) was monitored in all animals by laser-Doppler flowmetry with a flexible probe fixed on the skull, with CBF < 20% of baseline during ischemia and > 70% during reperfusion. All MR studies were carried out in a horizontal 14.1 T magnet. Fast spin-echo images with T2-weighted parameters were acquired to localize the volume of interest and evaluate the lesion size. Immediately after adjustment of field inhomogeneities, localized 1H MRS was applied to obtain the neurochemical profile of the striatum (6 to 8 microliters). Six animals (sham group) underwent nearly identical procedures without MCAO. Results: The 10-min MCAO induced no MR- or histologically detectable lesion in most of the mice and a small lesion in some of them. We thus had two groups with the same duration of ischemia but a different outcome, which could be compared to sham-operated mice and to more severely ischemic mice (30-min MCAO). A lactate increase, a hallmark of ischemic insult, was detected significantly only after 30-min MCAO, whereas at 3 h post ischemia glutamine was increased in all ischemic mice independently of duration and outcome. In contrast, glutamate and, even more so, N-acetyl-aspartate decreased only in those mice exhibiting visible lesions on T2-weighted images at 24 h. Conclusions: These results suggest that an increased glutamine/glutamate ratio is a sensitive marker indicating the presence of an excitotoxic insult. Glutamate and NAA, on the other hand, appear to predict permanent neuronal damage. Thus, as early as 3 h post ischemia, it is possible to identify metabolic markers that indicate the presence of a mild ischemic insult as well as the lesion outcome at 24 h.