Abstract:
The success of matrix-assisted laser desorption/ionisation (MALDI) in fields such as proteomics has been due in part, though not exclusively, to the development of improved data acquisition and sample preparation techniques. These have been required to overcome some of the shortcomings of the commonly used solid-state MALDI matrices such as α-cyano-4-hydroxycinnamic acid (CHCA) and 2,5-dihydroxybenzoic acid (DHB). Solid-state matrices form crystalline samples with highly inhomogeneous topography and morphology, which results in large fluctuations in analyte signal intensity from spot to spot and between positions within a spot. As a consequence, efficient tuning of the mass spectrometer is hindered and the use of MALDI MS for quantitative measurements is severely impeded. Recently, new MALDI liquid matrices have been introduced which promise to be an effective alternative to crystalline matrices. Generally, the liquid matrices comprise either ionic liquid matrices (ILMs) or a usually viscous liquid matrix doped with a UV light-absorbing chromophore [1-3]. Their advantages are that the droplet surface is smooth and relatively uniform, with the analyte homogeneously distributed within it, and that a sampling position replenishes itself between shots, negating the need to search for sample hot-spots. The liquid nature of the matrix also allows additional additives to be used to change the environment to which the analyte is added.
Abstract:
Quantitative analysis by mass spectrometry (MS) is a major challenge in proteomics, as the correlation between analyte concentration and signal intensity is often poor due to varying ionisation efficiencies in the presence of molecular competitors. However, relative quantitation methods that utilise differential stable isotope labelling and mass spectrometric detection are available. Many drawbacks inherent to chemical labelling methods (ICAT, iTRAQ) can be overcome by metabolic labelling with amino acids containing stable isotopes (e.g. 13C and/or 15N) in methods such as Stable Isotope Labelling with Amino acids in Cell culture (SILAC). SILAC has also been used for labelling proteins in plant cell cultures (1) but is not suitable for whole-plant labelling: plants are usually autotrophic (fixing carbon from atmospheric CO2), so labelling with carbon isotopes is impractical. In addition, SILAC is expensive. Recently, Arabidopsis cell cultures were labelled with 15N in a medium containing nitrate as the sole nitrogen source, which was shown to be suitable for quantifying proteins and nitrogen-containing metabolites from this cell culture (2,3). Labelling whole plants, however, offers the advantage of quantitatively studying the response of a whole multicellular organism, or of multi-organism systems, to stimulation or disease at the molecular level. Furthermore, plant metabolism enables the use of inexpensive labelling media without introducing additional stress to the organism, and hydroponics is ideal for undertaking metabolic labelling under extremely well-controlled conditions. We demonstrate the suitability of metabolic 15N hydroponic isotope labelling of entire plants (HILEP) for relative quantitative proteomic analysis by mass spectrometry. To evaluate this methodology, Arabidopsis plants were grown hydroponically in 14N and 15N media and subjected to oxidative stress.
Abstract:
There are several advantages to using metabolic labeling in quantitative proteomics. The early pooling of samples, compared to post-labeling methods, eliminates errors from differences in sample processing, protein extraction and enzymatic digestion. Metabolic labeling is also highly efficient and relatively inexpensive compared to commercial labeling reagents. However, methods for multiplexed quantitation in the MS domain (or ‘non-isobaric’ methods) suffer from signal dilution at higher degrees of multiplexing, as the MS/MS signal available for peptide identification is lower for the same amount of peptide loaded onto the column or injected into the mass spectrometer. This may partly be overcome by mixing the samples at non-uniform ratios, for instance by increasing the fraction of unlabeled proteins. We have developed an algorithm for arbitrary degrees of non-isobaric multiplexing for relative protein abundance measurements. We have used metabolic labeling with different levels of 15N, but the algorithm is in principle applicable to any isotope or combination of isotopes. Ion trap mass spectrometers are fast and well suited to LC-MS/MS and peptide identification, but they cannot resolve overlapping isotopic envelopes from different peptides, which makes them less suitable for MS-based quantitation. Fourier-transform ion cyclotron resonance (FTICR) mass spectrometry is less suitable for LC-MS/MS but provides the resolving power required to resolve overlapping isotopic envelopes. We therefore combined ion trap LC-MS/MS for peptide identification with FTICR LC-MS for quantitation, using chromatographic alignment to link the two. We applied the method in a heat-shock study in a plant model system (A. thaliana) and compared the results with gene expression data from similar experiments in the literature.
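A minimal sketch of the kind of MS-domain deconvolution such an algorithm performs, assuming the theoretical isotopic envelope of an identified peptide can be predicted at each 15N enrichment level (the envelope values, enrichment levels and mixing ratios below are illustrative toys, not the authors' implementation): the observed high-resolution envelope is fitted as a non-negative linear combination of the per-channel envelopes, and the fitted coefficients give the relative abundances.

```python
# Hedged sketch: relative quantitation of one peptide across three metabolic-
# labelling channels via non-negative least squares (NNLS). Assumes each
# channel's theoretical isotopic envelope is known on a common m/z grid.
import numpy as np
from scipy.optimize import nnls

# Columns: theoretical envelopes for e.g. 0%, 50% and 96% 15N enrichment
# (toy numbers, each normalised to unit sum); rows: shared m/z grid points.
envelopes = np.array([
    [0.60, 0.05, 0.00],
    [0.30, 0.20, 0.00],
    [0.10, 0.45, 0.05],
    [0.00, 0.25, 0.35],
    [0.00, 0.05, 0.45],
    [0.00, 0.00, 0.15],
])

# Simulated observed (overlapping) envelope from the high-resolution survey scan.
true_mix = np.array([0.5, 0.3, 0.2])
observed = 1000 * envelopes @ true_mix + np.random.default_rng(0).normal(0, 5, 6)

coeffs, _ = nnls(envelopes, observed)   # non-negative per-channel amounts
ratios = coeffs / coeffs.sum()          # relative abundances across channels
print("relative abundances per channel:", np.round(ratios, 3))
```

In the workflow described above, the peptide identity would come from the ion trap LC-MS/MS run and be transferred to the FTICR LC-MS run by chromatographic alignment before this fitting step.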
Abstract:
An important goal in computational neuroanatomy is the complete and accurate simulation of neuronal morphology. We are developing computational tools to model three-dimensional dendritic structures based on sets of stochastic rules. This paper reports an extensive, quantitative anatomical characterization of simulated motoneurons and Purkinje cells. We used several local and global algorithms implemented in the L-Neuron and ArborVitae programs to generate sets of virtual neurons. Parameter statistics for all algorithms were measured from experimental data, thus providing a compact and consistent description of these morphological classes. We compared the emergent anatomical features of each group of virtual neurons with those of the experimental database in order to gain insight into the plausibility of the model assumptions, potential improvements to the algorithms, and non-trivial relations among morphological parameters. Algorithms based mainly on local constraints (e.g., branch diameter) were successful in reproducing many morphological properties of both motoneurons and Purkinje cells (e.g., total length, asymmetry, number of bifurcations). The addition of global constraints (e.g., trophic factors) improved the angle-dependent emergent characteristics (average Euclidean distance from the soma to the dendritic terminations, dendritic spread). Virtual neurons systematically displayed greater anatomical variability than real cells, suggesting the need for additional constraints in the models. For several emergent anatomical properties, a specific algorithm reproduced the experimental statistics better than the others did. However, relative performances were often reversed for different anatomical properties and/or morphological classes. Thus, combining the strengths of alternative generative models could lead to comprehensive algorithms for the complete and accurate simulation of dendritic morphology.
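As a toy illustration of growth from local stochastic rules of the kind described above (the termination threshold, taper, branching probabilities and Rall-style exponent below are assumptions for the sketch, not parameters of L-Neuron or ArborVitae): each segment terminates, elongates, or bifurcates with diameter-dependent probability, and daughter diameters follow a power-law split of the parent diameter.

```python
# Hedged sketch: grow a dendritic tree from purely local stochastic rules.
# Illustrative rules: thinner branches terminate sooner; on bifurcation the
# daughters satisfy d_parent**E = d1**E + d2**E (Rall-style, E = 1.5 assumed).
import random

E = 1.5        # branching-power exponent (assumed)
TAPER = 0.97   # per-segment diameter taper (assumed)

def grow(diameter, depth=0, max_depth=20):
    """Return a list of (depth, diameter) segments for one subtree."""
    segments = [(depth, diameter)]
    if depth >= max_depth or diameter < 0.2:
        return segments                      # terminate thin or deep branches
    p_bifurcate = 0.25 * diameter            # thicker => more likely to branch
    if random.random() < p_bifurcate:
        split = random.uniform(0.3, 0.7)     # asymmetric split of d**E
        d1 = (split * diameter**E) ** (1 / E)
        d2 = ((1 - split) * diameter**E) ** (1 / E)
        segments += grow(d1, depth + 1, max_depth)
        segments += grow(d2, depth + 1, max_depth)
    else:
        segments += grow(diameter * TAPER, depth + 1, max_depth)
    return segments

random.seed(1)
tree = grow(diameter=2.0)
print(len(tree), "segments; max depth =", max(d for d, _ in tree))
```

Comparing distributions of emergent quantities from many such runs (total length, number of bifurcations, asymmetry) against experimental statistics is the style of validation the paper describes.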
Abstract:
Reconfigurable computing is becoming an important alternative for implementing computations. Field-programmable gate arrays (FPGAs) are the ideal integrated circuit technology for experimenting with the potential benefits of different strategies of circuit specialization by reconfiguration. The final form of the reconfiguration strategy is often non-trivial to determine. Consequently, in this paper, we examine strategies for reconfiguration and, based on our experience, propose general guidelines for the trade-offs using an area-time metric called functional density. Three experiments are set up to explore different reconfiguration strategies for FPGAs, applied to a systolic implementation of a scalar quantizer as a case study. Quantitative results for each experiment are given. The regular nature of the example means that the results can be generalized to a wide class of industry-relevant problems based on arrays.
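Functional density is an area-time cost metric; a common formulation (assumed here, not quoted from the paper) is operations per unit area-time, D = N / (A · T), where a run-time-reconfigured design must also charge its configuration time against T. A minimal sketch comparing a static design with a reconfigured one (all figures are made up for illustration):

```python
# Hedged sketch: functional density D = ops / (area * time) for a static FPGA
# design versus a run-time-reconfigured one that pays configuration overhead
# but occupies less area. All numbers are illustrative.

def functional_density(ops, area_luts, exec_time_s, config_time_s=0.0):
    """Operations per LUT-second, charging any reconfiguration time."""
    return ops / (area_luts * (exec_time_s + config_time_s))

ops = 1_000_000  # quantization operations per batch (assumed workload)
static = functional_density(ops, area_luts=4000, exec_time_s=0.010)
reconf = functional_density(ops, area_luts=1500, exec_time_s=0.012,
                            config_time_s=0.004)
print(f"static: {static:.0f} ops/(LUT*s), reconfigured: {reconf:.0f} ops/(LUT*s)")
# Specialization by reconfiguration wins only while the configuration time
# stays small relative to the execution time it amortizes over.
```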
Abstract:
Two so-called “integrated” polarimetric rate estimation techniques, ZPHI (Testud et al., 2000) and ZZDR (Illingworth and Thompson, 2005), are evaluated using 12 episodes from the year 2005 observed by the French C-band operational Trappes radar, located near Paris. The term “integrated” means that the concentration parameter of the drop size distribution is assumed to be constant over some area and the algorithms retrieve it from the polarimetric variables in that area. The evaluation is carried out under ideal conditions (no partial beam blocking, no ground-clutter contamination, no bright-band contamination, a posteriori calibration of the radar variables ZH and ZDR) using hourly rain gauges located within 60 km of the radar. Also included in the comparison, for the sake of benchmarking, is a conventional Z = 282 R^1.66 estimator, with and without attenuation correction and with and without adjustment by rain gauges, as currently done operationally at Météo France. Under those ideal conditions, the two polarimetric algorithms, which rely solely on radar data, perform as well as, if not better than, the conventional algorithms, depending on the measurement conditions (attenuation, rain rates, …), even when the latter take rain gauges into account through the adjustment scheme. ZZDR with attenuation correction is the best estimator for hourly rain gauge accumulations below 5 mm h−1 and ZPHI is the best one above that threshold. A perturbation analysis was conducted to assess the sensitivity of the various estimators to biases on ZH and ZDR, taking into account the typical accuracy and stability that can reasonably be achieved with modern operational radars (1 dB on ZH and 0.2 dB on ZDR). A +1 dB positive bias on ZH (radar too “hot”) results in a +14% overestimation of the rain rate with the conventional estimator used in this study (Z = 282 R^1.66), a −19% underestimation with ZPHI and a +23% overestimation with ZZDR. Additionally, a +0.2 dB positive bias on ZDR results in a typical rain-rate underestimation of 15% by ZZDR.
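The quoted sensitivity of the conventional estimator follows from simple power-law propagation (a sketch, assuming the bias acts multiplicatively on the linear reflectivity and ignoring attenuation correction): inverting Z = 282 R^1.66 and applying a δ dB offset to ZH gives the relative rain-rate error.

```latex
% Bias propagation through the conventional Z-R power law (sketch).
% Z = a R^b with a = 282, b = 1.66; a bias of delta dB multiplies linear Z
% by 10^{delta/10}.
\[
  R = \left(\frac{Z}{a}\right)^{1/b},
  \qquad
  \frac{R_{\mathrm{biased}}}{R} = \left(10^{\delta/10}\right)^{1/b}
                                = 10^{\,\delta/(10b)} .
\]
% For delta = +1 dB and b = 1.66: 10^{1/16.6} approx 1.15, of the same order
% as the +14% overestimation quoted above; the residual difference reflects
% rounding and the details of the full perturbation analysis.
```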
Abstract:
There is a substantial literature which suggests that appraisals are smoothed and lag the true level of prices. This study combines a qualitative interview survey of the leading fund managers/owners in the UK and their appraisers with an empirical study of the number of appraisals which change each month within the IPD Monthly Index. The paper concentrates on how the appraisal process operates for commercial property performance measurement purposes. The survey interviews suggest that periodic appraisal services are consolidating in fewer firms and that, within these major firms, appraisers adopt different approaches to changing appraisals on a period-by-period basis, with some wanting hard transaction evidence while others act on ‘softer’ signals. The survey also indicates a seasonal effect, with greater effort and information being applied to annual and quarterly appraisals than to monthly ones. The analysis of the appraisals within the Investment Property Databank Monthly Index confirms this effect, with around 5% more appraisals being moved at each quarter day than in the other months. January and August have significantly fewer appraisal changes than other months.
Abstract:
Information on the genetic variation of plant response to elevated CO2 (e[CO2]) is needed to understand plant adaptation and to pinpoint the likely evolutionary response to future high atmospheric CO2 concentrations.
• Here, quantitative trait loci (QTL) for above- and below-ground tree growth were determined in a pedigree, an F2 hybrid of poplar (Populus trichocarpa and Populus deltoides), following season-long exposure to either current-day ambient CO2 (a[CO2]) or e[CO2] at 600 µl l−1, and genotype-by-environment interactions were investigated.
• In the F2 generation, both above- and below-ground growth showed a significant increase in e[CO2]. Three areas of the genome on linkage groups I, IX and XII were identified as important in determining the above-ground growth response to e[CO2], while a further three areas on linkage groups IV, XVI and XIX appeared important in determining the root growth response to e[CO2].
• These results quantify and identify genetic variation in response to e[CO2] and provide an insight into the genomic response to a changing environment.
Abstract:
Quantitation is an inherent requirement in comparative proteomics, and plant proteomics is no exception. Quantitative proteomics places high demands on the experimental workflow, requiring a thorough design and often a complex multi-step structure. It has to include sufficient numbers of biological and technical replicates and methods that can deliver a quantitative signal read-out. Quantitative plant proteomics in particular poses many additional challenges, but, because of the nature of plants, it also offers some potential advantages. In general, the analysis of plants has been less prominent in proteomics. Low protein concentration, difficulties in protein extraction, genome multiploidy, high Rubisco abundance in green tissue, and an absence of well-annotated and completed genome sequences are some of the main challenges in plant proteomics. The latter, however, is now changing, with several genomes emerging for model plants and crops such as potato, tomato, soybean, rice, maize and barley. This review discusses the current status of quantitative plant proteomics (MS-based and non-MS-based) and its challenges and potential. Both relative and absolute quantitation methods in plant proteomics, from DIGE to MS-based analysis after isotope labeling and label-free quantitation, are described and illustrated by published studies. In particular, we describe plant-specific quantitative methods, such as metabolic labeling methods that can take full advantage of plant metabolism and culture practices, and discuss other potential advantages and challenges that may arise from the unique properties of plants.
Abstract:
There is a substantial literature which suggests that appraisals are smoothed and lag the true level of prices. This study combines a qualitative interview survey of the leading fund managers/owners in the UK and their appraisers with an empirical study of the number of appraisals which change each month within the IPD Monthly Index. The paper concentrates on how the appraisal process operates for commercial real estate performance measurement purposes. The survey interviews suggest that periodic appraisal services are consolidating in fewer firms and that, within these major firms, appraisers adopt different approaches to changing appraisals on a period-by-period basis, with some wanting hard transaction evidence while others act on ‘softer’ signals. The survey also indicates a seasonal effect, with greater effort and information being applied to annual and quarterly appraisals than to monthly ones. The analysis of the appraisals within the IPD Monthly Index confirms this effect, with around 5% more appraisals being moved at each quarter day than in the other months. More November appraisals change than expected, which suggests that the increased information flows for the December year-end appraisals feed through into earlier appraisals, especially as client/appraiser meetings on draft December appraisals, a regular occurrence in the UK, can take place in November. January shows significantly less activity than other months, a seasonal effect following the exertions of the December appraisals.
Abstract:
In 2007, a General Observation Period (GOP) was performed within the German Priority Program on Quantitative Precipitation Forecasting (PQP). By optimizing the use of existing instrumentation, a large data set from in-situ and remote sensing instruments, with a special focus on water-cycle variables, was gathered over the full annual cycle. The area of interest covered central Europe, with increasing focus towards the Black Forest, where the Convective and Orographically-induced Precipitation Study (COPS) took place from June to August 2007. The GOP thus covers a variety of precipitation systems, relating the COPS results to a larger spatial scale. For timely use of the data, forecasts of the numerical weather prediction models COSMO-EU and COSMO-DE of the German Meteorological Service were tailored to match the observations, enabling model evaluation in a near-real-time environment. The ultimate goal is to identify and distinguish between different kinds of model deficits and to improve process understanding.