71 results for quantitative fractography
Abstract:
Pulsed Phase Thermography (PPT) has proven effective for depth retrieval of flat-bottomed holes in different materials such as plastics and aluminum. In PPT, amplitude and phase delay signatures are available following data acquisition (carried out in a similar way as in classical Pulsed Thermography), by applying a transformation algorithm such as the Fourier Transform (FT) to the thermal profiles. The authors have recently presented an extended review of PPT theory, including a new inversion technique for depth retrieval that correlates depth with the blind frequency fb (the frequency at which a defect produces enough phase contrast to be detected). An automatic defect depth retrieval algorithm has also been proposed, evidencing PPT's capabilities as a practical inversion technique. In addition, the use of normalized parameters to account for defect size variation, as well as depth retrieval from complex-shape composites (GFRP and CFRP), is currently under investigation. In this paper, steel plates containing flat-bottomed holes at different depths (from 1 to 4.5 mm) are tested by quantitative PPT. Least squares regression results show excellent agreement between depth and the inverse square root of the blind frequency, which can be used for depth inversion. Experimental results on steel plates with simulated corrosion are presented as well. It is worth noting that results are improved by performing PPT on reconstructed (synthetic) rather than on raw thermal data.
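The inversion described above, a linear relation between defect depth and the inverse square root of the blind frequency, can be sketched as a least-squares calibration. The depths and blind-frequency values below are invented for illustration and are not the paper's data:

```python
import numpy as np

# Illustrative sketch (not the authors' code): calibrate the reported
# linear relation  z = A + B / sqrt(f_b)  and use it for depth inversion.
depths_mm = np.array([1.0, 2.0, 3.0, 4.5])            # flat-bottomed hole depths
blind_freq_hz = np.array([0.60, 0.15, 0.066, 0.030])  # hypothetical f_b values

x = 1.0 / np.sqrt(blind_freq_hz)       # regressor: f_b^(-1/2)
B, A = np.polyfit(x, depths_mm, 1)     # least-squares slope and intercept

def invert_depth(fb_hz):
    """Estimate defect depth (mm) from a measured blind frequency (Hz)."""
    return A + B / np.sqrt(fb_hz)
```

With real data, the quality of the fit (R^2) would indicate how well the inverse-square-root relation holds for the material under test.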
Abstract:
Prebiotics are nondigestible carbohydrates that beneficially affect the host by selectively stimulating the growth and/or activity of one, or a limited number of, bacteria present in the colon. The selected genera should have the capacity to improve host health (e.g. Bifidobacterium, Lactobacillus). To help identify preferred types for inclusion in the diet, a quantitative equation [measure of the prebiotic effect (MPE)] is suggested. This will help evaluate, in vitro, the fermentation of dietary carbohydrates and compare their prebiotic effects. Although the approach is not meant to define health values, it is formulated to better inform the choice of prebiotic. It therefore compares measurements of bacterial changes through the determination of maximum growth rates of predominant groups present in faeces, the rate of substrate assimilation and the production of lactic, acetic, propionic and butyric acids. The equation will allow further in vitro comparisons of MPE, leading towards further studies (e.g. in humans) to determine the success of dietary intervention. (C) 2004 Federation of European Microbiological Societies. Published by Elsevier B.V. All rights reserved.
Abstract:
Fluorophos and colourimetric procedures for alkaline phosphatase (ALP) testing were compared using milk with raw milk additions, purified bovine ALP additions and heat treatments. Repeatability was between 0.9% and 10.1% for Fluorophos, 3.5% and 46.1% for the Aschaffenburg and Mullen (A&M) procedure and 4.4% and 8.8% for the Scharer rapid test. Linearity (R-2) using raw milk addition was 0.96 between Fluorophos and the Scharer procedure. Between the Fluorophos and the A&M procedures, R-2 values were 0.98, 0.99 and 0.98 for raw milk additions, bovine ALP additions and heat treatments respectively. Fluorophos showed greater sensitivity and was both faster and simpler to perform.
Abstract:
Aims: To develop a quantitative equation [prebiotic index (PI)] to aid the analysis of prebiotic fermentation of commercially available and novel prebiotic carbohydrates in vitro, using previously published fermentation data. Methods: The PI equation is based on the changes in key bacterial groups during fermentation. The bacterial groups incorporated into this PI equation were bifidobacteria, lactobacilli, clostridia and bacteroides. The changes in these bacterial groups from previous studies were entered into the PI equation in order to determine a quantitative PI score. PI scores were then compared with the qualitative conclusions made in these publications. In general, the PI scores agreed with the qualitative conclusions drawn and provided a quantitative measure. Conclusions: The PI allows the magnitude of prebiotic effects to be quantified rather than evaluations being solely qualitative. Significance and Impact of the Study: The PI equation may be of great use in quantifying prebiotic effects in vitro. It is expected that this will facilitate more rational food product development and the development of more potent prebiotics with activity at lower doses.
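A PI-style score of the kind described, beneficial groups (bifidobacteria, lactobacilli) counted positively and clostridia/bacteroides negatively, each change normalised against total bacterial growth, could be sketched as below. The exact published equation, the weighting and all counts here are illustrative assumptions, not taken from the abstract:

```python
# Illustrative sketch only: a PI-style score with beneficial genera counted
# positively and clostridia/bacteroides negatively, each group's fold change
# normalised by the total-bacteria fold change. Not the published equation.

def pi_score(counts_0h, counts_t, total_0h, total_t):
    """counts_*: dicts of log10 cells/mL per bacterial group at 0 h and t."""
    total = total_t / total_0h
    def term(group):
        return (counts_t[group] / counts_0h[group]) / total
    return (term("bifidobacteria") + term("lactobacilli")
            - term("clostridia") - term("bacteroides"))

before = {"bifidobacteria": 8.0, "lactobacilli": 7.2,
          "clostridia": 7.5, "bacteroides": 9.0}
after = {"bifidobacteria": 9.1, "lactobacilli": 7.9,
         "clostridia": 7.4, "bacteroides": 9.0}
score = pi_score(before, after, total_0h=9.3, total_t=9.8)
# score > 0 would indicate a shift toward the beneficial groups
```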
Abstract:
An increasing number of neuroscience experiments are using virtual reality to provide a more immersive and less artificial experimental environment. This is particularly useful to navigation and three-dimensional scene perception experiments. Such experiments require accurate real-time tracking of the observer's head in order to render the virtual scene. Here, we present data on the accuracy of a commonly used six degrees of freedom tracker (Intersense IS900) when it is moved in ways typical of virtual reality applications. We compared the reported location of the tracker with its location computed by an optical tracking method. When the tracker was stationary, the root mean square error in spatial accuracy was 0.64 mm. However, we found that errors increased over ten-fold (up to 17 mm) when the tracker moved at speeds common in virtual reality applications. We demonstrate that the errors we report here are predominantly due to inaccuracies of the IS900 system rather than the optical tracking against which it was compared. (c) 2006 Elsevier B.V. All rights reserved.
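The accuracy metric used above, root-mean-square error between tracker-reported positions and an optical reference, is straightforward to compute. The coordinates below are invented for illustration:

```python
import numpy as np

# Sketch of the accuracy metric described: RMS error between tracker-reported
# 3-D positions and a reference (optical) trajectory, in the same units (mm).

def rms_error(reported, reference):
    """reported, reference: (N, 3) arrays of positions in mm."""
    d = np.linalg.norm(reported - reference, axis=1)  # per-sample distance
    return np.sqrt(np.mean(d ** 2))

ref = np.zeros((4, 3))
rep = np.array([[0.5, 0.0, 0.0],
                [0.0, 0.5, 0.0],
                [0.0, 0.0, 0.5],
                [0.5, 0.5, 0.5]])
err_mm = rms_error(rep, ref)
```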
Abstract:
The self-consistent field theory (SCFT) prediction for the compression force between two semi-dilute polymer brushes is compared to the benchmark experiments of Taunton et al. [Nature, 1988, 332, 712]. The comparison is done with previously established parameters, and without any fitting parameters whatsoever. The SCFT provides a significant quantitative improvement over the classical strong-stretching theory (SST), yielding excellent quantitative agreement with the experiment. Contrary to earlier suggestions, chain fluctuations cannot be ignored for normal experimental conditions. Although the analytical expressions of SST provide invaluable aids to understanding the qualitative behavior of polymeric brushes, the numerical SCFT is necessary in order to provide quantitatively accurate predictions.
Abstract:
The success of matrix-assisted laser desorption/ionisation (MALDI) in fields such as proteomics has partially, but not exclusively, been due to the development of improved data acquisition and sample preparation techniques. This has been required to overcome some of the shortcomings of the commonly used solid-state MALDI matrices such as α-cyano-4-hydroxycinnamic acid (CHCA) and 2,5-dihydroxybenzoic acid (DHB). Solid-state matrices form crystalline samples with highly inhomogeneous topography and morphology, which results in large fluctuations in analyte signal intensity from spot to spot and between positions within a spot. This means that efficient tuning of the mass spectrometer can be impeded and the use of MALDI MS for quantitative measurements is severely limited. Recently, new MALDI liquid matrices have been introduced which promise to be an effective alternative to crystalline matrices. Generally, the liquid matrices comprise either ionic liquid matrices (ILMs) or a usually viscous liquid matrix which is doped with a UV light-absorbing chromophore [1-3]. The advantages are that the droplet surface is smooth and relatively uniform, with the analyte homogeneously distributed within. They have the ability to replenish a sampling position between shots, negating the need to search for sample hot-spots. Also, the liquid nature of the matrix allows for the use of additional additives to change the environment to which the analyte is added.
Abstract:
Quantitative analysis by mass spectrometry (MS) is a major challenge in proteomics as the correlation between analyte concentration and signal intensity is often poor due to varying ionisation efficiencies in the presence of molecular competitors. However, relative quantitation methods that utilise differential stable isotope labelling and mass spectrometric detection are available. Many drawbacks inherent to chemical labelling methods (ICAT, iTRAQ) can be overcome by metabolic labelling with amino acids containing stable isotopes (e.g. 13C and/or 15N) in methods such as Stable Isotope Labelling with Amino acids in Cell culture (SILAC). SILAC has also been used for labelling of proteins in plant cell cultures (1) but is not suitable for whole plant labelling. Plants are usually autotrophic (fixing carbon from atmospheric CO2) and, thus, labelling with carbon isotopes becomes impractical. In addition, SILAC is expensive. Recently, Arabidopsis cell cultures were labelled with 15N in a medium containing nitrate as sole nitrogen source. This was shown to be suitable for quantifying proteins and nitrogen-containing metabolites from this cell culture (2,3). Labelling whole plants, however, offers the advantage of studying quantitatively the response to stimulation or disease of a whole multicellular organism or multi-organism systems at the molecular level. Furthermore, plant metabolism enables the use of inexpensive labelling media without introducing additional stress to the organism. And finally, hydroponics is ideal to undertake metabolic labelling under extremely well-controlled conditions. We demonstrate the suitability of metabolic 15N hydroponic isotope labelling of entire plants (HILEP) for relative quantitative proteomic analysis by mass spectrometry. To evaluate this methodology, Arabidopsis plants were grown hydroponically in 14N and 15N media and subjected to oxidative stress.
Abstract:
There are several advantages to using metabolic labeling in quantitative proteomics. The early pooling of samples, compared to post-labeling methods, eliminates errors from different sample processing, protein extraction and enzymatic digestion. Metabolic labeling is also highly efficient and relatively inexpensive compared to commercial labeling reagents. However, methods for multiplexed quantitation in the MS domain (or ‘non-isobaric’ methods) suffer from signal dilution at higher degrees of multiplexing, as the MS/MS signal for peptide identification is lower given the same amount of peptide loaded onto the column or injected into the mass spectrometer. This may partly be overcome by mixing the samples at non-uniform ratios, for instance by increasing the fraction of unlabeled proteins. We have developed an algorithm for arbitrary degrees of non-isobaric multiplexing for relative protein abundance measurements. We have used metabolic labeling with different levels of 15N, but the algorithm is in principle applicable to any isotope or combination of isotopes. Ion trap mass spectrometers are fast and suitable for LC-MS/MS and peptide identification. However, they cannot resolve overlapping isotopic envelopes from different peptides, which makes them less suitable for MS-based quantitation. Fourier-transform ion cyclotron resonance (FTICR) mass spectrometry is less suitable for LC-MS/MS, but provides the resolving power required to resolve overlapping isotopic envelopes. We therefore combined ion trap LC-MS/MS for peptide identification with FTICR LC-MS for quantitation using chromatographic alignment. We applied the method in a heat shock study in a plant model system (A. thaliana) and compared the results with gene expression data from similar experiments in the literature.
Abstract:
An important goal in computational neuroanatomy is the complete and accurate simulation of neuronal morphology. We are developing computational tools to model three-dimensional dendritic structures based on sets of stochastic rules. This paper reports an extensive, quantitative anatomical characterization of simulated motoneurons and Purkinje cells. We used several local and global algorithms implemented in the L-Neuron and ArborVitae programs to generate sets of virtual neurons. Parameter statistics for all algorithms were measured from experimental data, thus providing a compact and consistent description of these morphological classes. We compared the emergent anatomical features of each group of virtual neurons with those of the experimental database in order to gain insights into the plausibility of the model assumptions, potential improvements to the algorithms, and non-trivial relations among morphological parameters. Algorithms mainly based on local constraints (e.g., branch diameter) were successful in reproducing many morphological properties of both motoneurons and Purkinje cells (e.g. total length, asymmetry, number of bifurcations). The addition of global constraints (e.g., trophic factors) improved the angle-dependent emergent characteristics (average Euclidean distance from the soma to the dendritic terminations, dendritic spread). Virtual neurons systematically displayed greater anatomical variability than real cells, suggesting the need for additional constraints in the models. For several emergent anatomical properties, a specific algorithm reproduced the experimental statistics better than the others did. However, relative performances were often reversed for different anatomical properties and/or morphological classes. Thus, combining the strengths of alternative generative models could lead to comprehensive algorithms for the complete and accurate simulation of dendritic morphology.
Abstract:
Reconfigurable computing is becoming an important new alternative for implementing computations. Field programmable gate arrays (FPGAs) are the ideal integrated circuit technology to experiment with the potential benefits of using different strategies of circuit specialization by reconfiguration. The final form of the reconfiguration strategy is often non-trivial to determine. Consequently, in this paper, we examine strategies for reconfiguration and, based on our experience, propose general guidelines for the tradeoffs using an area-time metric called functional density. Three experiments are set up to explore different reconfiguration strategies for FPGAs applied to a systolic implementation of a scalar quantizer used as a case study. Quantitative results for each experiment are given. The regular nature of the example means that the results can be generalized to a wide class of industry-relevant problems based on arrays.
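The area-time metric mentioned, functional density, is commonly defined as operations completed per unit area per unit time, with reconfiguration time charged against useful work. A minimal sketch, with all numbers illustrative rather than taken from the abstract:

```python
# Sketch of a functional-density comparison: operations per (area x time).
# For a run-time reconfigured design, the configuration time is added to the
# compute time, so specialization pays off only if the area saving outweighs
# the reconfiguration overhead. All figures below are made up.

def functional_density(ops, area_units, compute_time_s, config_time_s=0.0):
    return ops / (area_units * (compute_time_s + config_time_s))

static = functional_density(ops=1e6, area_units=200.0, compute_time_s=1.0)
reconf = functional_density(ops=1e6, area_units=80.0,
                            compute_time_s=1.2, config_time_s=0.3)
# Comparing `static` and `reconf` shows which strategy wins for these numbers.
```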
Abstract:
Two so-called “integrated” polarimetric rate estimation techniques, ZPHI (Testud et al., 2000) and ZZDR (Illingworth and Thompson, 2005), are evaluated using 12 episodes of the year 2005 observed by the French C-band operational Trappes radar, located near Paris. The term “integrated” means that the concentration parameter of the drop size distribution is assumed to be constant over some area and the algorithms retrieve it using the polarimetric variables in that area. The evaluation is carried out in ideal conditions (no partial beam blocking, no ground-clutter contamination, no bright band contamination, a posteriori calibration of the radar variables ZH and ZDR) using hourly rain gauges located at distances less than 60 km from the radar. Also included in the comparison, for the sake of benchmarking, is a conventional Z = 282R^1.66 estimator, with and without attenuation correction and with and without adjustment by rain gauges as currently done operationally at Météo France. Under those ideal conditions, the two polarimetric algorithms, which rely solely on radar data, appear to perform as well as, if not better than, the conventional algorithms, depending on the measurement conditions (attenuation, rain rates, …), even when the latter take rain gauges into account through the adjustment scheme. ZZDR with attenuation correction is the best estimator for hourly rain gauge accumulations lower than 5 mm h−1 and ZPHI is the best one above that threshold. A perturbation analysis has been conducted to assess the sensitivity of the various estimators with respect to biases on ZH and ZDR, taking into account the typical accuracy and stability that can be reasonably achieved with modern operational radars these days (1 dB on ZH and 0.2 dB on ZDR). A +1 dB positive bias on ZH (radar too hot) results in a +14% overestimation of the rain rate with the conventional estimator used in this study (Z = 282R^1.66), a -19% underestimation with ZPHI and a +23% overestimation with ZZDR.
Additionally, a +0.2 dB positive bias on ZDR results in a typical rain rate underestimation of 15% by ZZDR.
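For the conventional estimator, the sensitivity to a ZH bias follows directly from inverting the Z-R relation: a bias of b_dB decibels multiplies linear Z by 10^(b_dB/10), so the retrieved rain rate is scaled by 10^(b_dB/(10·1.66)). A sketch (the polarimetric estimators ZPHI and ZZDR are not modelled here):

```python
# Sketch of the ZH-bias sensitivity of the conventional Z = 282 R^1.66
# estimator. A dB bias on ZH multiplies linear Z by 10^(bias/10), so the
# retrieved rain rate is scaled by 10^(bias / (10 * b)) with b = 1.66.

def rain_rate_mm_h(z_dbz, a=282.0, b=1.66):
    """Invert Z = a * R^b, with Z given in dBZ."""
    z_linear = 10.0 ** (z_dbz / 10.0)
    return (z_linear / a) ** (1.0 / b)

def zh_bias_error_pct(bias_db, b=1.66):
    """Relative rain-rate error (%) caused by a ZH bias in dB."""
    return (10.0 ** (bias_db / (10.0 * b)) - 1.0) * 100.0

overestimate = zh_bias_error_pct(+1.0)  # about +15%, the order of the +14% reported
```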