10 results for Normalisation

in CentAUR: Central Archive University of Reading - UK


Relevance:

10.00%

Publisher:

Abstract:

Laboratory-determined mineral weathering rates need to be normalised to allow their extrapolation to natural systems. The principal normalisation terms used in the literature are mass, geometric specific surface area and BET specific surface area (SSA). The purpose of this study was to determine how dissolution rates normalised to these terms vary with grain size. Different size fractions of anorthite and biotite, ranging from 180–150 μm to 20–10 μm, were dissolved in pH 3 HCl at 25 °C in flow-through reactors under far-from-equilibrium conditions. Steady-state dissolution rates after 5376 h (anorthite) and 4992 h (biotite) were calculated from Si concentrations and were normalised to initial and final mass and to geometric, geometric-edge (biotite) and BET SSA. For anorthite, rates normalised to initial and final BET SSA ranged from 0.33 to 2.77 × 10⁻¹⁰ mol(feldspar) m⁻² s⁻¹, rates normalised to initial and final geometric SSA ranged from 5.74 to 8.88 × 10⁻¹⁰ mol(feldspar) m⁻² s⁻¹, and rates normalised to initial and final mass ranged from 0.11 to 1.65 mol(feldspar) g⁻¹ s⁻¹. For biotite, rates normalised to initial and final BET SSA ranged from 1.02 to 2.03 × 10⁻¹² mol(biotite) m⁻² s⁻¹, rates normalised to initial and final geometric SSA ranged from 3.26 to 16.21 × 10⁻¹² mol(biotite) m⁻² s⁻¹, rates normalised to initial and final geometric-edge SSA ranged from 59.46 to 111.32 × 10⁻¹² mol(biotite) m⁻² s⁻¹, and rates normalised to initial and final mass ranged from 0.81 to 6.93 × 10⁻¹² mol(biotite) g⁻¹ s⁻¹. For all normalising terms, rates varied significantly (p ≤ 0.05) with grain size. The normalising terms which gave the least variation in dissolution rate between grain sizes for anorthite were initial BET SSA and initial and final geometric SSA. This is consistent with: (1) dissolution being dominated by the slower-dissolving but area-dominant non-etched surfaces of the grains and (2) the walls of etch pits and other dissolution features being relatively unreactive. These steady-state normalised dissolution rates are likely to be constant with time. Normalisation to final BET SSA did not give constant rates across grain sizes because of a non-uniform distribution of dissolution features: after dissolution, coarser grains had a greater density of dissolution features with BET-measurable but unreactive wall surface area than the finer grains. The normalising term which gave the least variation in dissolution rates between grain sizes for biotite was initial BET SSA. Initial and final geometric-edge SSA and final BET SSA gave the next least varied rates. The basal surfaces dissolved sufficiently rapidly to influence the bulk dissolution rate and so prevented geometric-edge-SSA-normalised dissolution rates from showing the least variation. Simple modelling indicated that biotite grain edges dissolved 71–132 times faster than basal surfaces. In this experiment, initial BET SSA best integrated the different areas and reactivities of the edge and basal surfaces of biotite. Steady-state dissolution rates are likely to vary with time as dissolution alters the ratio of edge to basal surface area; they would therefore be more properly termed pseudo-steady-state rates, only appearing constant because the time period over which they were measured (1512 h) was less than the time period over which they would change significantly.
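As a concrete illustration of the normalising terms compared above, the sketch below converts a steady-state outlet Si concentration from a flow-through reactor into a dissolution rate and divides it by mass or by mass × SSA. This is a minimal sketch under assumed values, not the authors' calculation; the function name and all numbers are hypothetical.

```python
# Steady-state dissolution rate from a flow-through reactor, divided
# by a chosen normalising term. Sketch only; values are hypothetical.

def dissolution_rate(c_si, flow_rate, nu_si, mass, ssa=None):
    """Normalised steady-state dissolution rate.

    c_si      -- steady-state outlet Si concentration (mol m^-3)
    flow_rate -- reactor flow rate (m^3 s^-1)
    nu_si     -- moles of Si released per mole of mineral dissolved
    mass      -- grain mass (g)
    ssa       -- specific surface area (m^2 g^-1); if None the rate is
                 mass-normalised (mol g^-1 s^-1), otherwise
                 SSA-normalised (mol m^-2 s^-1)
    """
    mineral_flux = c_si * flow_rate / nu_si      # mol(mineral) s^-1
    norm_term = mass if ssa is None else mass * ssa
    return mineral_flux / norm_term

# Hypothetical anorthite run (nu_Si = 2 for CaAl2Si2O8), normalised
# to BET SSA:
rate_bet = dissolution_rate(c_si=1.0e-3, flow_rate=1.0e-9,
                            nu_si=2, mass=0.5, ssa=0.4)
```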

Relevance:

10.00%

Publisher:

Abstract:

Background: Transcriptomic techniques are now being applied in ecotoxicology and toxicology to measure the impact of stressors and to develop understanding of mechanisms of toxicity. Microarray technology in particular offers the potential to measure thousands of gene responses simultaneously. However, it is important that microarray responses are validated, at least initially, using real-time quantitative polymerase chain reaction (QPCR). The accurate measurement of target gene expression requires normalisation to an invariant internal control, e.g., total RNA or reference genes. Reference genes are preferable, as they control for variation inherent in the cDNA synthesis and PCR. However, reference gene expression can vary between tissues and experimental conditions, which makes it crucial to validate them prior to application. Results: We evaluated 10 candidate reference genes for QPCR in Daphnia magna following a 24 h exposure to the non-steroidal anti-inflammatory drug (NSAID) ibuprofen (IB) at 0, 20, 40 and 80 mg IB l⁻¹. Six of the 10 candidates appeared suitable for use as reference genes. As a robust approach, we used a combined normalisation factor (NF), calculated using the geNorm application, based on the geometric mean of three selected reference genes: glyceraldehyde-3-phosphate dehydrogenase, ubiquitin conjugating enzyme and actin. The effects of normalisation are illustrated using the target gene leukotriene B4 12-hydroxydehydrogenase (Ltb4dh), which was upregulated following 24 h exposure to 63–81 mg IB l⁻¹. Conclusions: As anticipated, use of the NF clarified the response of Ltb4dh in daphnids exposed to sublethal levels of ibuprofen. Our findings emphasise the importance in toxicogenomics of finding and applying invariant internal QPCR control(s) relevant to the study conditions.
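The geNorm-style normalisation factor described above is simply the geometric mean of the relative quantities of the selected reference genes. The sketch below illustrates the calculation; the expression values are hypothetical, and the real geNorm application also performs reference-gene stability ranking, which is omitted here.

```python
import math

# Geometric mean of reference-gene relative quantities, as used in a
# geNorm-style normalisation factor (NF). Values are hypothetical.
def normalisation_factor(ref_quantities):
    logs = [math.log(q) for q in ref_quantities]
    return math.exp(sum(logs) / len(logs))

# Three reference genes selected in the study: GAPDH, ubiquitin
# conjugating enzyme (UBC) and actin.
sample_refs = {"gapdh": 1.12, "ubc": 0.95, "actin": 1.04}
nf = normalisation_factor(sample_refs.values())

# Target gene (Ltb4dh) expression normalised by the NF:
ltb4dh_raw = 2.40              # hypothetical relative quantity
ltb4dh_norm = ltb4dh_raw / nf
```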

Relevance:

10.00%

Publisher:

Abstract:

The purpose of this study was to determine the incidence of prostate cancer in patients whose elevated referral prostate-specific antigen (PSA) subsequently falls to within their normal age-specific reference range prior to prostate biopsy. Of the 160 patients recruited, 21 (13%) had a repeat PSA level that had fallen back to within their normal range. Five of these 21 patients (24%) were diagnosed with prostate cancer following biopsy, two of whom had a benign prostate examination. Normalisation of the PSA level prior to biopsy therefore does not exclude the presence of prostate cancer, even when the prostate feels benign.

Relevance:

10.00%

Publisher:

Abstract:

Individuals with Williams syndrome (WS) demonstrate impaired visuo-spatial abilities in comparison to their level of verbal ability. In particular, visuo-spatial construction is an area of relative weakness. It has been hypothesised that poor or atypical location-coding abilities contribute strongly to the impairments observed on construction and drawing tasks [Farran, E. K., & Jarrold, C. (2005). Evidence for unusual spatial location coding in Williams syndrome: An explanation for the local bias in visuo-spatial construction tasks? Brain and Cognition, 59, 159-172; Hoffman, J. E., Landau, B., & Pagani, B. (2003). Spatial breakdown in spatial construction: Evidence from eye fixations in children with Williams syndrome. Cognitive Psychology, 46, 260-301]. The current experiment investigated location memory in WS. Specifically, the precision of remembered locations was measured, as well as the biases and strategies involved in remembering those locations. A developmental trajectory approach was employed: WS performance was assessed relative to the performance of typically developing (TD) children ranging from 4 to 8 years of age. Results showed differential strategy use in the WS and TD groups. WS performance was most similar to the level of a TD 4-year-old and was further impaired by the addition of physical category boundaries. Despite their low level of ability, the WS group produced a pattern of biases in performance which pointed towards evidence of a subdivision effect, as observed in older TD children and adults. In contrast, the TD children showed a different pattern of biases, which appears to be explained by a normalisation strategy. In summary, individuals with WS do not process locations in a typical manner. This may have a negative impact on their visuo-spatial construction and drawing abilities.

Relevance:

10.00%

Publisher:

Abstract:

Background: Microarray-based comparative genomic hybridisation (CGH) experiments have been used to study numerous biological problems, including understanding genome plasticity in pathogenic bacteria. Typically such experiments produce large data sets that are difficult for biologists to handle. Although there are some programmes available for interpretation of bacterial transcriptomics data and of CGH microarray data for examining genetic stability in oncogenes, there are none specifically designed to understand the mosaic nature of bacterial genomes. Consequently a bottleneck still persists in the accurate processing and mathematical analysis of these data. To address this shortfall we have produced a simple and robust CGH microarray data analysis process, which may be automated in the future, to understand bacterial genomic diversity. Results: The process involves five steps: cleaning, normalisation, estimating gene presence and absence or divergence, validation, and analysis of data from test strains against three reference strains simultaneously. Each stage of the process is described. We compared a number of methods available for characterising bacterial genomic diversity and for calculating the cut-off between gene presence and absence or divergence, and show that a simple dynamic approach using a kernel density estimator performed better than both established methods and a more sophisticated mixture-modelling technique. We also show that current methods commonly used for CGH microarray analysis in tumour and cancer cell lines are not appropriate for analysing our data. Conclusion: After carrying out the analysis and validation for three sequenced Escherichia coli strains, CGH microarray data from 19 E. coli O157 pathogenic test strains were used to demonstrate the benefits of applying this simple and robust process to CGH microarray studies using bacterial genomes.
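As an illustration of the kernel-density-estimator cut-off idea, the sketch below fits a KDE to simulated log₂ test/reference ratios, which form a "present" mode near zero and an "absent/divergent" mode at lower values, and takes the deepest interior minimum between the modes as the cut-off. The simulated data and the choice of the deepest minimum are assumptions for illustration; the paper's actual procedure may differ in detail.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Simulated log2(test/reference) ratios: a "present" mode near 0 and
# an "absent/divergent" mode at lower values (illustrative only).
rng = np.random.default_rng(0)
log_ratios = np.concatenate([rng.normal(0.0, 0.3, 3500),
                             rng.normal(-2.5, 0.5, 500)])

kde = gaussian_kde(log_ratios)
grid = np.linspace(log_ratios.min(), log_ratios.max(), 1000)
density = kde(grid)

# Take the deepest interior minimum of the estimated density as the
# dynamic presence/absence cut-off (an assumption for this sketch).
is_min = (density[1:-1] < density[:-2]) & (density[1:-1] < density[2:])
minima = grid[1:-1][is_min]
cutoff = minima[np.argmin(kde(minima))]
present = log_ratios > cutoff           # genes called present
```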

Relevance:

10.00%

Publisher:

Abstract:

Recent evidence suggests that an area in the dorsal medial prefrontal cortex (the dorsal nexus) shows dramatic increases in connectivity across a network of brain regions in depressed patients during the resting state [1]; this increase in connectivity is suggested to represent hotwiring of areas involved in disparate cognitive and emotional functions [1-3]. Sheline et al. [1] concluded that antidepressant action may involve normalisation of the elevated resting-state functional connectivity seen in depressed patients. However, the effects of conventional pharmacotherapy for depression on this resting-state functional connectivity are not known, and the effects of antidepressant treatment in depressed patients may be confounded by change in symptoms following treatment.

Relevance:

10.00%

Publisher:

Abstract:

Background: Expression microarrays are increasingly used to obtain large-scale transcriptomic information on a wide range of biological samples. Nevertheless, there is still much debate on the best ways to process data, to design experiments and to analyse the output. Furthermore, many of the more sophisticated mathematical approaches to data analysis in the literature remain inaccessible to much of the biological research community. In this study we examine ways of extracting and analysing a large data set obtained using the Agilent long oligonucleotide transcriptomics platform, applied to a set of human macrophage and dendritic cell samples. Results: We describe and validate a series of data extraction, transformation and normalisation steps which are implemented via a new R function. Analysis of replicate normalised reference data demonstrates that intra-array variability is small (only around 2% of the mean log signal), while inter-array variability from replicate array measurements has a standard deviation (SD) of around 0.5 log₂ units (6% of mean). The common practice of working with ratios of Cy5/Cy3 signal offers little further improvement in terms of reducing error. Comparison to expression data obtained using Arabidopsis samples demonstrates that the large number of genes in each sample showing a low level of transcription reflects the real complexity of the cellular transcriptome. Multidimensional scaling is used to show that the processed data identify an underlying structure which reflects some of the key biological variables defining the data set. This structure is robust, allowing reliable comparison of samples collected over a number of years and by a variety of operators. Conclusions: This study outlines a robust and easily implemented pipeline for extracting, transforming, normalising and visualising transcriptomic array data from the Agilent expression platform. The analysis is used to obtain quantitative estimates of the SD arising from experimental (non-biological) intra- and inter-array variability, and a lower threshold for determining whether an individual gene is expressed. The study provides a reliable basis for further, more extensive studies of the systems biology of eukaryotic cells.
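The sketch below illustrates the kind of transformation and normalisation steps described (log transform plus per-array median centring) and the estimation of inter-array SD. The study implements its pipeline as an R function; this generic Python stand-in on simulated intensities is only meant to show the shape of the computation, not to reproduce that function.

```python
import numpy as np

# Genes x arrays matrix of hypothetical background-corrected
# intensities (simulated; the real pipeline starts from Agilent
# feature-extraction output).
rng = np.random.default_rng(1)
signal = rng.lognormal(mean=6.0, sigma=1.5, size=(10000, 8))

# Log-transform, then median-centre each array so that replicate
# arrays sit on a common scale before any comparison.
log_signal = np.log2(signal)
normalised = log_signal - np.median(log_signal, axis=0, keepdims=True)

# Per-gene inter-array standard deviation; the study reports
# replicate-array SDs of roughly 0.5 log2 units for real data.
inter_array_sd = normalised.std(axis=1)
```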

Relevance:

10.00%

Publisher:

Abstract:

There is an ongoing debate on the environmental effects of genetically modified crops, to which this paper aims to contribute. First, data on the environmental impacts of genetically modified (GM) and conventional crops are collected from peer-reviewed journals; secondly, an analysis is conducted to examine which crop type is less harmful to the environment. Published data on environmental impacts are measured using an array of indicators, and their analysis requires normalisation and aggregation. Drawing on the composite-indicators literature, this paper builds composite indicators to measure the impact of GM and conventional crops in three dimensions: (1) non-target key species richness, (2) pesticide use, and (3) aggregated environmental impact. The comparison between the three composite indicators for both crop types allows us to establish not only a ranking of which crop type is preferable for the environment but also the probability that one crop type outperforms the other from an environmental perspective. Results show that GM crops tend to cause lower environmental impacts than conventional crops for the analysed indicators.
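A minimal sketch of the composite-indicator construction for a single indicator is shown below: raw impact values from several studies are min-max normalised to [0, 1] and averaged per crop type. The values are hypothetical, and min-max normalisation with equal weighting is an assumption for illustration; the paper builds three such composites and may use different normalisation and weighting schemes.

```python
# Min-max normalisation and equal-weight aggregation for one
# indicator (pesticide use); raw values are hypothetical impacts
# reported by three studies for each crop type (higher = worse).
pesticide_use = {"gm": [2.1, 1.8, 2.5], "conventional": [4.0, 3.2, 3.9]}

def min_max_normalise(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Normalise across all observations so both crop types share a scale.
all_obs = pesticide_use["gm"] + pesticide_use["conventional"]
norm = min_max_normalise(all_obs)
gm_norm, conv_norm = norm[:3], norm[3:]

# Composite score per crop type = mean normalised impact; a lower
# score indicates a lower environmental impact for this indicator.
composite = {"gm": sum(gm_norm) / len(gm_norm),
             "conventional": sum(conv_norm) / len(conv_norm)}
```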

Relevance:

10.00%

Publisher:

Abstract:

Anti-spoofing is attracting growing interest in biometrics, given the variety of fake materials and new means of attacking biometric recognition systems. New, unseen materials continuously challenge state-of-the-art spoofing detectors, calling for additional systematic approaches to anti-spoofing. By incorporating liveness scores into the biometric fusion process, recognition accuracy can be enhanced, but traditional sum-rule-based fusion algorithms are known to be highly sensitive to single spoofed instances. This paper investigates 1-median filtering as a spoofing-resistant generalised alternative to the sum rule, targeting the problem of partial multibiometric spoofing where m out of n biometric sources to be combined are attacked. Augmenting previous work, this paper investigates the dynamic detection and rejection of liveness-recognition pair outliers for spoofed samples in a true multi-modal configuration, with its inherent challenge of normalisation. As a further contribution, a bootstrap-aggregating (bagging) classifier for fingerprint spoof detection is presented. Experiments on the latest face video databases (Idiap Replay-Attack Database and CASIA Face Anti-Spoofing Database) and a fingerprint spoofing database (Fingerprint Liveness Detection Competition 2013) illustrate the efficiency of the proposed techniques.
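The 1-median referred to above generalises the scalar median to multiple dimensions as the geometric median, commonly computed with Weiszfeld's algorithm. The sketch below fuses hypothetical (liveness, match-score) pairs from n = 4 sources, one of which is spoofed; the point is that the geometric median is far less sensitive to that outlier than the sum rule (mean). This is an illustrative reconstruction, not the paper's implementation.

```python
import numpy as np

# Geometric median (the 1-median) of liveness/match-score pairs via
# Weiszfeld's algorithm. All scores are hypothetical.
def geometric_median(points, n_iter=100, eps=1e-9):
    points = np.asarray(points, dtype=float)
    median = points.mean(axis=0)            # start at the centroid
    for _ in range(n_iter):
        dist = np.linalg.norm(points - median, axis=1)
        weights = 1.0 / np.maximum(dist, eps)   # avoid division by zero
        median = (weights[:, None] * points).sum(axis=0) / weights.sum()
    return median

# n = 4 sources; the last one is spoofed (high match score but low
# liveness), i.e. the outlier the 1-median should down-weight.
pairs = [(0.90, 0.72), (0.85, 0.68), (0.88, 0.75), (0.15, 0.99)]
fused_1median = geometric_median(pairs)
fused_sum_rule = np.mean(pairs, axis=0)     # sensitive to the outlier
```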

Relevance:

10.00%

Publisher:

Abstract:

This paper investigates the potential of fusion at the normalisation/segmentation level, prior to feature extraction. While there are several biometric fusion methods at the data/feature level, score level and rank/decision level, combining raw biometric signals, scores, or ranks/decisions, this type of fusion is still in its infancy. However, the increasing demand to allow more relaxed and less invasive recording conditions, especially for on-the-move iris recognition, motivates further investigation of fusion at this very low level. This paper focuses on multi-segmentation fusion for iris biometric systems, investigating the benefit of combining the segmentation results of multiple normalisation algorithms, using four methods from two different public iris toolkits (USIT, OSIRIS) on the public CASIA and IITD iris datasets. Evaluations based on recognition accuracy and ground-truth segmentation data indicate high sensitivity with regard to the types of errors made by segmentation algorithms.
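One plausible form of multi-segmentation fusion, sketched below as an assumption rather than the paper's actual method, is per-pixel majority voting over the noise masks produced by several normalisation algorithms in the normalised (polar) texture domain. The mask data here are random stand-ins.

```python
import numpy as np

# Boolean noise masks (True = usable iris pixel) on a 64 x 512
# normalised texture grid, as might be produced by three different
# segmentation/normalisation algorithms; random stand-ins here.
rng = np.random.default_rng(2)
masks = rng.random((3, 64, 512)) > 0.2

# Keep a pixel only if a strict majority of algorithms agree that it
# belongs to the iris; features would then be extracted under
# fused_mask rather than any single algorithm's mask.
fused_mask = masks.sum(axis=0) >= (masks.shape[0] // 2 + 1)
```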