952 results for Iris normalisation
Abstract:
Laboratory-determined mineral weathering rates need to be normalised to allow their extrapolation to natural systems. The principal normalisation terms used in the literature are mass, and geometric and BET specific surface area (SSA). The purpose of this study was to determine how dissolution rates normalised to these terms vary with grain size. Different size fractions of anorthite and biotite ranging from 180-150 to 20-10 μm were dissolved in pH 3 HCl at 25 °C in flow-through reactors under far-from-equilibrium conditions. Steady state dissolution rates after 5376 h (anorthite) and 4992 h (biotite) were calculated from Si concentrations and were normalised to initial and final mass and geometric, geometric edge (biotite), and BET SSA. For anorthite, rates normalised to initial and final BET SSA ranged from 0.33 to 2.77 × 10⁻¹⁰ mol(feldspar) m⁻² s⁻¹, rates normalised to initial and final geometric SSA ranged from 5.74 to 8.88 × 10⁻¹⁰ mol(feldspar) m⁻² s⁻¹ and rates normalised to initial and final mass ranged from 0.11 to 1.65 mol(feldspar) g⁻¹ s⁻¹. For biotite, rates normalised to initial and final BET SSA ranged from 1.02 to 2.03 × 10⁻¹² mol(biotite) m⁻² s⁻¹, rates normalised to initial and final geometric SSA ranged from 3.26 to 16.21 × 10⁻¹² mol(biotite) m⁻² s⁻¹, rates normalised to initial and final geometric edge SSA ranged from 59.46 to 111.32 × 10⁻¹² mol(biotite) m⁻² s⁻¹ and rates normalised to initial and final mass ranged from 0.81 to 6.93 × 10⁻¹² mol(biotite) g⁻¹ s⁻¹. For all normalising terms, rates varied significantly (p ≤ 0.05) with grain size. The normalising terms which gave the least variation in dissolution rate between grain sizes for anorthite were initial BET SSA and initial and final geometric SSA. This is consistent with: (1) dissolution being dominated by the slower dissolving but area-dominant non-etched surfaces of the grains and (2) the walls of etch pits and other dissolution features being relatively unreactive. These steady state normalised dissolution rates are likely to be constant with time. Normalisation to final BET SSA did not give constant rates across grain size due to a non-uniform distribution of dissolution features. After dissolution, coarser grains had a greater density of dissolution features with BET-measurable but unreactive wall surface area than the finer grains. The normalising term which gave the least variation in dissolution rates between grain sizes for biotite was initial BET SSA. Initial and final geometric edge SSA and final BET SSA gave the next least varied rates. The basal surfaces dissolved sufficiently rapidly to influence the bulk dissolution rate and prevent geometric edge SSA-normalised dissolution rates from showing the least variation. Simple modelling indicated that biotite grain edges dissolved 71-132 times faster than basal surfaces. In this experiment, initial BET SSA best integrated the different areas and reactivities of the edge and basal surfaces of biotite. Steady state dissolution rates are likely to vary with time as dissolution alters the ratio of edge to basal surface area. Therefore they would be more properly termed pseudo-steady state rates, only appearing constant because the time period over which they were measured (1512 h) was less than the time period over which they would change significantly. (c) 2006 Elsevier Inc. All rights reserved.
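The normalisation step described above can be illustrated with a short sketch. It assumes the standard mixed-flow reactor rate expression (outlet Si concentration × flow rate ÷ Si atoms per formula unit) and uses placeholder numbers and hypothetical function names; none of the values come from the study.

```python
# Minimal sketch (not the paper's code): normalising a steady-state
# flow-through dissolution rate to mass, geometric SSA and BET SSA.
# All numerical values are illustrative placeholders.

def dissolution_rate(c_si_mol_per_l, flow_l_per_s, si_per_formula_unit):
    """Bulk mineral dissolution rate (mol_mineral / s) from outlet Si."""
    return c_si_mol_per_l * flow_l_per_s / si_per_formula_unit

def normalised_rates(bulk_rate_mol_s, mass_g, geo_ssa_m2_g, bet_ssa_m2_g):
    """Return the rate normalised to mass, geometric SSA and BET SSA."""
    return {
        "mass (mol g-1 s-1)": bulk_rate_mol_s / mass_g,
        "geometric SSA (mol m-2 s-1)": bulk_rate_mol_s / (mass_g * geo_ssa_m2_g),
        "BET SSA (mol m-2 s-1)": bulk_rate_mol_s / (mass_g * bet_ssa_m2_g),
    }

# Example with placeholder values for an anorthite-like feldspar (2 Si per formula unit)
bulk = dissolution_rate(c_si_mol_per_l=4e-6, flow_l_per_s=1e-5, si_per_formula_unit=2.0)
print(normalised_rates(bulk, mass_g=1.5, geo_ssa_m2_g=0.05, bet_ssa_m2_g=0.4))
```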
Abstract:
Background: Transcriptomic techniques are now being applied in ecotoxicology and toxicology to measure the impact of stressors and develop understanding of mechanisms of toxicity. Microarray technology in particular offers the potential to measure thousands of gene responses simultaneously. However, it is important that microarray responses should be validated, at least initially, using real-time quantitative polymerase chain reaction (QPCR). The accurate measurement of target gene expression requires normalisation to an invariant internal control, e.g., total RNA or reference genes. Reference genes are preferable, as they control for variation inherent in the cDNA synthesis and PCR. However, reference gene expression can vary between tissues and experimental conditions, which makes it crucial to validate them prior to application. Results: We evaluated 10 candidate reference genes for QPCR in Daphnia magna following a 24 h exposure to the non-steroidal anti-inflammatory drug (NSAID) ibuprofen (IB) at 0, 20, 40 and 80 mg IB l⁻¹. Six of the 10 candidates appeared suitable for use as reference genes. As a robust approach, we used a combination normalisation factor (NF), calculated using the geNorm application, based on the geometric mean of three selected reference genes: glyceraldehyde-3-phosphate dehydrogenase, ubiquitin conjugating enzyme and actin. The effects of normalisation are illustrated using as target gene leukotriene B4 12-hydroxydehydrogenase (Ltb4dh), which was upregulated following 24 h exposure to 63-81 mg IB l⁻¹. Conclusions: As anticipated, use of the NF clarified the response of Ltb4dh in daphnids exposed to sublethal levels of ibuprofen. Our findings emphasise the importance in toxicogenomics of finding and applying invariant internal QPCR control(s) relevant to the study conditions.
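The combination normalisation factor described here is, in the geNorm approach, the geometric mean of the selected reference-gene quantities, and each target-gene quantity is divided by it. A minimal sketch, with made-up relative quantities and hypothetical function names:

```python
# Sketch of geNorm-style normalisation (illustrative only): the
# normalisation factor (NF) for a sample is the geometric mean of its
# reference-gene relative quantities; the target gene's relative
# quantity is divided by the NF.
from statistics import geometric_mean

def normalisation_factor(reference_quantities):
    """NF for one sample = geometric mean of its reference-gene quantities."""
    return geometric_mean(reference_quantities)

def normalise_target(target_quantity, reference_quantities):
    return target_quantity / normalisation_factor(reference_quantities)

# Placeholder relative quantities for GAPDH, UBC and actin in one sample
refs = [1.12, 0.95, 1.04]
print(normalise_target(target_quantity=2.4, reference_quantities=refs))
```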
Abstract:
The purpose of this study was to determine the incidence of prostate cancer in patients who have an elevated referral prostate-specific antigen (PSA), which subsequently falls to within their normal age-specific reference range prior to prostate biopsy. The study demonstrated that of the 160 patients recruited, 21 (13%) had a repeat PSA level which had fallen back to within their normal range. Five of these 21 patients (24%) were diagnosed with prostate cancer following biopsy, two of whom had a benign prostate examination. The study, therefore, demonstrates that normalisation of the PSA level prior to biopsy does not exclude the presence of prostate cancer even when the prostate feels benign.
Abstract:
Individuals with Williams syndrome (WS) demonstrate impaired visuo-spatial abilities in comparison to their level of verbal ability. In particular, visuo-spatial construction is an area of relative weakness. It has been hypothesised that poor or atypical location coding abilities contribute strongly to the impaired abilities observed on construction and drawing tasks [Farran, E. K., & Jarrold, C. (2005). Evidence for unusual spatial location coding in Williams syndrome: An explanation for the local bias in visuo-spatial construction tasks? Brain and Cognition, 59, 159-172; Hoffman, J. E., Landau, B., & Pagani, B. (2003). Spatial breakdown in spatial construction: Evidence from eye fixations in children with Williams syndrome. Cognitive Psychology, 46, 260-301]. The current experiment investigated location memory in WS. Specifically, the precision of remembered locations was measured, as well as the biases and strategies involved in remembering those locations. A developmental trajectory approach was employed; WS performance was assessed relative to the performance of typically developing (TD) children ranging from 4 to 8 years old. Results showed differential strategy use in the WS and TD groups. WS performance was most similar to the level of a TD 4-year-old and was additionally impaired by the addition of physical category boundaries. Despite their low level of ability, the WS group produced a pattern of biases in performance which pointed towards evidence of a subdivision effect, as observed in TD older children and adults. In contrast, the TD children showed a different pattern of biases, which appears to be explained by a normalisation strategy. In summary, individuals with WS do not process locations in a typical manner. This may have a negative impact on their visuo-spatial construction and drawing abilities. (c) 2007 Elsevier Ltd. All rights reserved.
Abstract:
In this paper, practical generation of identification keys for biological taxa using a multilayer perceptron neural network is described. Unlike conventional expert systems, this method does not require an expert for key generation, but is merely based on recordings of observed character states. Like a human taxonomist, its judgement is based on experience, and it is therefore capable of generalized identification of taxa. An initial study involving identification of three species of Iris with greater than 90% confidence is presented here. In addition, the horticulturally significant genus Lithops (Aizoaceae/Mesembryanthemaceae), popular with enthusiasts of succulent plants, is used as a more practical example, because of the difficulty of generation of a conventional key to species, and the existence of a relatively recent monograph. It is demonstrated that such an Artificial Neural Network Key (ANNKEY) can identify more than half (52.9%) of the species in this genus, after training with representative data, even though data for one character is completely missing.
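As an illustration of the idea of a multilayer-perceptron identification key, the following sketch trains a small MLP on the classic Iris measurements using scikit-learn. It is not the ANNKEY implementation described in the paper; the network size and other parameter choices are assumptions.

```python
# Minimal sketch of an MLP-based identification "key" on the classic
# Iris data set; the original ANNKEY implementation is not described
# here, so this only illustrates the idea.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

key = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
)
key.fit(X_train, y_train)

# "Confidence" of an identification = the predicted class probability
proba = key.predict_proba(X_test)
print("accuracy:", key.score(X_test, y_test))
print("max confidence for first specimen:", proba[0].max())
```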
Abstract:
There is growing concern about reducing greenhouse gas emissions all over the world. In the recent Post Copenhagen Report on Climate Change, the U.K. set targets of a 34% reduction in emissions by 2020 and an 80% reduction by 2050, compared with 1990 levels. In practice, Life Cycle Cost (LCC) and Life Cycle Assessment (LCA) tools have been introduced to the construction industry in order to achieve this. However, there is a clear disconnection between costs and environmental impacts over the life cycle of a built asset when using these two tools. Besides, changes in Information and Communication Technologies (ICTs) have changed the way information is represented; in particular, information is being fed more easily and distributed more quickly to different stakeholders through tools such as Building Information Modelling (BIM), with little consideration given to incorporating LCC and LCA and maximising their usage within the BIM environment. The aim of this paper is to propose the development of a model-based LCC and LCA tool in order to support sustainable building design decisions for clients, architects and quantity surveyors, so that an optimal investment decision can be made by studying the trade-off between costs and environmental impacts. An application framework is also proposed as future work, showing how the proposed model can be incorporated into the BIM environment in practice.
Abstract:
Background: Microarray based comparative genomic hybridisation (CGH) experiments have been used to study numerous biological problems including understanding genome plasticity in pathogenic bacteria. Typically such experiments produce large data sets that are difficult for biologists to handle. Although there are some programmes available for interpretation of bacterial transcriptomics data and CGH microarray data for looking at genetic stability in oncogenes, there are none designed specifically to understand the mosaic nature of bacterial genomes. Consequently a bottleneck still persists in accurate processing and mathematical analysis of these data. To address this shortfall we have produced a simple and robust CGH microarray data analysis process that may be automated in the future to understand bacterial genomic diversity. Results: The process involves five steps: cleaning, normalisation, estimating gene presence and absence or divergence, validation, and analysis of data from test strains against three reference strains simultaneously. Each stage of the process is described, and we have compared a number of methods available for characterising bacterial genomic diversity and for calculating the cut-off between gene presence and absence or divergence, showing that a simple dynamic approach using a kernel density estimator performed better than both established methods and a more sophisticated mixture modelling technique. We have also shown that current methods commonly used for CGH microarray analysis in tumour and cancer cell lines are not appropriate for analysing our data. Conclusion: After carrying out the analysis and validation for three sequenced Escherichia coli strains, CGH microarray data from 19 E. coli O157 pathogenic test strains were used to demonstrate the benefits of applying this simple and robust process to CGH microarray studies using bacterial genomes.
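The cut-off step, placing a threshold between gene presence and absence or divergence with a kernel density estimator, can be sketched roughly as follows. The search window, simulated log-ratios and function names are illustrative assumptions, not the authors' code.

```python
# Sketch (illustrative): use a kernel density estimate of CGH log2
# ratios to place the presence/absence cut-off at the minimum of the
# density between the two main modes.
import numpy as np
from scipy.stats import gaussian_kde

def kde_cutoff(log_ratios, lo=-3.0, hi=1.0, n_grid=500):
    kde = gaussian_kde(log_ratios)
    grid = np.linspace(lo, hi, n_grid)
    density = kde(grid)
    # cut-off = grid point with the lowest density in the search window
    return grid[np.argmin(density)]

# Simulated data: "present" genes near 0, "absent/divergent" genes near -2
rng = np.random.default_rng(0)
ratios = np.concatenate([rng.normal(0.0, 0.3, 3000),
                         rng.normal(-2.0, 0.4, 500)])
print("estimated cut-off:", kde_cutoff(ratios, lo=-2.5, hi=0.5))
```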
Abstract:
Recent evidence suggests that an area in the dorsal medial prefrontal cortex (dorsal nexus) shows dramatic increases in connectivity across a network of brain regions in depressed patients during the resting state [1]; this increase in connectivity is suggested to represent hotwiring of areas involved in disparate cognitive and emotional functions [1-3]. Sheline et al. [1] concluded that antidepressant action may involve normalisation of the elevated resting state functional connectivity seen in depressed patients. However, the effects of conventional pharmacotherapy for depression on this resting state functional connectivity are not known, and the effects of antidepressant treatment in depressed patients may be confounded by changes in symptoms following treatment.
Abstract:
Background: Expression microarrays are increasingly used to obtain large scale transcriptomic information on a wide range of biological samples. Nevertheless, there is still much debate on the best ways to process data, to design experiments and analyse the output. Furthermore, many of the more sophisticated mathematical approaches to data analysis in the literature remain inaccessible to much of the biological research community. In this study we examine ways of extracting and analysing a large data set obtained using the Agilent long oligonucleotide transcriptomics platform, applied to a set of human macrophage and dendritic cell samples. Results: We describe and validate a series of data extraction, transformation and normalisation steps which are implemented via a new R function. Analysis of replicate normalised reference data demonstrates that intra-array variability is small (only around 2% of the mean log signal), while inter-array variability from replicate array measurements has a standard deviation (SD) of around 0.5 log2 units (6% of mean). The common practice of working with ratios of Cy5/Cy3 signal offers little further improvement in terms of reducing error. Comparison to expression data obtained using Arabidopsis samples demonstrates that the large number of genes in each sample showing a low level of transcription reflects the real complexity of the cellular transcriptome. Multidimensional scaling is used to show that the processed data identify an underlying structure which reflects some of the key biological variables that define the data set. This structure is robust, allowing reliable comparison of samples collected over a number of years and by a variety of operators. Conclusions: This study outlines a robust and easily implemented pipeline for extracting, transforming, normalising and visualising transcriptomic array data from the Agilent expression platform. The analysis is used to obtain quantitative estimates of the SD arising from experimental (non-biological) intra- and inter-array variability, and a lower threshold for determining whether an individual gene is expressed. The study provides a reliable basis for further, more extensive studies of the systems biology of eukaryotic cells.
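A rough illustration of this kind of pipeline (log-transformation, per-array normalisation, then multidimensional scaling to reveal sample structure) is sketched below in Python rather than the authors' R function; the median-centring step and all numbers are assumptions chosen only to make the sketch runnable.

```python
# Illustrative sketch (not the authors' R function): log-transform
# single-channel intensities, median-centre each array, then use
# multidimensional scaling on inter-sample distances to look for
# structure across samples.
import numpy as np
from sklearn.manifold import MDS

def normalise_arrays(intensities):
    """intensities: genes x arrays matrix of raw signals."""
    log_signal = np.log2(intensities + 1.0)
    # median-centre each array so arrays are comparable
    return log_signal - np.median(log_signal, axis=0, keepdims=True)

rng = np.random.default_rng(1)
raw = rng.lognormal(mean=6.0, sigma=1.0, size=(2000, 12))  # 2000 genes, 12 arrays
norm = normalise_arrays(raw)

# 2-D MDS embedding of the arrays (samples) from their pairwise distances
coords = MDS(n_components=2, random_state=0).fit_transform(norm.T)
print(coords.shape)  # (12, 2)
```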
Abstract:
In this paper a modified algorithm is suggested for developing polynomial neural network (PNN) models. Optimal partial description (PD) modeling is introduced at each layer of the PNN expansion, a task accomplished using the orthogonal least squares (OLS) method. Based on the initial PD models determined by the polynomial order and the number of PD inputs, OLS selects the most significant regressor terms reducing the output error variance. The method produces PNN models exhibiting a high level of accuracy and superior generalization capabilities. Additionally, parsimonious models are obtained comprising a considerably smaller number of parameters compared to the ones generated by means of the conventional PNN algorithm. Three benchmark examples are elaborated, including modeling of the gas furnace process as well as the iris and wine classification problems. Extensive simulation results and comparison with other methods in the literature, demonstrate the effectiveness of the suggested modeling approach.
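The orthogonal least squares selection of significant regressor terms by error reduction ratio, the kind of step used here to prune each layer's partial descriptions, can be sketched generically as follows. This is an assumed, generic OLS term-selection routine, not the paper's implementation.

```python
# Sketch of orthogonal least squares (OLS) term selection by error
# reduction ratio (ERR): greedily keep the candidate regressors that
# explain the largest share of the output variance.
import numpy as np

def ols_select(P, y, n_terms):
    """Greedily pick n_terms columns of P that explain most of y's variance."""
    y = y.astype(float)
    selected, W = [], []
    yty = y @ y
    for _ in range(n_terms):
        best_err, best_j, best_w = -1.0, None, None
        for j in range(P.shape[1]):
            if j in selected:
                continue
            w = P[:, j].astype(float)
            for wk in W:  # Gram-Schmidt against already-selected terms
                w = w - (wk @ P[:, j]) / (wk @ wk) * wk
            denom = w @ w
            if denom < 1e-12:
                continue
            err = (w @ y) ** 2 / (denom * yty)  # error reduction ratio
            if err > best_err:
                best_err, best_j, best_w = err, j, w
        selected.append(best_j)
        W.append(best_w)
    return selected

# Toy example: y depends on columns 0 and 3 only
rng = np.random.default_rng(0)
P = rng.normal(size=(200, 6))
y = 2.0 * P[:, 0] - 1.5 * P[:, 3] + 0.1 * rng.normal(size=200)
print(ols_select(P, y, n_terms=2))  # expected: columns 0 and 3 (in some order)
```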
Abstract:
The INSIG2 rs7566605 polymorphism was identified for obesity (BMI ≥ 30 kg/m²) in one of the first genome-wide association studies, but replications were inconsistent. We collected statistics from 34 studies (n = 74,345), including general population (GP) studies, population-based studies with subjects selected for conditions related to a better health status ('healthy population', HP), and obesity studies (OB). We tested five hypotheses to explore potential sources of heterogeneity. The meta-analysis of 27 studies on Caucasian adults (n = 66,213) combining the different study designs did not support overall association of the CC-genotype with obesity, yielding an odds ratio (OR) of 1.05 (p-value = 0.27). The I² measure of 41% (p-value = 0.015) indicated between-study heterogeneity. Restricting to GP studies resulted in a declined I² measure of 11% (p-value = 0.33) and an OR of 1.10 (p-value = 0.015). Regarding the five hypotheses, our data showed (a) some difference between GP and HP studies (p-value = 0.012) and (b) an association in extreme comparisons (BMI ≥ 32.5, 35.0, 37.5, or 40.0 kg/m² versus BMI < 25 kg/m²) yielding ORs of 1.16, 1.18, 1.22, or 1.27 (p-values 0.001 to 0.003), which was also underscored by significantly increased CC-genotype frequencies across BMI categories (10.4% to 12.5%, p-value for trend = 0.0002). We did not find evidence for differential ORs (c) among studies with higher than average obesity prevalence compared to lower, (d) among studies with BMI assessment after the year 2000 compared to those before, or (e) among studies from older populations compared to younger. Analysis of non-Caucasian adults (n = 4889) or children (n = 3243) yielded ORs of 1.01 (p-value = 0.94) or 1.15 (p-value = 0.22), respectively. There was no evidence for overall association of the rs7566605 polymorphism with obesity. Our data suggested an association with extreme degrees of obesity, and consequently heterogeneous effects from different study designs may mask an underlying association when unaccounted for. The importance of study design might be under-recognized in gene discovery and association replication so far.
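For reference, the pooled odds ratio and the I² heterogeneity measure quoted above are standard inverse-variance meta-analysis quantities. The sketch below computes them for made-up study values (not the 34 studies analysed here); function names are illustrative.

```python
# Sketch of fixed-effect (inverse-variance) meta-analysis on log odds
# ratios, with Cochran's Q and the I^2 heterogeneity measure.
import numpy as np

def fixed_effect_meta(odds_ratios, standard_errors_log_or):
    log_or = np.log(odds_ratios)
    se = np.asarray(standard_errors_log_or)
    w = 1.0 / se**2                                  # inverse-variance weights
    pooled_log_or = np.sum(w * log_or) / np.sum(w)
    q = np.sum(w * (log_or - pooled_log_or) ** 2)    # Cochran's Q
    df = len(log_or) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return np.exp(pooled_log_or), q, i2

ors = [1.10, 0.98, 1.25, 1.05]   # placeholder study odds ratios
ses = [0.05, 0.08, 0.10, 0.06]   # placeholder SEs of log(OR)
pooled_or, q, i2 = fixed_effect_meta(ors, ses)
print(f"pooled OR = {pooled_or:.3f}, Q = {q:.2f}, I^2 = {i2:.0f}%")
```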
Abstract:
Common variants at only two loci, FTO and MC4R, have been reproducibly associated with body mass index (BMI) in humans. To identify additional loci, we conducted a meta-analysis of 15 genome-wide association studies for BMI (n > 32,000) and followed up top signals in 14 additional cohorts (n > 59,000). We strongly confirm FTO and MC4R and identify six additional loci (P < 5 × 10⁻⁸): TMEM18, KCTD15, GNPDA2, SH2B1, MTCH2 and NEGR1 (where a 45-kb deletion polymorphism is a candidate causal variant). Several of the likely causal genes are highly expressed or known to act in the central nervous system (CNS), emphasizing, as in rare monogenic forms of obesity, the role of the CNS in predisposition to obesity.
Abstract:
Automated border control (ABC) is concerned with fast and secure processing for intelligence-led identification. The FastPass project aims to build a harmonised, modular reference system for future European ABC. When biometrics are taken on board as the means of identity, spoofing attacks become a concern. This paper presents current research in algorithm development for countering spoofing attacks in biometrics. Focussing on three biometric traits, face, fingerprint, and iris, it examines possible types of spoofing attacks and reviews existing counter-measure algorithms reported in relevant academic papers on biometric spoofing. It indicates that the emerging trend is the fusion of multiple biometrics against spoofing attacks.