961 results for vegetation classification
Abstract:
Köppen climate classification was applied to the output of atmospheric general circulation models and coupled atmosphere-ocean circulation models. The classification was used to validate model control runs of the present climate and to analyse greenhouse gas warming simulations. The most prominent results of the global warming computations were a retreat of regions of permafrost and an increase in areas with tropical rainy climates and dry climates.
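As a hedged illustration of how a Köppen-type classification can be applied to gridded climate-model output, the sketch below assigns the five major groups (A-E) from monthly-mean temperature and precipitation for one grid cell. The simplified thresholds (including the even-precipitation variant of the dryness threshold) and the array names are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def koppen_major_group(t_monthly, p_monthly):
    """Assign a simplified Koeppen major group (A-E) to one grid cell.

    t_monthly : 12 monthly-mean temperatures (deg C)
    p_monthly : 12 monthly precipitation totals (mm)

    Thresholds follow the standard Koeppen scheme in simplified form;
    seasonality rules for the dryness threshold and the E/B precedence
    convention vary between published variants and are simplified here.
    """
    t = np.asarray(t_monthly, dtype=float)
    p = np.asarray(p_monthly, dtype=float)
    t_ann = t.mean()
    p_ann = p.sum()

    # Polar (E): warmest month below 10 deg C
    if t.max() < 10.0:
        return "E"
    # Arid (B): annual precipitation below the dryness threshold
    # (even-precipitation variant: 20 * T_ann + 140, in mm)
    if p_ann < 20.0 * t_ann + 140.0:
        return "B"
    # Tropical (A): coldest month at or above 18 deg C
    if t.min() >= 18.0:
        return "A"
    # Temperate (C): coldest month above -3 deg C; otherwise continental (D)
    return "C" if t.min() > -3.0 else "D"

# Illustrative use on a single synthetic grid cell
print(koppen_major_group([22 + 4 * np.sin(2 * np.pi * m / 12) for m in range(12)],
                         [120] * 12))
```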
Abstract:
We present a benchmark system for global vegetation models. This system provides a quantitative evaluation of multiple simulated vegetation properties, including primary production; seasonal net ecosystem production; vegetation cover, composition and height; fire regime; and runoff. The benchmarks are derived from remotely sensed gridded datasets and site-based observations. The datasets allow comparisons of annual average conditions and seasonal and inter-annual variability, and they allow the impact of spatial and temporal biases in means and variability to be assessed separately. Specifically designed metrics quantify model performance for each process, and are compared to scores based on the temporal or spatial mean value of the observations and a "random" model produced by bootstrap resampling of the observations. The benchmark system is applied to three models: a simple light-use efficiency and water-balance model (the Simple Diagnostic Biosphere Model: SDBM), and the Lund-Potsdam-Jena (LPJ) and Land Processes and eXchanges (LPX) dynamic global vegetation models (DGVMs). SDBM reproduces observed CO2 seasonal cycles, but it overestimates net primary production (NPP) compared with independent measurements. The two DGVMs show little difference for most benchmarks (including the interannual variability in the growth rate and seasonal cycle of atmospheric CO2), but LPX represents burnt fraction demonstrably more accurately. Benchmarking also identified several weaknesses common to both DGVMs. The benchmarking system provides a quantitative approach for evaluating how adequately processes are represented in a model, identifying errors and biases, tracking improvements in performance through model development, and discriminating among models. Adoption of such a system would do much to improve confidence in terrestrial model predictions of climate change impacts and feedbacks.
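A minimal sketch of the kind of metric comparison described above, assuming a normalised mean error (NME) score and a bootstrap-resampled "random" model. The metric choice, function names and synthetic data are illustrative assumptions rather than the benchmark system's exact definitions.

```python
import numpy as np

def nme(obs, sim):
    """Normalised mean error: mean |sim - obs| scaled by mean |obs - mean(obs)|."""
    obs = np.asarray(obs, float)
    sim = np.asarray(sim, float)
    return np.abs(sim - obs).mean() / np.abs(obs - obs.mean()).mean()

def benchmark_scores(obs, sim, n_boot=1000, seed=0):
    """Score a simulation against observations, against the observation mean,
    and against bootstrap-resampled 'random' models built from the observations."""
    rng = np.random.default_rng(seed)
    obs = np.asarray(obs, float)
    score_model = nme(obs, sim)
    score_mean = nme(obs, np.full_like(obs, obs.mean()))    # 'mean' null model (equals 1 by construction)
    score_random = np.mean([nme(obs, rng.choice(obs, size=obs.size, replace=True))
                            for _ in range(n_boot)])         # bootstrap 'random' null model
    return {"model": score_model, "mean": score_mean, "random": score_random}

# Illustrative use with synthetic annual-mean values
obs = np.array([1.2, 0.8, 0.5, 1.6, 0.9])
sim = np.array([1.0, 0.9, 0.7, 1.4, 1.1])
print(benchmark_scores(obs, sim))
```

A model score below both null-model scores indicates that it adds information beyond the observed mean and beyond resampling noise, which is the sense in which the benchmarks discriminate among models.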
Abstract:
Atmospheric CO2 concentration is hypothesized to influence vegetation distribution via tree–grass competition, with higher CO2 concentrations favouring trees. The stable carbon isotope (δ13C) signature of vegetation is influenced by the relative importance of C4 plants (including most tropical grasses) and C3 plants (including nearly all trees), and the degree of stomatal closure – a response to aridity – in C3 plants. Compound-specific δ13C analyses of leaf-wax biomarkers in sediment cores of an offshore South Atlantic transect are used here as a record of vegetation changes in subequatorial Africa. These data suggest a large increase in C3 relative to C4 plant dominance after the Last Glacial Maximum. Using a process-based biogeography model that explicitly simulates 13C discrimination, it is shown that precipitation and temperature changes cannot explain the observed shift in δ13C values. The physiological effect of increasing CO2 concentration is decisive, altering the C3/C4 balance and bringing the simulated and observed δ13C values into line. It is concluded that CO2 concentration itself was a key agent of vegetation change in tropical southern Africa during the last glacial–interglacial transition. Two additional inferences follow. First, long-term variations in terrestrial δ13C values are not simply a proxy for regional rainfall, as has sometimes been assumed. Although precipitation and temperature changes have had major effects on vegetation in many regions of the world during the period between the Last Glacial Maximum and recent times, CO2 effects must also be taken into account, especially when reconstructing changes in climate between glacial and interglacial states. Second, rising CO2 concentration today is likely to be influencing tree–grass competition in a similar way, and thus contributing to the "woody thickening" observed in savannas worldwide. This second inference points to the importance of experiments to determine how vegetation composition in savannas is likely to be influenced by the continuing rise of CO2 concentration.
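The isotopic reasoning can be illustrated with a two-end-member mixing calculation combined with the standard C3 discrimination model. The parameter values below (a = 4.4 per mil, b = 27 per mil, end-member δ13C values, atmospheric δ13C) are textbook approximations, not the values used in the biogeography model described above.

```python
# Hypothetical illustration of the C3/C4 mixing logic, not the model used in the study.

def c3_delta13c(delta13c_atm, ci_over_ca, a=4.4, b=27.0):
    """delta13C of C3 plant tissue from the simple Farquhar discrimination model:
    Delta = a + (b - a) * ci/ca, with a = diffusional and b = carboxylation
    fractionation (per mil). Stomatal closure under aridity lowers ci/ca and
    hence lowers discrimination."""
    discrimination = a + (b - a) * ci_over_ca
    return delta13c_atm - discrimination

def vegetation_delta13c(f_c4, delta13c_c3, delta13c_c4=-12.5):
    """Bulk vegetation delta13C as a linear mix of C3 and C4 end members (per mil)."""
    return f_c4 * delta13c_c4 + (1.0 - f_c4) * delta13c_c3

# Example: atmospheric delta13C of about -6.5 per mil, comparing a C4-dominated
# (70% C4) landscape with a C3-dominated (30% C4) landscape.
d13c_c3 = c3_delta13c(delta13c_atm=-6.5, ci_over_ca=0.7)
print(vegetation_delta13c(0.7, d13c_c3), vegetation_delta13c(0.3, d13c_c3))
```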
Abstract:
Background: The analysis of the Auditory Brainstem Response (ABR) is of fundamental importance to the investigation of auditory system behaviour, though its interpretation has a subjective nature because of the manual process employed in its study and the clinical experience required for its analysis. When analysing the ABR, clinicians are often interested in the identification of ABR signal components referred to as Jewett waves. In particular, the detection and study of the time when these waves occur (i.e., the wave latency) is a practical tool for the diagnosis of disorders affecting the auditory system. Significant differences in inter-examiner results may lead to completely distinct clinical interpretations of the state of the auditory system. In this context, the aim of this research was to evaluate the inter-examiner agreement and variability in the manual classification of ABR. Methods: A total of 160 ABR data samples were collected, at four different stimulus intensities (80 dBHL, 60 dBHL, 40 dBHL and 20 dBHL), from 10 normal-hearing subjects (5 men and 5 women, aged 20 to 52 years). Four examiners with expertise in the manual classification of ABR components participated in the study. The Bland-Altman statistical method was employed for the assessment of inter-examiner agreement and variability. The mean, standard deviation and error of the bias, which is the difference between examiners' annotations, were estimated for each pair of examiners. Scatter plots and histograms were employed for data visualization and analysis. Results: In most comparisons the differences between examiners' annotations were below 0.1 ms, which is clinically acceptable. In four cases, a large error and standard deviation (>0.1 ms) were found, indicating the presence of outliers and thus discrepancies between examiners. Conclusions: Our results quantify the inter-examiner agreement and variability of the manual analysis of ABR data, and they also allow for the determination of different patterns of manual ABR analysis.
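A minimal sketch of the Bland-Altman computation for one pair of examiners, assuming each examiner's wave latencies are given in milliseconds. The array names, synthetic latencies and the 0.1 ms check mirror the description above but are otherwise illustrative.

```python
import numpy as np

def bland_altman(latencies_a, latencies_b):
    """Bland-Altman statistics for two examiners' latency annotations (ms).

    Returns the bias (mean difference), its standard deviation and the
    95% limits of agreement (bias +/- 1.96 * SD)."""
    a = np.asarray(latencies_a, float)
    b = np.asarray(latencies_b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return {"bias": bias, "sd": sd,
            "loa_lower": bias - 1.96 * sd, "loa_upper": bias + 1.96 * sd}

# Illustrative wave latencies (ms) annotated by two examiners at one intensity
examiner_1 = [5.62, 5.70, 5.58, 5.66, 5.74]
examiner_2 = [5.60, 5.72, 5.54, 5.70, 5.76]
stats = bland_altman(examiner_1, examiner_2)
print(stats)
print("clinically acceptable:", abs(stats["bias"]) < 0.1)  # 0.1 ms criterion from the study
```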
Abstract:
This work proposes a unified neurofuzzy modelling scheme. To begin with, the initial fuzzy base construction method is based on fuzzy clustering utilising a Gaussian mixture model (GMM) combined with analysis of variance (ANOVA) decomposition in order to obtain more compact univariate and bivariate membership functions over the subspaces of the input features. The means and covariances of the Gaussian membership functions are found by the expectation maximisation (EM) algorithm, with the merit of revealing the underlying density distribution of the system inputs. The resultant set of membership functions forms the basis of the generalised fuzzy model (GFM) inference engine. The model structure and parameters of this neurofuzzy model are identified via supervised subspace orthogonal least squares (OLS) learning. Finally, instead of providing a deterministic class label as the model output, as is conventional, a logistic regression model is applied to produce the classifier's output, in which the sigmoid-type logistic transfer function scales the outputs of the neurofuzzy model to class probabilities. Experimental validation results are presented to demonstrate the effectiveness of the proposed neurofuzzy modelling scheme.
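The sketch below illustrates the general flow only: Gaussian mixture clustering fitted by EM to obtain membership-function parameters, a fuzzy basis built from them, and a logistic model mapping basis outputs to class probabilities. It omits the ANOVA decomposition and the supervised subspace OLS step, and the function names and synthetic data are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LogisticRegression

def gaussian_memberships(X, n_components=3, seed=0):
    """Fit a GMM with EM and return univariate Gaussian membership values,
    one column per (component, feature) pair."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag",
                          random_state=seed).fit(X)
    means, variances = gmm.means_, gmm.covariances_   # shape (k, d) each for diagonal covariances
    memberships = [np.exp(-0.5 * (X - means[k]) ** 2 / variances[k])
                   for k in range(n_components)]
    return np.hstack(memberships), gmm

# Synthetic two-class data for illustration
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.repeat([0, 1], 100)

Phi, gmm = gaussian_memberships(X)
clf = LogisticRegression(max_iter=1000).fit(Phi, y)   # sigmoid output as class probability
print(clf.predict_proba(Phi[:3]))
```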
Abstract:
In this paper, various types of fault detection methods for fuel cells are compared, for example those that use a model-based approach, a data-driven approach or a combination of the two. The potential advantages and drawbacks of each method are discussed and comparisons between methods are made. In particular, classification algorithms are investigated, which separate a data set into classes or clusters based on some prior knowledge or measure of similarity. Specifically, the application of classification methods to vectors of currents reconstructed by magnetic tomography, or to vectors of magnetic field measurements directly, is explored. Bases are simulated using the finite integration technique (FIT) and regularization techniques are employed to overcome ill-posedness. Fisher's linear discriminant is used to illustrate these concepts. Numerical experiments show that the ill-posedness of the magnetic tomography problem is also present in the classification problem on magnetic field measurements. This is independent of the particular working mode of the cell but is influenced by the type of faulty behavior that is studied. The numerical results demonstrate the ill-posedness through the exponential decay behavior of the singular values for three examples of fault classes.
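A hedged sketch of the two diagnostic steps mentioned above: inspecting the singular-value decay of a measurement operator as an indicator of ill-posedness, and applying Fisher's linear discriminant to labelled measurement vectors. The randomly generated operator and synthetic "healthy"/"faulty" vectors stand in for the FIT-simulated bases and measured fields, which are not reproduced here.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)

# Stand-in for a forward operator mapping currents to magnetic fields; the
# exponentially decaying diagonal forces rapidly decaying singular values,
# mimicking the ill-posedness discussed above.
n_sensors, n_currents = 60, 120
A = rng.normal(size=(n_sensors, n_currents)) @ np.diag(np.exp(-0.2 * np.arange(n_currents)))
singular_values = np.linalg.svd(A, compute_uv=False)
print("decay ratio s_min/s_max:", singular_values[-1] / singular_values[0])

# Synthetic 'healthy' vs 'faulty' magnetic-field measurement vectors
healthy = rng.normal(0.0, 1.0, (50, n_sensors))
faulty = rng.normal(0.3, 1.0, (50, n_sensors))
X = np.vstack([healthy, faulty])
y = np.repeat([0, 1], 50)

lda = LinearDiscriminantAnalysis().fit(X, y)   # Fisher's linear discriminant (two classes)
print("training accuracy:", lda.score(X, y))
```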
Abstract:
Background: Since their inception, Twitter and related microblogging systems have provided a rich source of information for researchers and have attracted interest in their affordances and use. Since 2009 PubMed has included 123 journal articles on medicine and Twitter, but no overview exists of how the field uses Twitter in research. // Objective: This paper aims to identify published work relating to Twitter indexed by PubMed, and then to classify it. This classification will provide a framework in which future researchers will be able to position their work, and an understanding of the current reach of research using Twitter in medical disciplines. Limiting the study to papers indexed by PubMed ensures the work provides a reproducible benchmark. // Methods: Papers indexed by PubMed on Twitter and related topics were identified and reviewed. The papers were then qualitatively classified based on their titles and abstracts to determine their focus. The work that was Twitter focused was studied in detail to determine what data, if any, it was based on, and from this a categorization of the data set sizes used in the studies was developed. Using open-coded content analysis, additional important categories were also identified, relating to the primary methodology, domain and aspect. // Results: As of 2012, PubMed comprises more than 21 million citations from the biomedical literature, and from these a corpus of 134 potentially Twitter-related papers was identified, eleven of which were subsequently found not to be relevant. There were no papers prior to 2009 relating to microblogging, a term first used in 2006. Of the remaining 123 papers which mentioned Twitter, thirty were focused on Twitter (the others referring to it tangentially). The early Twitter-focused papers introduced the topic and highlighted the potential, without carrying out any form of data analysis. The majority of published papers used analytic techniques to sort through thousands, if not millions, of individual tweets, often depending on automated tools to do so. Our analysis demonstrates that researchers are starting to use knowledge discovery methods and data mining techniques to understand vast quantities of tweets: the study of Twitter is becoming quantitative research. // Conclusions: This work is, to the best of our knowledge, the first overview study of medical-related research based on Twitter and related microblogging. We have used five dimensions to categorise published medical-related research on Twitter. This classification provides a framework within which researchers studying the development and use of Twitter within medical-related research, and those undertaking comparative studies of research relating to Twitter in the area of medicine and beyond, can position and ground their work.
Abstract:
The bewildering complexity of cortical microcircuits at the single-cell level gives rise to surprisingly robust emergent activity patterns at the level of laminar and columnar local field potentials (LFPs) in response to targeted local stimuli. Here we report the results of our multivariate data-analytic approach, based on simultaneous multi-site recordings using micro-electrode-array chips, for investigation of the microcircuitry of rat somatosensory (barrel) cortex. We find high repeatability of stimulus-induced responses, and typical spatial distributions of LFP responses to stimuli in supragranular, granular and infragranular layers, where the last form a particularly distinct class. Population spikes appear to travel at about 33 cm/s from granular to infragranular layers. Responses within barrel-related columns have different profiles from those in neighbouring columns, whether to the left or to the right. Variations between slices occur, but can be minimized by strictly obeying controlled experimental protocols. Cluster analysis on normalized recordings indicates specific spatial distributions of time series reflecting the location of sources and sinks, independent of the stimulus layer. Although the precise correspondences between single-cell activity and LFPs are still far from clear, a sophisticated neuroinformatics approach in combination with multi-site LFP recordings in the standardized slice preparation is suitable for comparing normal conditions to genetically or pharmacologically altered situations based on real cortical microcircuitry.
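A minimal sketch of cluster analysis on normalised multi-site recordings, assuming trial-averaged responses in an array of shape (n_electrodes, n_timepoints). The z-scoring and Ward-linkage hierarchical clustering are illustrative choices, not necessarily those used in the study.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_lfp_sites(lfp, n_clusters=3):
    """Cluster electrode sites by the shape of their stimulus-evoked LFP time series.

    lfp : array of shape (n_electrodes, n_timepoints), trial-averaged responses.
    Each site's time series is z-scored so clustering reflects response shape
    (putative source/sink patterns) rather than absolute amplitude."""
    z = (lfp - lfp.mean(axis=1, keepdims=True)) / lfp.std(axis=1, keepdims=True)
    Z = linkage(z, method="ward")
    return fcluster(Z, t=n_clusters, criterion="maxclust")

# Synthetic example: 8 electrodes, 200 time points, two response shapes
rng = np.random.default_rng(0)
t = np.linspace(0, 0.1, 200)
shape_a = np.exp(-((t - 0.02) / 0.005) ** 2)
shape_b = -np.exp(-((t - 0.04) / 0.01) ** 2)
lfp = np.vstack([shape_a + 0.1 * rng.normal(size=t.size) for _ in range(4)] +
                [shape_b + 0.1 * rng.normal(size=t.size) for _ in range(4)])
print(cluster_lfp_sites(lfp, n_clusters=2))
```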
Abstract:
Major Depressive Disorder (MDD) has been associated with biased processing and abnormal regulation of negative and positive information, which may result from compromised coordinated activity of prefrontal and subcortical brain regions involved in evaluating emotional information. We tested whether patients with MDD show distributed changes in functional connectivity with a set of independently derived brain networks that have shown high correspondence with different task demands, including stimulus salience and emotional processing. We further explored if connectivity during emotional word processing related to the tendency to engage in positive or negative emotional states. In this study, 25 medication-free MDD patients without current or past comorbidity and matched controls (n=25) performed an emotional word-evaluation task during functional MRI. Using a dual regression approach, individual spatial connectivity maps representing each subject’s connectivity with each standard network were used to evaluate between-group differences and effects of positive and negative emotionality (extraversion and neuroticism, respectively, as measured with the NEO-FFI). Results showed decreased functional connectivity of the medial prefrontal cortex, ventrolateral prefrontal cortex, and ventral striatum with the fronto-opercular salience network in MDD patients compared to controls. In patients, abnormal connectivity was related to extraversion, but not neuroticism. These results confirm the hypothesis of a relative (para)limbic-cortical decoupling that may explain dysregulated affect in MDD. As connectivity of these regions with the salience network was related to extraversion, but not to general depression severity or negative emotionality, dysfunction of this network may be responsible for the failure to sustain engagement in rewarding behavior.
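A hedged sketch of the two dual regression stages for a single subject, assuming group-level spatial maps and subject data reshaped to (time, voxels). The two least-squares steps mirror the standard dual regression logic; variable names and the synthetic data are chosen for illustration, and preprocessing details (demeaning, variance normalisation) are omitted.

```python
import numpy as np

def dual_regression(subject_data, group_maps):
    """Dual regression for one subject.

    subject_data : (n_timepoints, n_voxels) preprocessed fMRI data
    group_maps   : (n_networks, n_voxels) group-level spatial maps (e.g. standard networks)

    Stage 1 regresses the group maps against the data to obtain subject-specific
    network time courses; stage 2 regresses those time courses against each voxel's
    time series to obtain subject-specific spatial connectivity maps."""
    # Stage 1: time courses, shape (n_timepoints, n_networks)
    time_courses, *_ = np.linalg.lstsq(group_maps.T, subject_data.T, rcond=None)
    time_courses = time_courses.T
    # Stage 2: spatial maps, shape (n_networks, n_voxels)
    spatial_maps, *_ = np.linalg.lstsq(time_courses, subject_data, rcond=None)
    return time_courses, spatial_maps

# Tiny synthetic example: 100 time points, 500 voxels, 4 group networks
rng = np.random.default_rng(0)
group_maps = rng.normal(size=(4, 500))
subject_data = rng.normal(size=(100, 4)) @ group_maps + 0.1 * rng.normal(size=(100, 500))
tcs, maps = dual_regression(subject_data, group_maps)
print(tcs.shape, maps.shape)
```

The per-subject spatial maps produced in stage 2 are the quantities that would then be compared between groups, as described above.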
Abstract:
Recent research shows that speakers of languages with obligatory plural marking (English) preferentially categorize objects based on common shape, whereas speakers of nonplural-marking classifier languages (Yucatec and Japanese) preferentially categorize objects based on common material. The current study extends that investigation to the domain of bilingualism. Japanese and English monolinguals, and Japanese–English bilinguals were asked to match novel objects based on either common shape or color. Results showed that English monolinguals selected shape significantly more than Japanese monolinguals, whereas the bilinguals shifted their cognitive preferences as a function of their second language proficiency. The implications of these findings for conceptual representation and cognitive processing in bilinguals are discussed.
Abstract:
Many important drugs in the Chinese materia medica (CMM) are known to be toxic, and it has long been recognized in classical Chinese medical theory that toxicity can arise directly from the components of a single CMM or may be induced by an interaction between combined CMM. Traditional Chinese Medicine presents a unique set of pharmaceutical theories that include particular methods for processing, combining and decocting, and these techniques contribute to reducing toxicity as well as enhancing efficacy. The current classification of toxic CMM drugs, the traditional methods for processing toxic CMM and the prohibited use of certain combinations are based on traditional experience and ancient texts and monographs, but accumulating evidence increasingly supports their use to eliminate or reduce toxicity. Modern methods are now being used to evaluate the safety of CMM; however, a new system for describing the toxicity of Chinese herbal medicines may need to be established to take into account those herbs whose toxicity is delayed or otherwise hidden, and which have not been incorporated into the traditional classification. This review explains the existing classification and justifies it where appropriate, using experimental results often originally published in Chinese and previously not available outside China.
Abstract:
We present a simple theoretical land-surface classification that can be used to determine the location and temporal behavior of preferential sources of terrestrial dust emissions. The classification also provides information about the likely nature of the sediments, their erodibility and the likelihood that they will generate emissions under given conditions. The scheme is based on the dual notions of geomorphic type and connectivity between geomorphic units. We demonstrate that the scheme can be used to map potential modern-day dust sources in the Chihuahuan Desert, the Lake Eyre Basin and the Taklamakan. Through comparison with observed dust emissions, we show that the scheme provides a reasonable prediction of areas of emission in the Chihuahuan Desert and in the Lake Eyre Basin. The classification is also applied to point source data from the Western Sahara to enable comparison of the relative importance of different land surfaces for dust emissions. We indicate how the scheme could be used to provide an improved characterization of preferential dust sources in global dust-cycle models.
Abstract:
• In current models, the ecophysiological effects of CO2 create both woody thickening and terrestrial carbon uptake, as observed now, and forest cover and terrestrial carbon storage increases that took place after the last glacial maximum (LGM). Here, we aimed to assess the realism of modelled vegetation and carbon storage changes between LGM and the pre-industrial Holocene (PIH).
• We applied Land Processes and eXchanges (LPX), a dynamic global vegetation model (DGVM), with lowered CO2 and LGM climate anomalies from the Palaeoclimate Modelling Intercomparison Project (PMIP II), and compared the model results with palaeodata.
• Modelled global gross primary production was reduced by 27–36% and carbon storage by 550–694 Pg C compared with PIH. Comparable reductions have been estimated from stable isotopes. The modelled areal reduction of forests is broadly consistent with pollen records. Despite reduced productivity and biomass, tropical forests accounted for a greater proportion of modelled land carbon storage at LGM (28–32%) than at PIH (25%).
• The agreement between palaeodata and model results for LGM is consistent with the hypothesis that the ecophysiological effects of CO2 influence tree–grass competition and vegetation productivity, and suggests that these effects are also at work today.