985 results for textual complexity assessment


Relevance: 30.00%

Abstract:

This review provides examples of how different procedures for measuring atherosclerosis in mice may lead to interpretations of the extent of atherosclerosis with markedly different biological and clinical significance for humans: 1) aortic cholesterol measurement is highly sensitive for detecting early and advanced atherosclerotic lesions, but misses the location and complexity of these lesions, which are critical in humans; 2) histological analysis of aortic root lesions in simvastatin-treated and control mice reveals similar lesion morphology in spite of the remarkable simvastatin-induced reduction of the aortic cholesteryl ester content; 3) in histological analyses, chemical fixation and embedding may extract tissue fat and also shrink and distort tissue structures, so the method may be insensitive to slight differences among experimental groups unless a more suitable procedure, physical fixation by freezing the histological sample in optimal cutting temperature compound and liquid nitrogen, is employed. In summary, when measuring experimental atherosclerosis in mice, investigators should be aware of several previously unreported pitfalls regarding the extent, location, and complexity of arterial lesions that may make results unsuitable for extrapolation to human pathology.

Relevance: 30.00%

Abstract:

It is generally acknowledged that population-level assessments provide a better measure of response to toxicants than assessments of individual-level effects. Population-level assessments generally require the use of models to integrate potentially complex data about the effects of toxicants on life-history traits and to provide a relevant measure of ecological impact. Building on excellent earlier reviews, we briefly outline the modelling options in population-level risk assessment. Modelling is used to calculate population endpoints from available data, which are often about individual life histories, the ways that individuals interact with each other, the environment, and other species, and the ways individuals are affected by pesticides. As population endpoints, we recommend the use of population abundance, population growth rate, and the chance of population persistence. We recommend two types of model: simple life-history models distinguishing two life-history stages, juveniles and adults; and spatially explicit individual-based landscape models. Life-history models are very quick to set up and run, and they provide a great deal of insight. At the other extreme, individual-based landscape models provide the greatest verisimilitude, albeit at the cost of greatly increased complexity. We conclude with a discussion of the implications of the severe problems of parameterising models.
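
The two-stage life-history model recommended above is simple enough to sketch directly. The following minimal example, with invented vital rates rather than values from the review, shows how a toxicant effect on a single trait translates into a population-level endpoint (the asymptotic growth rate):

```python
import numpy as np

# Illustrative vital rates (not taken from the review).
fecundity = 2.0       # offspring per adult per time step
juv_survival = 0.3    # probability a juvenile survives and matures
adult_survival = 0.8  # probability an adult survives a time step

# Two-stage projection matrix; stages are [juvenile, adult].
A = np.array([[0.0,          fecundity],
              [juv_survival, adult_survival]])

# Population growth rate = dominant eigenvalue of the projection matrix.
lam = max(np.linalg.eigvals(A).real)
print(f"growth rate, control: {lam:.3f}")

# Represent a toxicant effect as a 50% reduction in fecundity and read
# off the population-level impact as the change in growth rate.
A_tox = A.copy()
A_tox[0, 1] *= 0.5
lam_tox = max(np.linalg.eigvals(A_tox).real)
print(f"growth rate, exposed: {lam_tox:.3f}")
```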

Relevance: 30.00%

Abstract:

Both historical and idealized climate model experiments are performed with a variety of Earth system models of intermediate complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures, overall 20th-century trends in surface air temperature and carbon uptake are reasonably well simulated when compared to observed trends. Land carbon fluxes show much more variation between models than ocean carbon fluxes, and recent land fluxes appear to be slightly underestimated. It is possible that recent modelled climate trends or climate–carbon feedbacks are overestimated, resulting in too much land carbon loss, or that carbon uptake due to CO2 and/or nitrogen fertilization is underestimated. Several 1000-year-long idealized 2× and 4×CO2 experiments are used to quantify standard model characteristics, including transient and equilibrium climate sensitivities and climate–carbon feedbacks. The values from EMICs generally fall within the range given by general circulation models. Seven additional historical simulations, each including a single specified forcing, are used to assess the contributions of different climate forcings to the overall climate and carbon cycle response. The response of surface air temperature is the linear sum of the responses to the individual forcings, while the carbon cycle response shows a non-linear interaction between land-use change and CO2 forcings for some models. Finally, the preindustrial portions of the last-millennium simulations are used to assess historical model carbon–climate feedbacks. Given the specified forcing, there is a tendency for the EMICs to underestimate the drop in surface air temperature and CO2 between the Medieval Climate Anomaly and the Little Ice Age estimated from palaeoclimate reconstructions. This could be a result of unforced variability within the climate system, uncertainty in the reconstructions of temperature and CO2, errors in the reconstructions of forcing used to drive the models, or the incomplete representation of certain processes within the models. Given the forcing datasets used in this study, the models calculate significant land-use emissions over the pre-industrial period. This implies that land-use emissions might need to be taken into account when making estimates of climate–carbon feedbacks from palaeoclimate reconstructions.
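
The stated additivity of single-forcing temperature responses can be checked with simple array arithmetic. A sketch with synthetic stand-ins for model output (not EMIC data):

```python
import numpy as np

# Synthetic stand-ins for annual surface air temperature anomalies (K)
# from six single-forcing runs and one all-forcings run.
rng = np.random.default_rng(0)
years = 156  # 1850-2005
names = ["solar", "volcanic", "CO2", "other GHG", "land use", "aerosols"]
single = {n: 0.01 * rng.normal(0, 1, years).cumsum() for n in names}

linear_sum = sum(single.values())
all_forcings = linear_sum + rng.normal(0, 0.01, years)  # small noise term

# If responses add linearly, the residual should be near zero.
residual = all_forcings - linear_sum
print("max |residual| (K):", float(np.abs(residual).max()))
```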

Relevance: 30.00%

Abstract:

Black carbon aerosol plays a unique and important role in Earth's climate system. Black carbon is a type of carbonaceous material with a unique combination of physical properties. This assessment provides an evaluation of black-carbon climate forcing that is comprehensive in its inclusion of all known and relevant processes and that is quantitative in providing best estimates and uncertainties of the main forcing terms: direct solar absorption; influence on liquid, mixed-phase, and ice clouds; and deposition on snow and ice. These effects are calculated with climate models, but when possible, they are evaluated with both microphysical measurements and field observations. Predominant sources are combustion related, namely, fossil fuels for transportation, solid fuels for industrial and residential uses, and open burning of biomass. Total global emissions of black carbon using bottom-up inventory methods are 7500 Gg yr−1 in the year 2000 with an uncertainty range of 2000 to 29000. However, global atmospheric absorption attributable to black carbon is too low in many models and should be increased by a factor of almost 3. After this scaling, the best estimate for the industrial-era (1750 to 2005) direct radiative forcing of atmospheric black carbon is +0.71 W m−2 with 90% uncertainty bounds of (+0.08, +1.27) W m−2. Total direct forcing by all black carbon sources, without subtracting the preindustrial background, is estimated as +0.88 (+0.17, +1.48) W m−2. Direct radiative forcing alone does not capture important rapid adjustment mechanisms. A framework is described and used for quantifying climate forcings, including rapid adjustments. The best estimate of industrial-era climate forcing of black carbon through all forcing mechanisms, including clouds and cryosphere forcing, is +1.1 W m−2 with 90% uncertainty bounds of +0.17 to +2.1 W m−2. Thus, there is a very high probability that black carbon emissions, independent of co-emitted species, have a positive forcing and warm the climate. We estimate that black carbon, with a total climate forcing of +1.1 W m−2, is the second most important human emission in terms of its climate forcing in the present-day atmosphere; only carbon dioxide is estimated to have a greater forcing. Sources that emit black carbon also emit other short-lived species that may either cool or warm climate. Climate forcings from co-emitted species are estimated and used in the framework described herein. When the principal effects of short-lived co-emissions, including cooling agents such as sulfur dioxide, are included in net forcing, energy-related sources (fossil fuel and biofuel) have an industrial-era climate forcing of +0.22 (−0.50 to +1.08) W m−2 during the first year after emission. For a few of these sources, such as diesel engines and possibly residential biofuels, warming is strong enough that eliminating all short-lived emissions from these sources would reduce net climate forcing (i.e., produce cooling). When open burning emissions, which emit high levels of organic matter, are included in the total, the best estimate of net industrial-era climate forcing by all short-lived species from black-carbon-rich sources becomes slightly negative (−0.06 W m−2 with 90% uncertainty bounds of −1.45 to +1.29 W m−2). The uncertainties in net climate forcing from black-carbon-rich sources are substantial, largely due to lack of knowledge about cloud interactions with both black carbon and co-emitted organic carbon.
In prioritizing potential black-carbon mitigation actions, non-science factors, such as technical feasibility, costs, policy design, and implementation feasibility, play important roles. The major sources of black carbon are presently at different stages with regard to the feasibility of near-term mitigation. This assessment, by evaluating the large number and complexity of the physical and radiative processes involved in black-carbon climate forcing, sets a baseline from which to improve future climate forcing estimates.

Relevance: 30.00%

Abstract:

OBJECTIVES: To develop a method for objective assessment of fine motor timing variability in Parkinson's disease (PD) patients, using digital spiral data gathered by a touch screen device. BACKGROUND: A retrospective analysis was conducted on data from 105 subjects: 65 patients with advanced PD (group A), 15 intermediate patients experiencing motor fluctuations (group I), 15 early-stage patients (group S), and 10 healthy elderly subjects (HE). The subjects were asked to perform repeated upper limb motor tasks by tracing a pre-drawn Archimedes spiral shown on the screen of the device. The spiral tracing test was performed with an ergonomic pen stylus, using the dominant hand. The test was repeated three times per test occasion, and the subjects were instructed to complete it within 10 seconds. Digital spiral data, including stylus position (x–y coordinates) and timestamps (milliseconds), were collected and used in subsequent analysis. The total numbers of observations with the test battery were as follows: Swedish group (n=10079), Italian I group (n=822), Italian S group (n=811), and HE (n=299). METHODS: The raw spiral data were processed with three methods. To quantify motor timing variability during spiral drawing tasks, the Approximate Entropy (APEN) method was applied to the digitized spiral data. APEN is designed to capture the amount of irregularity or complexity in a time series. APEN requires two parameters: the window size and the similarity measure. In our work, after experimentation, the window size was set to 4 and the similarity measure to 0.2 (20% of the standard deviation of the time series). The final APEN score was normalized by total drawing completion time and used in subsequent analysis; the score generated by this method is henceforth denoted APEN. In addition, two more methods were applied to the digital spiral data and their scores used in subsequent analysis. The first was based on the Discrete Wavelet Transform and Principal Component Analysis and generated a score representing spiral drawing impairment, henceforth denoted WAV. The second was based on the standard deviation of frequency-filtered drawing velocity, henceforth denoted SDDV. Linear mixed-effects (LME) models were used to evaluate mean differences of the spiral scores of the three methods across the four subject groups. Test-retest reliability of the three scores was assessed as the mean of the three possible correlations (Spearman's rank coefficients) between the three test trials. Internal consistency of the methods was assessed by calculating correlations between their scores. RESULTS: When comparing mean spiral scores between the four subject groups, the APEN scores differed between HE subjects and the three patient groups (P=0.626 for the S group with a 9.9% mean value difference, P=0.089 for the I group with 30.2%, and P=0.0019 for the A group with 44.1%). However, there were no significant differences in mean scores of the other two methods, except for WAV between the HE and A groups (P<0.001). WAV and SDDV were highly and significantly correlated with each other, with a coefficient of 0.69. However, APEN was correlated with neither WAV nor SDDV, with coefficients of 0.11 and 0.12, respectively. Test-retest reliability coefficients of the three scores were as follows: APEN (0.9), WAV (0.83), and SDDV (0.55).
CONCLUSIONS: The results show that the objective APEN measure, based on digital spiral analysis, can significantly differentiate healthy subjects from patients at an advanced stage. In contrast to the other two methods (WAV and SDDV), which are designed to quantify dyskinesias (over-medication), this method may be useful for characterizing Off symptoms in PD. APEN was correlated with neither of the other two methods, indicating that it measures a different construct of upper limb motor function in PD patients than WAV and SDDV. APEN also had better test-retest reliability, indicating that it is more stable and consistent over time than WAV and SDDV.
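
Approximate Entropy itself is a standard algorithm and can be sketched compactly. The snippet below uses the parameter choices reported above (window size m=4, similarity threshold r = 0.2 × SD of the series) and one plausible reading of the time normalization; the timestamp data are hypothetical, and this is not the authors' implementation.

```python
import numpy as np

def approximate_entropy(x, m=4, r_factor=0.2):
    """Approximate Entropy (APEN) of a 1-D series.

    m: window size (4, as in the study); r: similarity threshold,
    here 0.2 * standard deviation of the series.
    """
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def phi(m):
        n = len(x) - m + 1
        emb = np.array([x[i:i + m] for i in range(n)])               # embed in windows
        dist = np.max(np.abs(emb[:, None] - emb[None, :]), axis=2)   # Chebyshev distance
        c = (dist <= r).mean(axis=1)                                 # match fractions
        return np.log(c).mean()

    return phi(m) - phi(m + 1)

# Hypothetical stylus timestamps (ms) from one spiral trace.
rng = np.random.default_rng(0)
timestamps_ms = np.cumsum(rng.integers(5, 15, 200))
increments = np.diff(timestamps_ms)

# Normalize by total drawing completion time (seconds), per the abstract.
score = approximate_entropy(increments) / (timestamps_ms[-1] / 1000.0)
print(f"time-normalized APEN score: {score:.4f}")
```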

Relevance: 30.00%

Abstract:

The rapid growth of urban areas has a significant impact on traffic and transportation systems. New management policies and planning strategies are clearly necessary to cope with the increasingly limited capacity of existing road networks. The concept of Intelligent Transportation Systems (ITS) arises in this scenario; rather than attempting to increase road capacity by means of physical modifications to the infrastructure, the premise of ITS relies on the use of advanced communication and computer technologies to handle today's traffic and transportation facilities. Influencing users' behaviour patterns is a challenge that has stimulated much research in the ITS field, where human factors gain great importance for modelling, simulating, and assessing such an innovative approach. This work aims to use Multi-agent Systems (MAS) to represent traffic and transportation systems in the light of the new performance measures brought about by ITS technologies. Agents are well suited to representing components of a system that are geographically and functionally distributed, as most components in traffic and transportation are. A BDI (beliefs, desires, and intentions) architecture is presented as an alternative to the traditional models used to represent driver behaviour within microscopic simulation, allowing for an explicit representation of users' mental states. Basic concepts of ITS and MAS are presented, as well as some application examples related to the subject. This has motivated the extension of an existing microscopic simulation framework to incorporate MAS features and enhance the representation of drivers. In this way, demand is generated from a population of agents as the result of their decisions on route and departure time, on a daily basis. The extended simulation model, which now supports the interaction of BDI driver agents, was effectively implemented, and different experiments were performed to test this approach in commuter scenarios. MAS provides a process-driven approach that fosters the easy construction of modular, robust, and scalable models, characteristics lacking in former result-driven approaches. Its abstraction premises allow for a closer association between the model and its practical implementation. Uncertainty and variability are addressed in a straightforward manner, as cognitive architectures, such as the BDI approach used in this work, provide an easier representation of humanlike behaviours within the driver structure. In this way, MAS extends microscopic simulation of traffic to better address the complexity inherent in ITS technologies.
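
The BDI deliberation loop described here can be caricatured in a few lines. The sketch below is a generic illustration of beliefs (expected travel times), a desire (arrive by a deadline), and an intention (route and departure time) revised daily from experience; all names and values are invented, and this is not the thesis implementation.

```python
from dataclasses import dataclass, field

@dataclass
class DriverAgent:
    # Beliefs: expected travel time (minutes) per known route.
    beliefs: dict = field(default_factory=lambda: {"A": 30.0, "B": 35.0})
    # Desire: arrive by 09:00, expressed in minutes since midnight.
    desire_arrival: float = 9 * 60
    # Intention: (route, departure time in minutes since midnight).
    intention: tuple = ("A", 8 * 60 + 20)

    def deliberate(self):
        # Pick the route currently believed fastest; leave with a buffer.
        route = min(self.beliefs, key=self.beliefs.get)
        depart = self.desire_arrival - self.beliefs[route] - 10
        self.intention = (route, depart)

    def update_beliefs(self, route, observed_minutes, weight=0.3):
        # Exponential smoothing of experienced travel time.
        self.beliefs[route] += weight * (observed_minutes - self.beliefs[route])

agent = DriverAgent()
agent.deliberate()
agent.update_beliefs("A", 42)  # a congested day on route A
agent.deliberate()             # intention may shift to route B
print(agent.intention)
```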

Relevance: 30.00%

Abstract:

The incidence of postoperative sensitivity was evaluated in resin-based posterior restorations. Two hundred and ninety-two direct restorations were evaluated in premolars and molars. A total of 143 Class I and 149 Class II restorations (MO/DO and MOD) were placed in patients ranging in age from 30 to 50 years. After the cavity preparations were completed, a rubber dam was placed, and the preparations were restored using a total-etch adhesive system (Prime & Bond NT) and a resin-based restorative material (TPH Spectrum). The patients were contacted 24 hours and 7, 30, and 90 days postoperatively and questioned regarding the presence of sensitivity and the stimuli that triggered it. The Chi-square and Fisher's exact tests were used for statistical analysis. Evaluation at 24 hours after restorative treatment revealed statistically significant differences between the types of cavity preparations restored in the occurrence of postoperative sensitivity (p=0.0003), with a higher frequency of sensitivity in Class II MOD restorations (26%), followed by Class II MO/DO (15%) and Class I restorations (5%). At 7, 30, and 90 days after restorative treatment, there was a decrease in the occurrence of sensitivity for all groups, and the percentage of sensitivity among the groups was no longer significantly different. This study shows that the occurrence of sensitivity is correlated with the complexity of the restoration.
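
The 24-hour comparison above is a standard chi-square test on a contingency table. The counts below are hypothetical, chosen only to mirror the reported frequencies (5%, 15%, 26%); the paper's exact subgroup sizes are not given in the abstract.

```python
from scipy.stats import chi2_contingency

# Rows: cavity class; columns: [sensitive, not sensitive] at 24 hours.
table = [[ 7, 136],   # Class I        (~5% of 143)
         [11,  64],   # Class II MO/DO (~15%)
         [19,  55]]   # Class II MOD   (~26%)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```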

Relevance: 30.00%

Abstract:

Socio-environmental themes have established themselves as important sources of investigation for scholars in the most varied fields. Socio-environmental questions, in turn, are a constant focus of contemporary concern, including for science teaching. Moreover, science teaching has been guided by a commitment to contribute to a more adequate perception of socio-environmental problems and to foster the formation of critical and autonomous citizens, capable of understanding the complexity of the natural and social worlds, bringing these two fields closer together. In this context, we sought to apprehend the readings made by undergraduate teacher-training students (Biology and Physics) of current socio-environmental questions in the metropolitan region of Belém, based on photography, and the possibilities of its use in science teaching as perceived by these students. We opted for a qualitative approach and, as a methodological strategy, action research. The research took place during the workshop "Photography in Science Teaching", our data collection strategy, over a period of six days with 10 students. Photography proves to be an instrument that facilitates the apprehension of the social, economic, environmental, political, and educational aspects, among others, that permeate a reading of the environment; that is, it favours broadened (multidimensional) readings of the socio-environmental context observed and experienced. Data on the students' knowledge, understandings, and interpretations, among other aspects, were organized and analyzed through discursive textual analysis. We chose to produce metatexts concerning the apprehension of socio-environmental questions based on the analysis of the texts of the photographs. In setting out in pursuit of questions pertaining to socio-environmental problems, the students "(re)directed" the present research; that is, in our assessment it reached a level "beyond the expected", beyond the trivial; in its construction, the commonplace was left behind. This is because, during the students' analytical constructions, we observed these questions being extrapolated to other fields of knowledge (social and economic, among others).

Relevance: 30.00%

Abstract:

Developmental Coordination Disorder (DCD), a chronic and usually permanent condition found in children, is characterized by motor impairment that interferes with a child's activities of daily living and with academic achievement. One of the most popular tests for the quantitative diagnosis of DCD is the Movement Assessment Battery for Children (MABC). Based on the Battery's standardized scores, it is possible to identify children with typical development, children at risk of developing DCD, and children with DCD. This article describes a computational system we developed to assist with the analysis of results obtained in the MABC test. The tool was developed for the web environment, and its database provides integration of MABC data. Thus, researchers around the world can share data and develop collaborative work in the DCD field. To support analysis, our system provides services for filtering data to show more specific sets of information and for presenting results in textual, tabular, and graphical formats, allowing easier and more comprehensive evaluation of the results.

Relevance: 30.00%

Abstract:

Background: In normal aging, a decrease in the syntactic complexity of written production is usually associated with cognitive deficits. This study aimed to analyze the quality of older adults' textual production, as indicated by verbal fluency (number of words) and grammatical complexity (number of ideas), in relation to gender, age, schooling, and cognitive status. Methods: From a probabilistic sample of community-dwelling people aged 65 years and above (n=900), 577 were selected on the basis of their responses to the Mini-Mental State Examination (MMSE) sentence-writing item, which were submitted to content analysis; 323 were excluded because they left the item blank or produced illegible or meaningless responses. Education-adjusted cut-off scores for the MMSE were used to classify the participants as cognitively impaired or unimpaired. Total and subdomain MMSE scores were computed. Results: 40.56% of participants whose answers to the MMSE sentence were excluded from the analyses had cognitive impairment, compared to 13.86% among those whose answers were included. The excluded participants were older and less educated. Women and those older than 80 years had the lowest MMSE scores. There was no statistically significant relationship between gender, age, schooling, and textual performance. There was a modest but significant correlation between the number of words written and the scores in the Language subdomain. Conclusions: The results suggest a strong influence of schooling and age on MMSE sentence performance. Failing to write a sentence may suggest cognitive impairment; yet the instructions for the MMSE sentence, i.e. to produce a simple sentence, may limit its clinical interpretation.
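
The reported association between the number of words written and the Language subdomain is a rank correlation. A minimal sketch with made-up values standing in for the study data:

```python
from scipy.stats import spearmanr

# Hypothetical values: words per MMSE sentence and Language subscores.
words_per_sentence = [5, 7, 4, 9, 6, 8, 3, 10, 6, 7]
language_subscore  = [6, 8, 5, 8, 7, 8, 4,  9, 6, 7]

rho, p = spearmanr(words_per_sentence, language_subscore)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
```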

Relevance: 30.00%

Abstract:

Nuclear Magnetic Resonance (NMR) is a branch of spectroscopy based on the fact that many atomic nuclei may be oriented by a strong magnetic field and will absorb radiofrequency radiation at characteristic frequencies. The parameters that can be measured on the resulting spectral lines (line positions, intensities, line widths, multiplicities, and transients in time-dependent experiments) can be interpreted in terms of molecular structure, conformation, molecular motion, and other rate processes. In this way, high resolution (HR) NMR allows qualitative and quantitative analysis of samples in solution, in order to determine the structure of molecules in solution, and more. In the past, high-field NMR spectroscopy was mainly concerned with the elucidation of chemical structure in solution, but today it is emerging as a powerful exploratory tool for probing biochemical and physical processes. It represents a versatile tool for the analysis of foods. Many NMR studies have been reported in the literature on different types of food, such as wine, olive oil, coffee, fruit juices, milk, meat, egg, starch granules, flour, etc., using different NMR techniques. Traditionally, univariate analytical methods have been used to explore spectroscopic data. Such a method measures or selects a single descriptive variable from the whole spectrum and, in the end, only this variable is analyzed. This univariate approach, applied to HR-NMR data, leads to problems due especially to the complexity of an NMR spectrum. The latter is composed of different signals belonging to different molecules, but it is also true that the same molecule can be represented by different signals, generally strongly correlated. Univariate methods, in this case, take into account only one or a few variables, causing a loss of information. Thus, when dealing with complex samples like foodstuffs, univariate analysis of spectral data is not powerful enough. Spectra need to be considered in their wholeness and, to analyse them, the whole data matrix must be taken into consideration: chemometric methods are designed to treat such multivariate data. Multivariate data analysis is used for a number of distinct purposes, and the aims can be divided into three main groups:
• data description (explorative data-structure modelling of any generic n-dimensional data matrix, PCA for example);
• regression and prediction (PLS);
• classification and prediction of class belonging for new samples (LDA, PLS-DA, and ECVA).
The aim of this PhD thesis was to verify the possibility of identifying and classifying plants or foodstuffs in different classes, based on the concerted variation in metabolite levels detected by NMR spectra, using multivariate data analysis as a tool to interpret the NMR information. It is important to underline that the results obtained are useful for pointing out the metabolic consequences of a specific modification on foodstuffs, avoiding the use of a targeted analysis for the different metabolites. The data analysis is performed by applying chemometric multivariate techniques to the dataset of NMR spectra acquired. The research work presented in this thesis is the result of a three-year PhD study.
This thesis reports the main results obtained from two main activities: A1) evaluation of a data pre-processing system to minimize unwanted sources of variation due to different instrumental set-ups, manual spectra processing, and sample preparation artefacts; A2) application of multivariate chemometric models in data analysis.
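
As an illustration of the first aim listed above (explorative data-structure modelling with PCA), the sketch below runs PCA on a hypothetical bucketed-spectra matrix; the data are random stand-ins, not NMR measurements from the thesis.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical data matrix: rows = samples, columns = bucketed NMR
# spectral intensities (chemical-shift regions).
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 500))

# Column-wise scaling, then PCA for exploratory structure modelling.
X_scaled = StandardScaler().fit_transform(X)
pca = PCA(n_components=2)
scores = pca.fit_transform(X_scaled)

print("explained variance ratio:", pca.explained_variance_ratio_)
# The score plot would be inspected for class clustering before moving
# to supervised models such as PLS-DA, LDA, or ECVA.
```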

Relevance: 30.00%

Abstract:

During the last decade, a multi-modal approach has been established in human experimental pain research for assessing pain thresholds and responses to various experimental pain modalities. Studies have concluded that differences in responses to pain stimuli are mainly related to variation between individuals rather than variation in response to different stimulus modalities. In a factor analysis of 272 consecutive volunteers (137 men and 135 women) who underwent tests with different experimental pain modalities, it was determined whether responses to different pain modalities represent distinct, uncorrelated individual dimensions of pain perception. Volunteers underwent single painful electrical stimulation, repeated painful electrical stimulation (temporal summation), a test of the reflex receptive field, pressure pain stimulation, heat pain stimulation, cold pain stimulation, and a cold pressor test (ice water test). Five distinct factors were found, representing responses to 5 distinct experimental pain modalities: pressure, heat, cold, electrical stimulation, and reflex receptive fields. Each of the factors explained approximately 8% to 35% of the observed variance, and the 5 factors cumulatively explained 94% of the variance. The correlation between the 5 factors was near zero (median ρ=0.00, range -0.03 to 0.05), with 95% confidence intervals for pairwise correlations between 2 factors excluding any relevant correlation. Results were essentially the same for analyses stratified according to gender and age. Responses to different experimental pain modalities represent different specific dimensions and should be assessed in combination in future pharmacological and clinical studies to represent the complexity of nociception and pain experience.
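
The core analysis, extracting factors from the test battery and checking that they are uncorrelated, can be sketched as follows; the response matrix here is random and purely illustrative, not the study data.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical responses: rows = 272 volunteers, columns = the seven
# pain tests (single/repeated electrical, reflex receptive field,
# pressure, heat, cold, cold pressor).
rng = np.random.default_rng(2)
X = rng.normal(size=(272, 7))

fa = FactorAnalysis(n_components=5)
scores = fa.fit_transform(X)

# Pairwise correlations between factor scores; values near zero would
# support the conclusion that the modalities are distinct dimensions.
print(np.round(np.corrcoef(scores, rowvar=False), 2))
```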

Relevance: 30.00%

Abstract:

Landscape structure and heterogeneity play a potentially important, but little understood, role in predator-prey interactions and behaviourally mediated habitat selection. For example, habitat complexity may either reduce or enhance the efficiency of a predator's efforts to search, track, capture, kill, and consume prey. For prey, structural heterogeneity may affect predator detection, avoidance and defense, escape tactics, and the ability to exploit refuges. This study investigates whether and how vegetation and topographic structure influence the spatial patterns and distribution of moose (Alces alces) mortality due to predation and malnutrition at the local and landscape levels in Isle Royale National Park. A total of 230 locations where wolves (Canis lupus) killed moose during the winters between 2002 and 2010, and 182 moose starvation death sites for the period 1996-2010, were selected from the extensive Isle Royale Wolf-Moose Project carcass database. A variety of LiDAR-derived metrics were generated and used in an algorithmic model (Random Forest) to identify, characterize, and classify three-dimensional variables significant to each of the mortality classes. Furthermore, spatial models were developed to predict and assess the likelihood of moose mortality at the landscape scale. This research found that the patterns of moose mortality by predation and malnutrition across the landscape are non-random, have a high degree of spatial variability, and that both mechanisms operate in contexts of comparable physiography and vegetation structure. Wolf winter hunting locations on Isle Royale are more likely to be a result of the prey's habitat selection, although wolves seem to prioritize the overall areas with higher moose density in the winter. Furthermore, the findings suggest that the distribution of moose mortality by predation is habitat-specific to moose, and not to wolves. In addition, moose sex, age, and health condition also affect mortality site selection, as revealed by subtle differences between sites in vegetation height, vegetation density, and topography. Vegetation density in particular appears to differentiate mortality locations for distinct classes of moose. The results also emphasize the significance of fine-scale landscape and habitat features when addressing predator-prey interactions; these finer-scale findings would easily be missed if analyses were limited to the broader landscape scale alone.
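
The Random Forest classification step described above follows a standard pattern. A minimal sketch with invented LiDAR-derived metrics standing in for the project's variables:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical site-level metrics (e.g. vegetation height, vegetation
# density, slope, elevation); labels: 0 = predation, 1 = malnutrition.
rng = np.random.default_rng(3)
X = rng.normal(size=(412, 6))          # 230 predation + 182 starvation sites
y = np.array([0] * 230 + [1] * 182)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

print("held-out accuracy:", rf.score(X_te, y_te))
# Feature importances indicate which structural metrics best separate
# the two mortality mechanisms.
print("importances:", np.round(rf.feature_importances_, 3))
```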

Relevance: 30.00%

Abstract:

Application of biogeochemical models to the study of marine ecosystems is pervasive, yet objective quantification of these models' performance is rare. Here, 12 lower trophic level models of varying complexity are objectively assessed in two distinct regions (equatorial Pacific and Arabian Sea). Each model was run within an identical one-dimensional physical framework. A consistent variational adjoint implementation assimilating chlorophyll-a, nitrate, export, and primary productivity was applied and the same metrics were used to assess model skill. Experiments were performed in which data were assimilated from each site individually and from both sites simultaneously. A cross-validation experiment was also conducted whereby data were assimilated from one site and the resulting optimal parameters were used to generate a simulation for the second site. When a single pelagic regime is considered, the simplest models fit the data as well as those with multiple phytoplankton functional groups. However, those with multiple phytoplankton functional groups produced lower misfits when the models are required to simulate both regimes using identical parameter values. The cross-validation experiments revealed that as long as only a few key biogeochemical parameters were optimized, the models with greater phytoplankton complexity were generally more portable. Furthermore, models with multiple zooplankton compartments did not necessarily outperform models with single zooplankton compartments, even when zooplankton biomass data are assimilated. Finally, even when different models produced similar least squares model-data misfits, they often did so via very different element flow pathways, highlighting the need for more comprehensive data sets that uniquely constrain these pathways.
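
In outline, the variational parameter optimization described above reduces to minimizing a normalized least-squares model-data misfit. A toy one-parameter sketch with synthetic observations (not the study's models or data):

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic "observations" of a decaying tracer, with assumed error.
t = np.linspace(0.0, 1.0, 50)
rng = np.random.default_rng(4)
obs = np.exp(-2.0 * t) + rng.normal(0.0, 0.02, t.size)
sigma = 0.02  # assumed observation uncertainty

def misfit(params):
    """Normalized least-squares model-data misfit for decay rate k."""
    k = params[0]
    return np.sum(((np.exp(-k * t) - obs) / sigma) ** 2)

result = minimize(misfit, x0=[1.0], method="Nelder-Mead")
print("optimized decay rate:", result.x[0])
# Cross-validation, as in the paper, amounts to evaluating this misfit
# at a second site using parameters optimized at the first.
```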