865 results for multi-factor models
Abstract:
Activities like the Coupled Model Intercomparison Project (CMIP) have revolutionized climate modelling in terms of our ability to compare models and to process information about climate projections and their uncertainties. The evaluation of models against observations is now considered a key component of multi-model studies. While there are a number of outstanding scientific issues surrounding model evaluation, notably the open question of how to link model performance to future projections, here we highlight a specific but growing problem in model evaluation: that of uncertainties in the observational data that are used to evaluate the models. We highlight the problem using an example from studies of the South Asian Monsoon, but we believe the problem is a generic one which arises in many different areas of climate model evaluation and which requires attention from the community.
Abstract:
It is often assumed that humans generate a 3D reconstruction of the environment, either in egocentric or world-based coordinates, but the steps involved are unknown. Here, we propose two reconstruction-based models, evaluated using data from two tasks in immersive virtual reality. We model the observer’s prediction of landmark location based on standard photogrammetric methods and then combine location predictions to compute likelihood maps of navigation behaviour. In one model, each scene point is treated independently in the reconstruction; in the other, the pertinent variable is the spatial relationship between pairs of points. Participants viewed a simple environment from one location, were transported (virtually) to another part of the scene and were asked to navigate back. Error distributions varied substantially with changes in scene layout; we compared these directly with the likelihood maps to quantify the success of the models. We also measured error distributions when participants manipulated the location of a landmark to match the preceding interval, providing a direct test of the landmark-location stage of the navigation models. Models such as this, which start with scenes and end with a probabilistic prediction of behaviour, are likely to be increasingly useful for understanding 3D vision.
Abstract:
Fresh water hosing simulations, in which a fresh water flux is imposed in the North Atlantic to force fluctuations of the Atlantic Meridional Overturning Circulation (AMOC), have been routinely performed, first to study the climatic signature of different states of this circulation, then, under present or future conditions, to investigate the potential impact of a partial melting of the Greenland ice sheet. The most compelling examples of climatic changes potentially related to abrupt AMOC variations, however, are found in high-resolution palaeo-records from around the globe for the last glacial period. To study those more specifically, more and more fresh water hosing experiments have been performed under glacial conditions in recent years. Here we compare an ensemble of 11 such simulations run with 6 different climate models. Each simulation follows a slightly different design, but the designs are sufficiently close to be compared. They all study the impact of a fresh water hosing imposed in the extra-tropical North Atlantic. Common features in the model responses to hosing are the cooling over the North Atlantic, extending along the sub-tropical gyre into the tropical North Atlantic, the southward shift of the Atlantic ITCZ and the weakening of the African and Indian monsoons. On the other hand, the expression of the bipolar see-saw, i.e., warming in the Southern Hemisphere, differs from model to model, with some restricting it to the South Atlantic and specific regions of the Southern Ocean while others simulate a widespread Southern Ocean warming. The relationships between the features common to most models, i.e., climate changes over the north and tropical Atlantic and the African and Asian monsoon regions, are further quantified.
These suggest a tight correlation between the temperature and precipitation changes over the extra-tropical North Atlantic, but different pathways for the teleconnections between the AMOC/North Atlantic region and the African and Indian monsoon regions.
Abstract:
An extensive off-line evaluation of the Noah/Single Layer Urban Canopy Model (Noah/SLUCM) urban land-surface model is presented using data from 15 sites to assess (1) the ability of the scheme to reproduce the surface energy balance observed in a range of urban environments, including seasonal changes, and (2) the impact of increasing complexity of input parameter information. Model performance is found to be most dependent on representation of vegetated surface area cover; refinement of other parameter values leads to smaller improvements. Model biases in net all-wave radiation and trade-offs between turbulent heat fluxes are highlighted using an optimization algorithm. Here we use the Urban Zones to characterize Energy partitioning (UZE) as the basis to assign default SLUCM parameter values. A methodology (FRAISE) to assign sites (or areas) to one of these categories based on surface characteristics is evaluated. Using three urban sites from the Basel Urban Boundary Layer Experiment (BUBBLE) dataset, an independent evaluation of the model performance with the parameter values representative of each class is performed. The scheme copes well with both seasonal changes in the surface characteristics and intra-urban heterogeneities in energy flux partitioning, with RMSE performance comparable to similar state-of-the-art models for all fluxes, sites and seasons. The potential of the methodology for high-resolution atmospheric modelling application using the Weather Research and Forecasting (WRF) model is highlighted. This analysis supports the recommendations that (1) three classes are appropriate to characterize the urban environment, and (2) that the parameter values identified should be adopted as default values in WRF.
Abstract:
AIMS/HYPOTHESIS: The PPARGC1A gene coactivates multiple nuclear transcription factors involved in cellular energy metabolism and vascular stasis. In the present study, we genotyped 35 tagging polymorphisms to capture all common PPARGC1A nucleotide sequence variations and tested for association with metabolic and cardiovascular traits in 2,101 Danish and Estonian boys and girls from the European Youth Heart Study, a multicentre school-based cross-sectional cohort study. METHODS: Fasting plasma glucose concentrations, anthropometric variables and blood pressure were measured. Habitual physical activity and aerobic fitness were objectively assessed using uniaxial accelerometry and a maximal aerobic exercise stress test on a bicycle ergometer, respectively. RESULTS: In adjusted models, nominally significant associations were observed for BMI (rs10018239, p = 0.039), waist circumference (rs7656250, p = 0.012; rs8192678 [Gly482Ser], p = 0.015; rs3755863, p = 0.02; rs10018239, beta = -0.01 cm per minor allele copy, p = 0.043), systolic blood pressure (rs2970869, p = 0.018) and fasting glucose concentrations (rs11724368, p = 0.045). Stronger associations were observed for aerobic fitness (rs7656250, p = 0.005; rs13117172, p = 0.008) and fasting glucose concentrations (rs7657071, p = 0.002). None remained significant after correcting for the number of statistical comparisons. We proceeded by testing for gene x physical activity interactions for the polymorphisms that showed nominal evidence of association in the main effect models. None of these tests was statistically significant. CONCLUSIONS/INTERPRETATION: Variants at PPARGC1A may influence several metabolic traits in this European paediatric cohort. However, variation at PPARGC1A is unlikely to have a major impact on cardiovascular or metabolic health in these children.
Abstract:
HD (Huntington's disease) is a late onset heritable neurodegenerative disorder that is characterized by neuronal dysfunction and death, particularly in the cerebral cortex and medium spiny neurons of the striatum. This is followed by progressive chorea, dementia and emotional dysfunction, eventually resulting in death. HD is caused by an expanded CAG repeat in the first exon of the HD gene that results in an abnormally elongated polyQ (polyglutamine) tract in its protein product, Htt (Huntingtin). Wild-type Htt is largely cytoplasmic; however, in HD, proteolytic N-terminal fragments of Htt form insoluble deposits in both the cytoplasm and nucleus, provoking the idea that mutHtt (mutant Htt) causes transcriptional dysfunction. While a number of specific transcription factors and co-factors have been proposed as mediators of mutHtt toxicity, the causal relationship between these Htt/transcription factor interactions and HD pathology remains unknown. Previous work has highlighted REST [RE1 (repressor element 1)-silencing transcription factor] as one such transcription factor. REST is a master regulator of neuronal genes, repressing their expression. Many of its direct target genes are known or suspected to have a role in HD pathogenesis, including BDNF (brain-derived neurotrophic factor). Recent evidence has also shown that REST regulates transcription of regulatory miRNAs (microRNAs), many of which are known to regulate neuronal gene expression and are dysregulated in HD. Thus repression of miRNAs constitutes a second, indirect mechanism by which REST can alter the neuronal transcriptome in HD. We will describe the evidence that disruption to the REST regulon brought about by a loss of interaction between REST and mutHtt may be a key contributory factor in the widespread dysregulation of gene expression in HD.
Abstract:
Methods of improving the coverage of Box–Jenkins prediction intervals for linear autoregressive models are explored. These methods use bootstrap techniques to allow for parameter estimation uncertainty and to reduce the small-sample bias in the estimator of the models’ parameters. In addition, we also consider a method of bias-correcting the non-linear functions of the parameter estimates that are used to generate conditional multi-step predictions.
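The bootstrap approach described above can be sketched for the simplest case, an AR(1) model: refit the parameter on each resampled series so that estimation uncertainty propagates into the interval. This is a minimal illustration under stated assumptions, not the authors' exact procedure; the function names and the toy series are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def fit_ar1(y):
    # OLS estimate of phi in y_t = phi * y_{t-1} + e_t (mean-zero series assumed)
    return np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])

def bootstrap_pi(y, h=1, B=299, alpha=0.05):
    # Bootstrap prediction interval for the h-step-ahead forecast of an AR(1),
    # allowing for parameter-estimation uncertainty by refitting on each
    # resampled series (unlike standard Box-Jenkins intervals, which ignore it)
    phi = fit_ar1(y)
    resid = y[1:] - phi * y[:-1]
    resid = resid - resid.mean()
    fcasts = np.empty(B)
    for b in range(B):
        # rebuild a bootstrap series from resampled residuals, then refit
        e = rng.choice(resid, size=len(y))
        yb = np.empty(len(y))
        yb[0] = y[0]
        for t in range(1, len(y)):
            yb[t] = phi * yb[t - 1] + e[t]
        phi_b = fit_ar1(yb)
        # forecast h steps ahead with the refitted parameter and fresh shocks
        x = y[-1]
        for _ in range(h):
            x = phi_b * x + rng.choice(resid)
        fcasts[b] = x
    return np.quantile(fcasts, [alpha / 2, 1 - alpha / 2])

# toy AR(1) series with phi = 0.7
y = np.empty(200)
y[0] = 0.0
for t in range(1, 200):
    y[t] = 0.7 * y[t - 1] + rng.normal()
lo, hi = bootstrap_pi(y)
```

The quantiles of the simulated forecasts then serve directly as the interval endpoints, in place of the Gaussian-theory interval.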
Abstract:
We investigate the role of the ocean feedback on the climate response to insolation forcing during the mid-Holocene (6,000 year BP) using results from seven coupled ocean–atmosphere general circulation models. We examine how the dipole in late summer sea-surface temperature (SST) anomalies in the tropical Atlantic increases the length of the African monsoon, how this dipole structure is created and maintained, and how the late summer SST warming in the northwest Indian Ocean affects the monsoon retreat in this sector. Similar mechanisms are found in all of the models, including a strong wind–evaporation feedback and changes in the mixed layer depth that enhance the insolation forcing, as well as increased Ekman transport in the Atlantic that sharpens the Atlantic dipole pattern. We also consider changes in interannual variability over West Africa and the Indian Ocean. The teleconnection between variations in SST and Sahelian precipitation favours a larger impact of the Atlantic dipole mode in this region. In the Indian Ocean, the strengthening of the Indian dipole structure in autumn has a damping effect on the Indian dipole mode at the interannual time scale.
Abstract:
Predictability of the western North Pacific (WNP) summer climate associated with different El Niño–Southern Oscillation (ENSO) phases is investigated in this study based on the 1-month lead retrospective forecasts of five state-of-the-art coupled models from ENSEMBLES. During the period from 1960 to 2005, the models capture the WNP summer climate anomalies well during most years in the different ENSO phases, except the La Niña decaying summers. In the El Niño developing, El Niño decaying and La Niña developing summers, the prediction skills are high for the WNP summer monsoon index (WNPMI), with prediction correlations larger than 0.7. The high prediction skills of the lower-tropospheric circulation during these phases are found mainly over the tropical western Pacific Ocean, South China Sea and subtropical WNP. These good predictions correspond to the close teleconnection with ENSO and the high prediction skills for tropical SSTs. By contrast, for the La Niña decaying summers, the prediction skills are considerably lower, with the prediction correlation for the WNPMI near zero and low prediction skills around the Philippines and the subtropical WNP. These poor predictions relate to the weak summer anomalies of the WNPMI during the La Niña decaying years and to the absence, in observations, of significant connections between the WNP lower-tropospheric circulation anomalies and the SSTs over the tropical central and eastern Pacific Ocean. However, the models tend to predict an apparent anomalous cyclone over the WNP during the La Niña decaying years, indicating that the predicted circulation response over the WNP is linear with respect to that of the El Niño decaying years, which differs from observations. In addition, the models show considerable capability in describing the WNP summer anomalies during the ENSO neutral summers. These anomalies are related to the positive feedback between the WNP lower-tropospheric circulation and the local SSTs.
The models can capture this positive feedback but with some uncertainties from different ensemble members during the ENSO neutral summers.
Abstract:
The overall global-scale consequences of climate change are dependent on the distribution of impacts across regions, and there are multiple dimensions to these impacts. This paper presents a global assessment of the potential impacts of climate change across several sectors, using a harmonised set of impacts models forced by the same climate and socio-economic scenarios. Indicators of impact cover the water resources, river and coastal flooding, agriculture, natural environment and built environment sectors. Impacts are assessed under four SRES socio-economic and emissions scenarios, and the effects of uncertainty in the projected pattern of climate change are incorporated by constructing climate scenarios from 21 global climate models. There is considerable uncertainty in projected regional impacts across the climate model scenarios, and coherent assessments of impacts across sectors and regions therefore must be based on each model pattern separately; using ensemble means, for example, reduces variability between sectors and indicators. An example narrative assessment is presented in the paper. Under this narrative, approximately 1 billion people would be exposed to increased water resources stress, around 450 million people would be exposed to increased river flooding, and 1.3 million extra people would be flooded in coastal floods each year. Crop productivity would fall in most regions, and residential energy demands would be reduced in most regions because reduced heating demands would offset higher cooling demands. Most of the global impacts on water stress and flooding would be in Asia, but the proportional impacts in the Middle East and North Africa region would be larger. By 2050 there are emerging differences in impact between different emissions and socio-economic scenarios even though the changes in temperature and sea level are similar, and these differences are greater in 2080.
However, for all the indicators, the range in projected impacts between different climate models is considerably greater than the range between emissions and socio-economic scenarios.
Abstract:
Many of the next generation of global climate models will include aerosol schemes which explicitly simulate the microphysical processes that determine the particle size distribution. These models enable aerosol optical properties and cloud condensation nuclei (CCN) concentrations to be determined by fundamental aerosol processes, which should lead to a more physically based simulation of aerosol direct and indirect radiative forcings. This study examines the global variation in particle size distribution simulated by 12 global aerosol microphysics models to quantify model diversity and to identify any common biases against observations. Evaluation against size distribution measurements from a new European network of aerosol supersites shows that the mean model agrees quite well with the observations at many sites on the annual mean, but there are some seasonal biases common to many sites. In particular, at many of these European sites, the accumulation mode number concentration is biased low during winter and Aitken mode concentrations tend to be overestimated in winter and underestimated in summer. At high northern latitudes, the models strongly underpredict Aitken and accumulation particle concentrations compared to the measurements, consistent with previous studies that have highlighted the poor performance of global aerosol models in the Arctic. In the marine boundary layer, the models capture the observed meridional variation in the size distribution, which is dominated by the Aitken mode at high latitudes, with an increasing concentration of accumulation particles with decreasing latitude. Considering vertical profiles, the models reproduce the observed peak in total particle concentrations in the upper troposphere due to new particle formation, although modelled peak concentrations tend to be biased high over Europe. 
Overall, the multi-model-mean data set simulates the global variation of the particle size distribution with a good degree of skill, suggesting that most of the individual global aerosol microphysics models are performing well, although the large model diversity indicates that some models are in poor agreement with the observations. Further work is required to better constrain size-resolved primary and secondary particle number sources, and an improved understanding of nucleation and growth (e.g. the role of nitrate and secondary organics) will improve the fidelity of simulated particle size distributions.
Abstract:
Accurate estimates of how soil water stress affects plant transpiration are crucial for reliable land surface model (LSM) predictions. Current LSMs generally use a water stress factor, β, dependent on soil moisture content, θ, that ranges linearly between β = 1 for unstressed vegetation and β = 0 when wilting point is reached. This paper explores the feasibility of replacing the current approach with equations that use soil water potential as their independent variable, or with a set of equations that involve hydraulic and chemical signaling, thereby ensuring feedbacks between the entire soil–root–xylem–leaf system. A comparison with the original linear θ-based water stress parameterization, and with its improved curvilinear version, was conducted. Assessment of model suitability was focused on the models' ability to simulate the correct (as derived from experimental data) curve shape of relative transpiration versus fraction of transpirable soil water. We used model sensitivity analyses under progressive soil drying conditions, employing two commonly used approaches to calculate water retention and hydraulic conductivity curves. Furthermore, for each of these hydraulic parameterizations we used two different parameter sets for 3 soil texture types, a total of 12 soil hydraulic permutations. Results showed that the resulting transpiration reduction functions (TRFs) varied considerably among the models. The fact that soil hydraulic conductivity played a major role in the model that involved hydraulic and chemical signaling led to unrealistic values of β, and hence of the TRF, for many soil hydraulic parameter sets. However, this model is much better equipped to simulate the behavior of different plant species. Based on these findings, we recommend implementation of this approach into LSMs only if great care is taken with the choice of soil hydraulic parameters.
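The linear θ-based stress factor described above can be written down directly. A minimal sketch follows; the wilting-point and critical moisture values are hypothetical, and the curvilinear variant's power-law exponent is illustrative rather than taken from the paper.

```python
def beta_linear(theta, theta_wilt, theta_crit):
    """Linear soil water stress factor: 0 at/below wilting point,
    1 at/above the critical (unstressed) moisture content."""
    if theta <= theta_wilt:
        return 0.0
    if theta >= theta_crit:
        return 1.0
    return (theta - theta_wilt) / (theta_crit - theta_wilt)

def beta_curvilinear(theta, theta_wilt, theta_crit, p=0.5):
    """Curvilinear variant: a simple power-law reshaping of the linear
    factor (exponent p is a hypothetical choice for illustration)."""
    return beta_linear(theta, theta_wilt, theta_crit) ** p

# e.g. halfway between a wilting point of 0.1 and a critical content of 0.3,
# the linear factor is ~0.5 and the curvilinear variant relaxes the stress
b_lin = beta_linear(0.2, 0.1, 0.3)
b_cur = beta_curvilinear(0.2, 0.1, 0.3)
```

The paper's alternatives replace θ with soil water potential or with a signaling model; the same scalar-multiplier interface on transpiration is what an LSM would consume in each case.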
Abstract:
The approach of reaggregation involves the regeneration and self-renewal of histotypical 3D spheres from isolated tissue kept in suspension culture. Reaggregated spheres can be used as tumour, genetic, biohybrid and neurosphere models. In addition, the functional superiority of 3D aggregates over conventional 2D cultures has motivated the use of neurospheres for brain engineering of CNS diseases. Thus, 3D aggregate cultures have created enormous interest in the mechanisms that regulate the formation of multicellular aggregates in vitro. Here we analyzed mechanisms guiding the development of 3D neurosphere cultures. Adult neural stem cells can be cultured as self-adherent clusters, called neurospheres. Neurospheres are characterised as heterogeneous clusters containing unequal stem cell sub-types. Tumour necrosis factor-alpha (TNF-alpha) is one of the crucial inflammatory cytokines with multiple actions on several cell types. TNF-alpha strongly activates the canonical Nuclear Factor Kappa-B (NF-kappaB) pathway. In order to investigate further functions of TNF in neural stem cells (NSCs), we tested the hypothesis that TNF is able to modulate the motility and/or migratory behaviour of SVZ-derived adult neural stem cells. We observed significantly faster sphere formation in TNF-treated cultures than in untreated controls. The very fast aggregation of isolated NSCs (< 2 h) is a commonly observed phenomenon, though the mechanisms of 3D neurosphere formation remain largely unclear. Here we demonstrate, for the first time, increased aggregation and enhanced motility of isolated NSCs in response to the TNF stimulus. Moreover, this phenomenon is largely dependent on the activated transcription factor NF-kappaB. Both the pharmacological blockade of the NF-kappaB pathway by pyrrolidine dithiocarbamate (PDTC) or Bay11-7082 and genetic blockade by expression of a transdominant-negative super-repressor IkappaB-AA1 led to decreased aggregation.
A benchmark-driven modelling approach for evaluating deployment choices on a multi-core architecture
Abstract:
The complexity of current and emerging architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and, in some cases, floating point units (as in the AMD Bulldozer), meaning that access times depend on the mapping of application tasks and on each core's location within the system. Heterogeneity further increases with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend for shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and non-standard task-to-core mappings can dramatically alter performance. Discovering this, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, with interpolation between results as necessary.
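The benchmark-driven idea (measure timings at a few problem sizes, then interpolate to predict a given deployment) can be sketched as follows. The benchmark numbers, function names and cost split are hypothetical, not measurements or code from the Cray XE6 study.

```python
import bisect

# hypothetical per-iteration compute times (s) measured at a few
# local (per-task) problem sizes on the target machine
bench = {1_000: 0.8e-3, 10_000: 7.5e-3, 100_000: 80e-3}

def interp_time(n):
    """Piecewise-linear interpolation between measured benchmark sizes,
    clamped at the smallest/largest measured size."""
    sizes = sorted(bench)
    if n <= sizes[0]:
        return bench[sizes[0]]
    if n >= sizes[-1]:
        return bench[sizes[-1]]
    i = bisect.bisect_left(sizes, n)
    lo, hi = sizes[i - 1], sizes[i]
    w = (n - lo) / (hi - lo)
    return (1 - w) * bench[lo] + w * bench[hi]

def model_runtime(nx, ny, ntasks, iters, halo_time):
    """Predicted total time for a deployment: interpolated compute cost
    for the local subdomain plus a measured halo-exchange cost, per
    iteration, mirroring the paper's split into array updates and
    nearest-neighbour exchanges."""
    local = (nx * ny) // ntasks
    return iters * (interp_time(local) + halo_time)
```

Comparing `model_runtime` across candidate decompositions and task-to-core mappings (each with its own benchmarked `halo_time`) is the kind of what-if exploration the model supports without trial-and-error runs.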
Abstract:
Multi-model ensembles are frequently used to assess understanding of the response of ozone and methane lifetime to changes in emissions of ozone precursors such as NOx, VOCs (volatile organic compounds) and CO. When these ozone changes are used to calculate radiative forcing (RF) (and climate metrics such as the global warming potential (GWP) and global temperature-change potential (GTP)), there is a methodological choice, determined partly by the available computing resources, as to whether the mean ozone (and methane) concentration changes are input to the radiation code, or whether each model's ozone and methane changes are used as input, with the average RF computed from the individual model RFs. We use data from the Task Force on Hemispheric Transport of Air Pollution source–receptor global chemical transport model ensemble to assess the impact of this choice for emission changes in four regions (East Asia, Europe, North America and South Asia). We conclude that using the multi-model mean ozone and methane responses is accurate for calculating the mean RF, with differences up to 0.6% for CO, 0.7% for VOCs and 2% for NOx. Differences of up to 60% for NOx, 7% for VOCs and 3% for CO are introduced into the 20 year GWP. The differences for the 20 year GTP are smaller than for the GWP for NOx, and similar for the other species. However, estimates of the standard deviation calculated from the ensemble-mean input fields (where the standard deviation at each point on the model grid is added to or subtracted from the mean field) are almost always substantially larger in RF, GWP and GTP metrics than the true standard deviation, and can be larger than the model range for short-lived ozone RF, and for the 20 and 100 year GWP and 100 year GTP. The order of averaging has most impact on the metrics for NOx, as the net values of these quantities are residuals of sums of terms of opposing sign.
For example, the standard deviation for the 20 year GWP is 2–3 times larger using the ensemble-mean fields than using the individual models to calculate the RF. The source of this effect lies largely in the construction of the input ozone fields, which overestimate the true ensemble spread. Hence, while the averages of multi-model fields are normally appropriate for calculating mean RF, GWP and GTP, they are not a reliable method for calculating the uncertainty in these fields, and in general overestimate that uncertainty.
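The averaging-order effect described above arises whenever a non-linear response is applied to ensemble statistics rather than to each member. A toy illustration follows; the response function and the per-model numbers are invented purely to show the mechanism, not taken from the HTAP ensemble.

```python
import numpy as np

def rf(delta_o3):
    # toy concave radiative-forcing response to an ozone change
    # (illustrative only; not the radiation code used in the study)
    return np.sign(delta_o3) * np.sqrt(np.abs(delta_o3))

models = np.array([1.0, 4.0, 9.0])  # hypothetical per-model ozone changes

rf_of_mean = rf(models.mean())   # RF computed from the ensemble-mean field
mean_of_rf = rf(models).mean()   # mean of the per-model RFs

# Spread estimated by feeding mean +/- std fields through the response,
# versus the true spread of the per-model RFs
spread_from_mean_fields = (rf(models.mean() + models.std())
                           - rf(models.mean() - models.std())) / 2
true_spread = rf(models).std()
```

For a concave response, `rf_of_mean` exceeds `mean_of_rf` (Jensen's inequality), and here the spread derived from the mean-plus/minus-std fields also exceeds the true per-model spread, mirroring the paper's finding that ensemble-mean input fields give a usable mean but an inflated uncertainty.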