953 results for model complexity
Abstract:
The Wetland and Wetland CH4 Intercomparison of Models Project (WETCHIMP) was created to evaluate our present ability to simulate large-scale wetland characteristics and corresponding methane (CH4) emissions. A multi-model comparison is essential to evaluate the key uncertainties in the mechanisms and parameters leading to methane emissions. Ten modelling groups joined WETCHIMP to run eight global and two regional models with a common experimental protocol using the same climate and atmospheric carbon dioxide (CO2) forcing datasets. We reported the main conclusions from the intercomparison effort in a companion paper (Melton et al., 2013). Here we provide technical details for the six experiments, which included an equilibrium, a transient, and an optimized run plus three sensitivity experiments (temperature, precipitation, and atmospheric CO2 concentration). The diversity of approaches used by the models is summarized through a series of conceptual figures, and is used to evaluate the wide range of wetland extent and CH4 fluxes predicted by the models in the equilibrium run. We discuss relationships among the various approaches and patterns of consistency among these model predictions. Within this group of models, there are three broad classes of methods used to estimate wetland extent: prescribed based on wetland distribution maps, prognostic relationships between hydrological states based on satellite observations, and explicit hydrological mass balances. A larger variety of approaches was used to estimate the net CH4 fluxes from wetland systems. Even though modelling of wetland extent and CH4 emissions has progressed significantly over recent decades, large uncertainties still exist when estimating CH4 emissions: there is little consensus on model structure or complexity due to knowledge gaps, different aims of the models, and the range of temporal and spatial resolutions of the models.
Abstract:
Radiocarbon production, solar activity, total solar irradiance (TSI) and solar-induced climate change are reconstructed for the Holocene (10 to 0 kyr BP), and TSI is predicted for the next centuries. The IntCal09/SHCal04 radiocarbon and ice core CO2 records, reconstructions of the geomagnetic dipole, and instrumental data of solar activity are applied in the Bern3D-LPJ, a fully featured Earth system model of intermediate complexity including a 3-D dynamic ocean, ocean sediments, and a dynamic vegetation model, and in formulations linking radiocarbon production, the solar modulation potential, and TSI. Uncertainties are assessed using Monte Carlo simulations and bounding scenarios. Transient climate simulations span the past 21 thousand years, thereby considering the time lags and uncertainties associated with the last glacial termination. Our carbon-cycle-based modern estimate of radiocarbon production of 1.7 atoms cm−2 s−1 is lower than previously reported for the cosmogenic nuclide production model by Masarik and Beer (2009) and more in line with Kovaltsov et al. (2012). In contrast to earlier studies, periods of high solar activity were quite common not only in recent millennia, but throughout the Holocene. Notable deviations compared to earlier reconstructions are also found on decadal to centennial timescales. We show that earlier Holocene reconstructions, not accounting for the interhemispheric gradients in radiocarbon, are biased low. Solar activity is higher than the modern average (650 MeV) during 28% of the time, but the absolute values remain weakly constrained due to uncertainties in the normalisation of the solar modulation to instrumental data. A recently published solar activity–TSI relationship yields small changes in Holocene TSI of the order of 1 W m−2, with a Maunder Minimum irradiance reduction of 0.85 ± 0.16 W m−2. Related solar-induced variations in global mean surface air temperature are simulated to be within 0.1 K. Autoregressive modelling suggests a declining trend of solar activity in the 21st century towards average Holocene conditions.
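The 21st-century outlook in the last sentence rests on standard autoregressive extrapolation. A minimal sketch of that technique, illustrative only, with a synthetic stand-in for the reconstructed solar-modulation series (not the authors' data or code):

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

# Synthetic stand-in for a reconstructed solar modulation series (MeV);
# the real input would be the Holocene reconstruction described above.
rng = np.random.default_rng(0)
phi = 650 + 0.05 * rng.normal(0, 100, size=1000).cumsum()

# Fit an autoregressive model and extrapolate beyond the end of the
# record, analogous to projecting solar activity into the 21st century.
ar_fit = AutoReg(phi, lags=10).fit()
forecast = ar_fit.predict(start=len(phi), end=len(phi) + 30)
```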
Abstract:
Lyme disease Borrelia can infect humans and animals for months to years, despite the presence of an active host immune response. The vls antigenic variation system, which expresses the surface-exposed lipoprotein VlsE, plays a major role in B. burgdorferi immune evasion. Gene conversion between vls silent cassettes and the vlsE expression site occurs at high frequency during mammalian infection, resulting in sequence variation in the VlsE product. In this study, we examined vlsE sequence variation in B. burgdorferi B31 during mouse infection by analyzing 1,399 clones isolated from bladder, heart, joint, ear, and skin tissues of mice infected for 4 to 365 days. The median number of codon changes increased progressively in C3H/HeN mice from 4 to 28 days post infection, and no clones retained the parental vlsE sequence at 28 days. In contrast, the decrease in the number of clones with the parental vlsE sequence and the increase in the number of sequence changes occurred more gradually in severe combined immunodeficiency (SCID) mice. Clones containing a stop codon were isolated, indicating that continuous expression of full-length VlsE is not required for survival in vivo; also, these clones continued to undergo vlsE recombination. Analysis of clones with apparent single recombination events indicated that recombinations into vlsE are nonselective with regard to the silent cassette utilized, as well as the length and location of the recombination event. Sequence changes as small as one base pair were common. Fifteen percent of recovered vlsE variants contained "template-independent" sequence changes, which clustered in the variable regions of vlsE. We hypothesize that the increased frequency and complexity of vlsE sequence changes observed in clones recovered from immunocompetent mice (as compared with SCID mice) is due to rapid clearance of relatively invariant clones by variable region-specific anti-VlsE antibody responses.
Abstract:
Intensity modulated radiation therapy (IMRT) is a technique that delivers a highly conformal dose distribution to a target volume while attempting to maximally spare the surrounding normal tissues. IMRT is a common treatment modality used for treating head and neck (H&N) cancers, and the presence of many critical structures in this region requires accurate treatment delivery. The Radiological Physics Center (RPC) acts as both a remote and on-site quality assurance agency that credentials institutions participating in clinical trials. To date, about 30% of all IMRT participants have failed the RPC's remote audit using the IMRT H&N phantom. The purpose of this project is to evaluate possible causes of H&N IMRT delivery errors observed by the RPC, specifically IMRT treatment plan complexity and the use of improper dosimetry data from machines that were thought to be matched but in reality were not. Eight H&N IMRT plans with a range of complexity defined by total MU (1460-3466), number of segments (54-225), and modulation complexity scores (MCS) (0.181-0.609) were created in Pinnacle v.8m. These plans were delivered to the RPC's H&N phantom on a single Varian Clinac. One of the IMRT plans (1851 MU, 88 segments, and MCS=0.469) was equivalent to the median H&N plan from 130 previous RPC H&N phantom irradiations. This median IMRT plan was also delivered on four matched Varian Clinac machines and the dose distribution calculated using a different 6MV beam model. Radiochromic film and TLD within the phantom were used to analyze the dose profiles and absolute doses, respectively. The measured and calculated doses were compared to evaluate the dosimetric accuracy. All deliveries met the RPC acceptance criteria of ±7% absolute dose difference and 4 mm distance-to-agreement (DTA). Additionally, gamma index analysis was performed for all deliveries using ±7%/4 mm and ±5%/3 mm criteria. Increasing the treatment plan complexity by varying the MU, number of segments, or the MCS resulted in no clear trend toward an increase in dosimetric error as determined by the absolute dose difference, DTA, or gamma index. Varying the delivery machines as well as the beam model (a Clinac 6EX 6MV beam model vs. a Clinac 21EX 6MV model) also did not show any clear trend toward increased dosimetric error using the same criteria indicated above.
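The gamma-index analysis mentioned above combines the absolute-dose and DTA criteria into a single pass/fail metric (Low et al., 1998). A minimal 1-D sketch of that metric, assuming measured and calculated profiles sampled on a common grid (not the RPC's actual analysis software):

```python
import numpy as np

def gamma_index_1d(dose_ref, dose_eval, dx, dose_tol=0.07, dta_tol=4.0):
    """1-D gamma analysis after Low et al. (1998).

    dose_ref, dose_eval -- reference and evaluated dose profiles (same grid)
    dx                  -- grid spacing in mm
    dose_tol            -- dose criterion as a fraction of max reference dose
    dta_tol             -- distance-to-agreement criterion in mm
    """
    x = np.arange(len(dose_ref)) * dx
    dd = dose_tol * dose_ref.max()               # absolute dose tolerance
    gamma = np.empty(len(dose_ref))
    for i, d_ref in enumerate(dose_ref):
        dist2 = ((x - x[i]) / dta_tol) ** 2      # spatial term
        dose2 = ((dose_eval - d_ref) / dd) ** 2  # dose-difference term
        gamma[i] = np.sqrt((dist2 + dose2).min())
    return gamma                                 # gamma <= 1 passes

# e.g. pass rate for a 7%/4 mm criterion:
# pass_rate = (gamma_index_1d(measured, calculated, dx=1.0) <= 1.0).mean()
```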
Abstract:
The sensitivity of the neodymium isotopic composition (ϵNd) to tectonic rearrangements of seaways is investigated using an Earth System Model of Intermediate Complexity. The shoaling and closure of the Central American Seaway (CAS) is simulated, as well as the opening and deepening of Drake Passage (DP). Multiple series of equilibrium simulations with various intermediate depths are performed for both seaways, providing insight into ϵNd and circulation responses to progressive throughflow evolutions. Furthermore, the sensitivity of these responses to the Atlantic Meridional Overturning Circulation (AMOC) and the neodymium boundary source is examined. Modeled ϵNd changes are compared to sediment core and ferromanganese (Fe-Mn) crust data. The model results indicate that the North Atlantic ϵNd response to the CAS shoaling is highly dependent on the AMOC state, i.e., on the AMOC strength before the shoaling to shallow depths (preclosure). Three scenarios based on different AMOC forcings are discussed; model–data agreement favors a shallow preclosure (Miocene) AMOC (∼6 Sv). The DP opening causes a rather complex circulation response, resulting in an initial South Atlantic ϵNd decrease preceding a larger increase. This feature may be specific to our model setup, which induces a vigorous CAS throughflow that is strongly anticorrelated with the DP throughflow. In freshwater experiments following the DP deepening, ODP Site 1090 is mainly influenced by AMOC and DP throughflow changes, while ODP Site 689 is more strongly influenced by Southern Ocean Meridional Overturning Circulation and CAS throughflow changes. The boundary source uncertainty is largest for shallow seaways and at shallow sites.
Abstract:
The evolution of the Atlantic Meridional Overturning Circulation (MOC) in 30 models of varying complexity is examined under four distinct Representative Concentration Pathways. The models include 25 Atmosphere-Ocean General Circulation Models (AOGCMs) or Earth System Models (ESMs) that submitted simulations in support of the 5th phase of the Coupled Model Intercomparison Project (CMIP5) and 5 Earth System Models of Intermediate Complexity (EMICs). While none of the models incorporated the additional effects of ice sheet melting, they all projected very similar behaviour during the 21st century. Over this period the strength of the MOC was reduced by a best estimate of 22% (18%–25%; 5%–95% confidence limits) for RCP2.6, 26% (23%–30%) for RCP4.5, 29% (23%–35%) for RCP6.0 and 40% (36%–44%) for RCP8.5. Two of the models eventually exhibited a slow shutdown of the MOC under RCP8.5, although no model showed an abrupt change of the MOC. Through analysis of the freshwater flux across 30°–32°S into the Atlantic, it was found that 40% of the CMIP5 models were in a bistable regime of the MOC for the duration of their RCP integrations. The results support previous assessments that it is very unlikely that the MOC will undergo an abrupt change to an off state as a consequence of global warming.
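The bistability diagnostic referred to here is usually the overturning component of the freshwater transport across the section. One standard formulation, stated here as an assumption since the abstract does not give the exact definition, is

\[ F_{\mathrm{ov}} = -\frac{1}{S_0} \int_{-H}^{0} \bar{v}^{*}(z)\, \langle S \rangle(z)\, \mathrm{d}z , \]

where \( \bar{v}^{*} \) is the zonally integrated baroclinic meridional velocity at 30°–32°S (its vertical integral vanishes), \( \langle S \rangle \) is the zonal-mean salinity, and \( S_0 \) a reference salinity; \( F_{\mathrm{ov}} < 0 \), i.e. the overturning exporting freshwater from the Atlantic, is the usual indicator of a bistable regime.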
Abstract:
Species adapted to cold-climatic mountain environments are expected to face a high risk of range contractions, if not local extinctions, under climate change. Yet the populations of many endothermic species may not be primarily affected by physiological constraints, but indirectly by climate-induced changes of habitat characteristics. In mountain forests, where vertebrate species largely depend on vegetation composition and structure, deteriorating habitat suitability may thus be mitigated or even compensated by habitat management aiming at compositional and structural enhancement. We tested this possibility using four cold-adapted bird species with complementary habitat requirements as model organisms. Based on species data and environmental information collected in 300 1-km2 grid cells distributed across four mountain ranges in central Europe, we investigated (1) how species' occurrence is explained by climate, landscape, and vegetation, (2) to what extent climate change and climate-induced vegetation changes will affect habitat suitability, and (3) whether these changes could be compensated by adaptive habitat management. Species presence was modelled as a function of climate, landscape and vegetation variables under current climate; moreover, vegetation-climate relationships were assessed. The models were extrapolated to the climatic conditions of 2050, assuming the moderate IPCC scenario A1B, and changes in species' occurrence probability were quantified. Finally, we assessed the maximum increase in occurrence probability that could be achieved by modifying one or multiple vegetation variables under altered climate conditions. Climate variables contributed significantly to explaining species occurrence, and expected climatic changes, as well as climate-induced vegetation trends, decreased the occurrence probability of all four species, particularly at the low-altitudinal margins of their distribution. These effects could be partly compensated by modifying single vegetation factors, but full compensation would only be achieved if several factors were changed in concert. The results illustrate the possibilities and limitations of adaptive species conservation management under climate change.
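A minimal sketch of the kind of presence/absence model described, with hypothetical predictor names and synthetic data standing in for the 300 surveyed grid cells (the study's actual model specification is not given in the abstract):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

# Hypothetical per-cell predictors: climate, landscape, vegetation
X = rng.random((300, 4))                   # e.g. temp, precip, cover, structure
presence = rng.binomial(1, 0.3, size=300)  # synthetic presence/absence data

# Occurrence probability as a logistic function of the environment
fit = sm.GLM(presence, sm.add_constant(X),
             family=sm.families.Binomial()).fit()

# "2050 extrapolation": shift the climate column to scenario values, then
# vary the vegetation columns to emulate adaptive habitat management
X_2050 = X.copy()
X_2050[:, 0] += 0.2                        # illustrative warming shift
p_2050 = fit.predict(sm.add_constant(X_2050))
```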
Abstract:
The potential and adaptive flexibility of Population Dynamics P systems (PDP) for studying population dynamics suggest that they may be suitable for modelling complex fluvial ecosystems, characterized by a composition of dynamic habitats with many variables that interact simultaneously. Using as a model a reservoir occupied by the zebra mussel Dreissena polymorpha, we designed a computational model based on P systems to study the population dynamics of larvae, in order to evaluate management actions to control or eradicate this invasive species. The population dynamics of this species was simulated under different scenarios, ranging from the absence of water flow change to a weekly variation with different flow rates, to the actual hydrodynamic situation of an intermediate flow rate. Our results show that PDP models can be very useful tools for modelling complex, partially desynchronized processes that work in parallel. This allows the study of complex hydroecological processes such as the one presented, where reproductive cycles, temperature and water dynamics are involved in the desynchronization of the population dynamics, both within areas and among them. The results obtained may be useful in the management of other reservoirs with similar hydrodynamic situations in which the presence of this invasive species has been documented.
Abstract:
In this paper, we present the evaluation design for a complex multilevel program recently introduced in Switzerland. The evaluation embraces the federal level, the cantonal program level, and the project level where target groups are directly addressed. We employ Pawson and Tilley's realist evaluation approach in order to do justice to the varying context factors that impact the cantonal programs, leading to varying effectiveness of the implemented activities. The application of the model to the canton of Uri shows that the numerous vertical and horizontal relations play a crucial role in the program's effectiveness. As a general lesson for the evaluation of complex programs, we state that there is a need to consider all affected levels of a program and that no monocausal effects can be singled out in programs where multiple interventions address the same problem. Moreover, considering all affected levels of a program can mean going beyond the borders of the actual program organization and including factors that do not directly interfere with the policy delivery as such. In particular, we found that the relationship between the cantonal and the federal level was a crucial organizational factor influencing the effectiveness of the cantonal program.
High-resolution microarray analysis of chromosome 20q in human colon cancer metastasis model systems
Abstract:
Amplification of human chromosome 20q DNA is the most frequently occurring chromosomal abnormality detected in sporadic colorectal carcinomas and shows significant correlation with liver metastases. Through comprehensive high-resolution microarray comparative genomic hybridization and microarray gene expression profiling, we have characterized chromosome 20q amplicon genes associated with human colorectal cancer metastasis in two in vitro metastasis model systems. The results revealed increasing complexity of the 20q genomic profile from the primary tumor-derived cell lines to the lymph node and liver metastasis derived cell lines. Expression analysis of chromosome 20q revealed a subset of overexpressed genes residing within the regions of genomic copy number gain in all the tumor cell lines, suggesting these are chromosome 20q copy number responsive genes. Based on their preferential expression levels in the model system cell lines and known biological function, four of the overexpressed genes mapping to the common intervals of genomic copy gain were considered the most promising candidate colorectal metastasis-associated genes. Validation of genomic copy number and expression array data was carried out on these genes, with one gene, DNMT3B, standing out as expressed at relatively higher levels in the metastasis-derived cell lines compared with their primary-derived counterparts in both model systems analyzed. The data provide evidence for the role of chromosome 20q genes with low copy gain and elevated expression in the clonal evolution of metastatic cells and suggest that such genes may serve as early biomarkers of metastatic potential. The data also support the utility of combined microarray comparative genomic hybridization and expression array analysis for identifying copy number responsive genes in areas of low DNA copy gain in cancer cells.
Abstract:
Fast-flowing ice streams discharge most of the ice from the interior of the Antarctic Ice Sheet coastward. Understanding how their tributary organisation is governed and evolves is essential for developing reliable models of the ice sheet's response to climate change. Despite much research on ice-stream mechanics, this problem is unsolved, because the complexity of flow within and across the tributary networks has hardly been interrogated. Here I present the first map of planimetric flow convergence across the ice sheet, calculated from satellite measurements of ice surface velocity, and use it to explore this complexity. The convergence map of Antarctica elucidates how ice-stream tributaries draw ice from the interior. It also reveals curvilinear zones of convergence along lateral shear margins of streaming, and abundant convergence ripples associated with nonlinear ice rheology and changes in bed topography and friction. Flow convergence on ice-stream tributaries and their feeding zones is markedly uneven, and interspersed with divergence at distances of the order of kilometres. For individual drainage basins as well as the ice sheet as a whole, the range of convergence and divergence decreases systematically with flow speed, implying that fast flow cannot converge or diverge as much as slow flow. I therefore deduce that flow in ice-stream networks is subject to mechanical regulation that limits flow-orthonormal strain rates. These properties and the gridded data of convergence and flow-orthonormal strain rate in this archive provide targets for ice-sheet simulations and motivate more research into the origin and dynamics of tributarization.
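For concreteness, a minimal numpy sketch of deriving a planimetric convergence field from gridded surface velocities; the definition used here (negative divergence of the unit flow-direction field, so the measure is independent of speed) is an assumption and may differ from the archive's exact definition:

```python
import numpy as np

def flow_convergence(u, v, dx):
    """Planimetric flow convergence from gridded velocities u, v.

    Assumed definition: -div(v_hat), the negative divergence of the unit
    flow-direction field; positive values mean flowlines converge.
    dx -- grid spacing in metres (same in x and y).
    """
    speed = np.hypot(u, v)
    speed = np.where(speed > 0, speed, np.nan)  # mask stagnant cells
    uh, vh = u / speed, v / speed               # unit flow directions
    duh_dx = np.gradient(uh, dx, axis=1)        # d(uh)/dx
    dvh_dy = np.gradient(vh, dx, axis=0)        # d(vh)/dy
    return -(duh_dx + dvh_dy)
```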
Abstract:
Runtime management of distributed information systems is a complex and costly activity. One of the main challenges that must be addressed is obtaining a complete and updated view of all the managed runtime resources. This article presents a monitoring architecture for heterogeneous and distributed information systems. It is composed of two elements: an information model and an agent infrastructure. The model copes with the complexity and variability of these systems and enables abstraction over non-relevant details. The infrastructure uses this information model to monitor and manage the modeled environment, performing and detecting changes at runtime. The agent infrastructure is further detailed, and its components and the relationships between them are explained. Moreover, the proposal is validated through a set of agents that instrument the JEE Glassfish application server, paying special attention to supporting distributed configuration scenarios.
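A purely illustrative sketch of the model/agent split described above; all names here are hypothetical and do not reflect the article's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class ManagedResource:
    """Information-model entry: one abstracted runtime resource."""
    name: str
    attributes: dict = field(default_factory=dict)

class MonitoringAgent:
    """Polls a live resource, diffs it against the model, reports changes."""
    def __init__(self, resource: ManagedResource, probe):
        self.resource = resource   # entry in the information model
        self.probe = probe         # callable returning live attribute values

    def poll(self) -> dict:
        live = self.probe()
        changes = {k: v for k, v in live.items()
                   if self.resource.attributes.get(k) != v}
        self.resource.attributes.update(live)  # keep the model current
        return changes                         # e.g. forwarded to managers

# agent = MonitoringAgent(ManagedResource("app-server"),
#                         probe=lambda: {"heap_used_mb": 512})
# agent.poll()  # returns only the attributes that changed since last poll
```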
Abstract:
Amundsenisen is an ice field, 80 km2 in area, located in southern Spitsbergen, Svalbard. Radio-echo sounding measurements at 20 MHz show high-intensity returns from a nearly flat basal reflector at four zones, all of them with ice thickness larger than 500 m. These reflections suggest possible subglacial lakes. To determine whether basal liquid water is compatible with current pressure and temperature conditions, we aim to apply a thermo-mechanical model with a free boundary at the bed, defined as the solution of a Stefan problem for the ice–subglacial lake interface. The complexity of the problem suggests the use of a two-dimensional model, but this requires that well-defined flowlines across the zones with suspected subglacial lakes are available. We define these flowlines from the solution of a three-dimensional dynamical model, and this is the main goal of the present contribution. We apply a three-dimensional full-Stokes model of glacier dynamics to the Amundsenisen icefield. We are mostly interested in the plateau zone of the icefield, so we introduce artificial vertical boundaries at the heads of the main outlet glaciers draining Amundsenisen. At these boundaries we set velocity boundary conditions. Velocities near the centres of the heads of the outlets are known from experimental measurements. The velocities at depth are calculated according to an SIA velocity-depth profile, and those at the rest of the transverse section are computed following Nye's (1952) model. We select as the southeastern boundary of the model domain an ice divide, where we set boundary conditions of zero horizontal velocities and zero vertical shear stresses. The upper boundary is a traction-free boundary. For the basal boundary conditions, on the zones of suspected subglacial lakes we set free-slip boundary conditions, while for the rest of the basal boundary we use a friction law linking the sliding velocity to the basal shear stress, in such a way that, contrary to the shallow ice approximation, the basal shear stress is not equal to the basal driving stress but rather part of the solution.
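As a sketch of the kind of basal condition described (the abstract does not give the exact friction law; a linear sliding relation is assumed here):

\[ \boldsymbol{\tau}_b = -\beta^{2}\, \mathbf{u}_b \quad \text{on the frictional bed}, \qquad \boldsymbol{\tau}_b = \mathbf{0} \quad \text{over the suspected lakes}, \]

with \( \mathbf{u}_b \) the basal sliding velocity and \( \beta^{2} \ge 0 \) a friction coefficient. In the full-Stokes solution \( \boldsymbol{\tau}_b \) is part of the computed stress field, rather than being prescribed equal to the driving stress \( \rho g H \sin\alpha \) as in the shallow-ice approximation.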
Abstract:
Adaptive systems use feedback as a key strategy to cope with uncertainty and change in their environments. The information fed back from the sensorimotor loop into the control architecture can be used to change different elements of the controller at four different levels: parameters of the control model, the control model itself, the functional organization of the agent, and the functional components of the agent. The complexity of such a space of potential configurations is daunting. The only viable alternative for the agent, in practical, economical, and evolutionary terms, is the reduction of the dimensionality of the configuration space. This reduction is achieved both by functionalisation (or, more precisely, by interface minimization) and by patterning, i.e. the selection among a predefined set of organisational configurations. This last analysis lets us state the central problem of how autonomy emerges from the integration of the cognitive, emotional and autonomic systems in strict functional terms: autonomy is achieved by the closure of functional dependency. In this paper we show a general model of how biological emotional systems operate, following this theoretical analysis, and how this model also applies to a wide spectrum of artificial systems.
Abstract:
Objective: The neurodevelopmental–neurodegenerative debate is a basic issue in the field of the neuropathological basis of schizophrenia (SCH). Neurophysiological techniques have been scarcely involved in this debate, but nonlinear analysis methods may contribute to it. Methods: Fifteen patients (age range 23–42 years) meeting DSM IV-TR criteria for SCH and 15 sex- and age-matched control subjects (age range 23–42 years) underwent a resting-state magnetoencephalographic evaluation, and Lempel–Ziv complexity (LZC) scores were calculated. Results: Regression analyses indicated that LZC values were strongly dependent on age. Complexity scores increased as a function of age in controls, while SCH patients exhibited a progressive reduction of LZC values. A logistic model including LZC scores, age, and the interaction of both variables allowed the classification of patients and controls with high sensitivity and specificity. Conclusions: Results demonstrated that SCH patients failed to follow the "normal" process of complexity increase as a function of age. In addition, SCH patients exhibited a significant reduction of complexity scores as a function of age, thus paralleling the pattern observed in neurodegenerative diseases. Significance: Our results support the notion of a progressive defect in SCH, which does not contradict the existence of a basic neurodevelopmental alteration. Highlights: Schizophrenic patients show higher complexity values compared to controls; schizophrenic patients showed a tendency toward reduced complexity values as a function of age, while controls showed the opposite tendency; the tendency observed in schizophrenic patients parallels that observed in Alzheimer's disease patients.
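Lempel–Ziv complexity as used in this literature is typically computed by binarizing each channel around its median and counting phrases in an LZ76 parse, normalized by n/log2(n). A minimal sketch of that computation (not the authors' implementation):

```python
import numpy as np

def lempel_ziv_complexity(signal):
    """Normalized Lempel-Ziv complexity of a 1-D signal.

    Binarize around the median (common in MEG/EEG work), parse the binary
    string into LZ76 phrases, and normalize the phrase count c(n) by
    n / log2(n) so values are comparable across signal lengths.
    """
    x = np.asarray(signal)
    med = np.median(x)
    s = ''.join('1' if v > med else '0' for v in x)
    n = len(s)
    phrases, i = 0, 0
    while i < n:
        k = 1
        # extend the phrase while it already occurs earlier in the string
        while i + k <= n and s[i:i + k] in s[:i + k - 1]:
            k += 1
        phrases += 1
        i += k
    return phrases * np.log2(n) / n

# lzc = lempel_ziv_complexity(meg_channel)  # higher = more complex signal
```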