157 results for Simple overlap model
Abstract:
This paper highlights the key role played by solubility in gelation and demonstrates that many facets of the gelation process depend on this vital parameter. In particular, we relate the thermal stability (Tgel) and minimum gelation concentration (MGC) values of small-molecule gelators to the solubility and cooperative self-assembly of the gelator building blocks. By employing a van't Hoff analysis of solubility data, determined from simple NMR measurements, we are able to generate Tcalc values that reflect the calculated temperature for complete solubilization of the networked gelator. The concentration dependence of Tcalc allows the previously difficult-to-rationalize "plateau-region" thermal stability values to be elucidated in terms of gelator molecular design. This is demonstrated for a family of four gelators with lysine units attached to each end of an aliphatic diamine and different protecting groups (Z or Boc) at different locations on the periphery of the molecule. By tuning the peripheral protecting groups of the gelators, the solubility of the system is modified, which in turn controls the saturation point of the system and hence the concentration at which network formation takes place. We report that the critical concentration (Ccrit) of gelator incorporated into the solid-phase sample-spanning network within the gel is invariant of gelator structural design. However, because some systems have higher solubilities, they are less effective gelators and require higher total concentrations to achieve gelation, hence shedding light on the role of the MGC parameter in gelation. Furthermore, gelator structural design also modulates the degree of cooperative self-assembly through solubility effects, as determined by applying a cooperative binding model to NMR data. Finally, the effect of gelator chemical design on the spatial organization of the networked gelator was probed by small-angle neutron and X-ray scattering (SANS/SAXS) on the native gels, and a tentative self-assembly model was proposed.
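For orientation, the van't Hoff treatment referred to here fits the temperature dependence of the measured solubility and extrapolates to the total gelator concentration; a minimal sketch in generic notation, not necessarily the authors' exact symbols:

```latex
% van't Hoff fit of the solubility s(T) determined by NMR:
\ln s(T) = -\frac{\Delta H_{\mathrm{diss}}}{R}\,\frac{1}{T} + \frac{\Delta S_{\mathrm{diss}}}{R}
% T_calc follows by solving for the temperature at which the fitted
% solubility equals the total gelator concentration c, i.e. the point
% at which the networked gelator is fully dissolved:
T_{\mathrm{calc}} = \frac{\Delta H_{\mathrm{diss}}}{\Delta S_{\mathrm{diss}} - R \ln c}
```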
Abstract:
The Rose Review into the teaching of early reading recommended that the conceptual framework incorporated into the National Literacy Strategy Framework for Teaching – the Searchlights model of reading and its development – should be replaced by the Simple View of Reading. In this paper, we demonstrate how these two frameworks relate to each other, and show that nothing has been lost in the transformation from Searchlights to Simple View: on the contrary, much has been gained. That nothing has been lost is demonstrated by considering the complexity inherent in each of the two dimensions delineated in the Simple View. That much has been gained is demonstrated by the increased understanding of each dimension that follows from careful scientific investigation of it. The better we understand what is involved in each dimension, the better placed we are to unravel and understand the essential, complex and continual interactions between the two dimensions that underlie skilled reading. This has clear implications for further improving the early teaching of reading.
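For readers unfamiliar with the framework, the Simple View of Reading (Gough and Tunmer, 1986) expresses reading comprehension as the product of the two dimensions discussed above, which is why weakness in either dimension limits skilled reading:

```latex
% Simple View of Reading: reading comprehension R as the product of
% decoding D and linguistic comprehension C:
R = D \times C
```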
Abstract:
The evolvability of a software artifact is its capacity for producing heritable or reusable variants; the inverse quality is the artifact's inertia, or resistance to evolutionary change. Evolvability in software systems may arise from engineering and/or self-organising processes. We describe our 'Conditional Growth' simulation model of software evolution and show how it can be used to investigate evolvability from a self-organisation perspective. The model is derived from the Bak-Sneppen family of 'self-organised criticality' simulations. It shows good qualitative agreement with Lehman's 'laws of software evolution' and reproduces phenomena that have been observed empirically. The model suggests interesting predictions about the dynamics of evolvability and implies that much of the observed variability in software evolution can be accounted for by comparatively simple self-organising processes.
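For orientation, a minimal Bak-Sneppen-style update loop of the kind the Conditional Growth model is derived from looks as follows; this is a generic textbook sketch, not the authors' model:

```python
import random

def bak_sneppen_step(fitness):
    """One Bak-Sneppen update: the least-fit site and its two
    neighbours (ring topology) receive new random fitness values."""
    n = len(fitness)
    i = min(range(n), key=fitness.__getitem__)   # least-fit site
    for j in (i - 1, i, (i + 1) % n):            # i - 1 wraps via negative index
        fitness[j] = random.random()
    return fitness

# Evolve a ring of 100 "components"; after many updates most fitness
# values sit above a self-organised critical threshold (~0.67 in 1-D).
fitness = [random.random() for _ in range(100)]
for _ in range(20000):
    bak_sneppen_step(fitness)
print(sum(f > 0.67 for f in fitness) / len(fitness))
```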
Abstract:
Three simple climate models (SCMs) are calibrated using simulations from atmosphere-ocean general circulation models (AOGCMs). In addition to two conventional SCMs, results from a third, simpler model developed specifically for this study are obtained. An easy-to-implement, comprehensive iterative procedure is applied that optimises the SCM emulation of global-mean surface temperature and total ocean heat content and, if available in the SCM, of surface temperature over land, over the ocean and in both hemispheres, and of the global-mean ocean temperature profile. The method gives best-fit estimates as well as uncertainty intervals for the different SCM parameters. For the calibration, AOGCM simulations with two different types of forcing scenarios are used: pulse-forcing simulations performed with 2 AOGCMs, and gradually changing forcing simulations from 15 AOGCMs obtained within the framework of the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. The method is found to work well: for all combinations of SCMs and AOGCMs, the emulation of AOGCM results could be improved. The obtained SCM parameters depend both on the AOGCM data and on the type of forcing scenario. SCMs with a poor representation of the atmosphere's thermal inertia are better able to emulate AOGCM results from gradually changing forcing than from pulse-forcing simulations. Correct simultaneous emulation of both atmospheric temperatures and the ocean temperature profile by the SCMs depends strongly on the representation of the temperature gradient between the atmosphere and the mixed layer. Introducing climate sensitivities that depend on the forcing mechanism allows the SCMs to emulate AOGCM responses to carbon dioxide and solar insolation forcings equally well. Some SCM parameters are found to be very insensitive to the fitting, so the reduction of their uncertainty through the fitting procedure is only marginal, while other parameters change considerably. The very simple SCM is found to reproduce the AOGCM results as well as the other two, more sophisticated, SCMs.
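The calibration idea, tuning the parameters of a simple climate model until it reproduces AOGCM output, can be sketched with a one-box energy-balance model fitted by least squares; this is a generic illustration with assumed model forms and synthetic data, not the authors' procedure:

```python
import numpy as np
from scipy.optimize import least_squares

def ebm_response(params, forcing, dt=1.0):
    """One-box energy-balance model, C dT/dt = F - lam * T, integrated
    with forward Euler. lam: feedback (W m^-2 K^-1), C: heat capacity."""
    lam, C = params
    T = np.zeros_like(forcing)
    for t in range(1, len(forcing)):
        T[t] = T[t - 1] + dt * (forcing[t - 1] - lam * T[t - 1]) / C
    return T

# Synthetic "AOGCM" target: gradually increasing forcing plus noise.
rng = np.random.default_rng(0)
forcing = np.linspace(0.0, 4.0, 200)                  # W m^-2 over 200 yr
target = ebm_response([1.2, 8.0], forcing)
target += 0.05 * rng.standard_normal(target.size)

# Fit (lam, C) so that the SCM emulates the target temperature series.
fit = least_squares(lambda p: ebm_response(p, forcing) - target,
                    x0=[1.0, 5.0], bounds=([0.1, 0.5], [3.0, 50.0]))
print(fit.x)   # recovered feedback parameter and heat capacity
```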
Abstract:
The Cambridge Tropospheric Trajectory model of Chemistry and Transport (CiTTyCAT), a Lagrangian chemistry model, has been evaluated using atmospheric chemical measurements collected during the East Atlantic Summer Experiment 1996 (EASE '96). This field campaign was part of the UK Natural Environment Research Council's (NERC) Atmospheric Chemistry Studies in the Oceanic Environment (ACSOE) programme, conducted at Mace Head, Republic of Ireland, during July and August 1996. The model includes a description of gas-phase tropospheric chemistry and simple parameterisations for surface deposition, mixing from the free troposphere and emissions. The model generally compares well with the measurements and is used to study the production and loss of O3 under a variety of conditions. The mean difference between the hourly O3 concentrations calculated by the model and those measured is 0.6 ppbv, with a standard deviation of 8.7 ppbv. Three specific air-flow regimes were identified during the campaign – westerly, anticyclonic (easterly) and south-westerly. The westerly flow is typical of background conditions for Mace Head. However, on some occasions there was evidence of long-range transport of pollutants from North America. In periods of anticyclonic flow, air parcels had collected emissions of NOx and VOCs immediately before arriving at Mace Head, leading to O3 production. The level of calculated O3 depends critically on the precise details of the trajectory, and hence on the emissions into the air parcel. In several periods of south-westerly flow, low concentrations of O3 were measured, consistent with deposition and photochemical destruction inside the tropical marine boundary layer.
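The Lagrangian box-model idea (chemistry integrated along an air-parcel trajectory with simple parameterised sources and sinks) can be caricatured in a few lines; the rates below are invented for illustration and this is not CiTTyCAT's chemistry scheme:

```python
import numpy as np

def integrate_parcel(hours, production, loss_rate, dep_rate, o3_init):
    """Toy O3 budget along a trajectory, forward-Euler in 1 h steps:
    dO3/dt = P(t) - (photochemical loss + surface deposition) * O3.
    Units: ppbv and per-hour rates."""
    o3 = np.empty(hours)
    o3[0] = o3_init
    for t in range(1, hours):
        net = production[t - 1] - (loss_rate + dep_rate) * o3[t - 1]
        o3[t] = o3[t - 1] + net
    return o3

# 5-day trajectory: polluted continental air for 48 h, then clean marine air.
production = np.where(np.arange(120) < 48, 1.5, 0.1)   # ppbv/h, invented
o3 = integrate_parcel(120, production, loss_rate=0.02,
                      dep_rate=0.01, o3_init=35.0)
print(o3[47], o3[-1])   # O3 builds up in polluted air, decays afterwards
```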
Abstract:
We present a novel kinetic multi-layer model that explicitly resolves mass transport and chemical reaction at the surface and in the bulk of aerosol particles (KM-SUB). The model is based on the PRA framework of gas-particle interactions (Pöschl-Rudich-Ammann, 2007), and it includes reversible adsorption, surface reactions and surface-bulk exchange as well as bulk diffusion and reaction. Unlike earlier models, KM-SUB does not require simplifying assumptions about steady-state conditions and radial mixing. The temporal evolution and concentration profiles of volatile and non-volatile species at the gas-particle interface and in the particle bulk can be modeled along with surface concentrations and gas uptake coefficients. In this study we explore and exemplify the effects of bulk diffusion on the rate of reactive gas uptake for a simple reference system, the ozonolysis of oleic acid particles, in comparison to experimental data and earlier model studies. We demonstrate how KM-SUB can be used to interpret and analyze experimental data from laboratory studies, and how the results can be extrapolated to atmospheric conditions. In particular, we show how interfacial and bulk transport, i.e., surface accommodation, bulk accommodation and bulk diffusion, influence the kinetics of the chemical reaction. Sensitivity studies suggest that in fine air particulate matter oleic acid and compounds with similar reactivity against ozone (carbon-carbon double bonds) can reach chemical lifetimes of many hours only if they are embedded in a (semi-)solid matrix with very low diffusion coefficients (<10^-10 cm^2 s^-1). Depending on the complexity of the investigated system, unlimited numbers of volatile and non-volatile species and chemical reactions can be flexibly added and treated with KM-SUB. We propose and intend to pursue the application of KM-SUB as a basis for the development of a detailed master mechanism of aerosol chemistry as well as for the derivation of simplified but realistic parameterizations for large-scale atmospheric and climate models.
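To make the multi-layer idea concrete, here is a heavily simplified, layer-resolved diffusion-reaction sketch in the spirit of KM-SUB; it uses flat geometry, a fixed near-surface concentration and invented parameter values, whereas the published model resolves spherical shells, adsorption and accommodation explicitly:

```python
import numpy as np

def step(ozone, oleic, D, k, dz, dt, surface_o3):
    """One explicit time step: Fickian diffusion of dissolved ozone
    between bulk layers plus second-order reaction with oleic acid."""
    flux = D * np.diff(ozone) / dz        # diffusive flux between layers
    ozone[:-1] += dt * flux / dz
    ozone[1:] -= dt * flux / dz
    ozone[0] = surface_o3                 # fixed near-surface concentration
    react = k * ozone * oleic * dt        # O3 + oleic acid, second order
    ozone -= react
    oleic -= react
    return ozone, oleic

# 20 layers of 10 nm; slow bulk diffusion confines ozone near the surface.
n, dz, dt = 20, 1e-6, 1e-4                # layer thickness in cm, dt in s
ozone = np.zeros(n)                       # dissolved O3, mol cm^-3
oleic = np.full(n, 1e-3)                  # oleic acid, mol cm^-3 (illustrative)
for _ in range(50000):                    # 5 s of simulated time
    ozone, oleic = step(ozone, oleic, D=1e-10, k=1e3, dz=dz, dt=dt,
                        surface_o3=1e-9)
print(ozone)  # falls off with depth over the reacto-diffusive length
```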
Abstract:
A common problem in many data-based modelling algorithms, such as associative memory networks, is the curse of dimensionality. In this paper, a new two-stage neurofuzzy system design and construction algorithm (NeuDeC) for nonlinear dynamical processes is introduced to tackle this problem effectively. A new, simple preprocessing method is initially derived and applied to reduce the rule base, followed by a fine model detection process based on the reduced rule set, using forward orthogonal least squares model structure detection. In both stages, new criteria based on A-optimality experimental design are used. In the preprocessing stage, a lower bound of the A-optimality design criterion is derived and applied as a subset selection metric; in the later stage, the A-optimality design criterion is incorporated into a new composite cost function that minimises model prediction error as well as penalising the model parameter variance. The utilisation of NeuDeC leads to unbiased model parameters with low parameter variance and the additional benefit of a parsimonious model structure. Numerical examples are included to demonstrate the effectiveness of this new modelling approach for high-dimensional inputs.
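For reference, the A-optimality design criterion minimised in both stages is the trace of the inverse information matrix; a minimal sketch of scoring candidate regressor subsets with it (generic code, not the NeuDeC algorithm itself):

```python
import numpy as np

def a_optimality(X):
    """A-optimality score of a design matrix X: trace((X^T X)^-1),
    i.e. the (scaled) average variance of the parameter estimates.
    Smaller is better."""
    return np.trace(np.linalg.inv(X.T @ X))

# Score two candidate two-regressor subsets of a random design matrix.
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 4))
print(a_optimality(X[:, [0, 1]]))
print(a_optimality(X[:, [2, 3]]))   # keep the subset with the lower score
```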
Abstract:
This paper describes a computational and statistical study of the influence of morphological changes on the electrophysiological response of neurons from an animal model of Alzheimer's Disease (AD). We combined experimental morphological data from rat hippocampal CA1 pyramidal cells with a well-established model of active membrane properties. Dendritic morphology and the somatic response to simulated current-clamp conditions were then compared for cells from the control and the AD groups. The computational approach allowed us to single out the influence of neuromorphology on neuronal response by eliminating the effects of active channel variability. The results did not reveal a simple relationship between the morphological changes associated with AD and changes in neural response. However, they did suggest that the relationships between dendritic morphology and single-cell electrophysiology are more complex than anticipated.
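The current-clamp protocol mentioned above amounts to injecting a step current into a membrane model and recording the somatic voltage; the sketch below is a passive single-compartment caricature with illustrative constants, far simpler than the active multi-compartment model used in the study:

```python
import numpy as np

def current_clamp(i_inj, t_ms, dt=0.025, c_m=100.0, g_l=10.0, e_l=-65.0):
    """Passive membrane under current clamp: C dV/dt = -g_L (V - E_L) + I.
    Units: pA, ms, pF, nS, mV (illustrative values)."""
    steps = int(t_ms / dt)
    v = np.full(steps, e_l)
    for t in range(1, steps):
        dv = (-g_l * (v[t - 1] - e_l) + i_inj) / c_m
        v[t] = v[t - 1] + dt * dv
    return v

v = current_clamp(i_inj=200.0, t_ms=100.0)
print(v[-1])   # settles near E_L + I/g_L = -65 + 20 = -45 mV
```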
Abstract:
We compared output from 3 dynamic process-based models (DMs: ECOSSE, MILLENNIA and the Durham Carbon Model) and 9 bioclimatic envelope models (BCEMs, including the BBOG ensemble and PEATSTASH), ranging from simple threshold models to semi-process-based models. Model simulations were run at 4 British peatland sites using historical climate data and climate projections under a medium (A1B) emissions scenario from the 11-member regional climate model ensemble underpinning UKCP09. The models showed that blanket peatlands are vulnerable to projected climate change; however, predictions varied between models as well as between sites. All BCEMs predicted a shift from presence to absence of a climate associated with blanket peat, with the sites having the lowest total annual precipitation closest to the presence/absence threshold. DMs showed a more variable response. ECOSSE predicted a decline in the net C sink and a shift to net C source by the end of this century. The Durham Carbon Model predicted a smaller decline in net C sink strength, but no shift to net C source. MILLENNIA predicted a slight overall increase in the net C sink. In contrast to the BCEM projections, the DMs predicted that the sites with the coolest temperatures and greatest total annual precipitation showed the largest change in carbon sinks. In this model intercomparison, the greatest variation in model output in response to climate change projections was not between the BCEMs and the DMs but among the DMs themselves, because of different approaches to modelling soil organic matter pools and decomposition, amongst other processes. The difference in the sign of the response has major implications for future climate feedbacks, climate policy and peatland management. Enhanced data collection, in particular monitoring of peatland responses to current change, would significantly improve model development and projections of future change.
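For illustration, a "simple threshold" BCEM of the kind included in the ensemble can be just a few lines; the sketch below uses invented threshold values, not those of the BBOG models or PEATSTASH:

```python
def blanket_peat_climate(annual_precip_mm, mean_temp_c,
                         precip_min=1000.0, temp_max=15.0):
    """Toy threshold envelope: a blanket-peat climate is 'present'
    when the site is wet enough and cool enough. Thresholds invented."""
    return annual_precip_mm >= precip_min and mean_temp_c <= temp_max

# A drying, warming projection moves a marginal site across the threshold.
print(blanket_peat_climate(1100.0, 9.5))    # baseline climate: present
print(blanket_peat_climate(950.0, 11.0))    # projected climate: absent
```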
Abstract:
A simple and coherent framework for partitioning uncertainty in multi-model climate ensembles is presented. Analysis of variance (ANOVA) is used to decompose a measure of total variation additively into scenario uncertainty, model uncertainty and internal variability. This approach requires fewer assumptions than existing methods and can easily be used to quantify uncertainty related to model-scenario interaction: the contribution to model uncertainty arising from the variation across scenarios of model deviations from the ensemble mean. Uncertainty in global mean surface air temperature is quantified as a function of lead time for a subset of the Coupled Model Intercomparison Project phase 3 ensemble, and the results largely agree with those published by other authors: scenario uncertainty dominates beyond 2050, and internal variability remains approximately constant over the 21st century. Both elements of model uncertainty, due to scenario-independent and scenario-dependent deviations from the ensemble mean, are found to increase with time. Estimates of model deviations that arise as by-products of the framework reveal significant differences between models that could lead to a deeper understanding of the sources of uncertainty in multi-model ensembles. For example, three models show diverging patterns over the 21st century, while another model exhibits an unusually large variation among its scenario-dependent deviations.
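The additive decomposition at the heart of the framework can be written out explicitly; in generic notation (not necessarily the authors'), with scenario effect alpha, model effect beta, model-scenario interaction gamma and internal variability epsilon:

```latex
% Additive decomposition of ensemble member output x_{smr}
% (scenario s, model m, realization r):
x_{smr} = \mu + \alpha_s + \beta_m + \gamma_{sm} + \varepsilon_{smr}
% Total variation then splits additively: scenario uncertainty (alpha)
% + scenario-independent model uncertainty (beta) + model-scenario
% interaction (gamma) + internal variability (epsilon):
\mathrm{SS}_{\mathrm{tot}} = \mathrm{SS}_{\alpha} + \mathrm{SS}_{\beta}
  + \mathrm{SS}_{\gamma} + \mathrm{SS}_{\varepsilon}
```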
Abstract:
In a recent study, Williams introduced a simple modification to the widely used Robert–Asselin (RA) filter for numerical integration. The main purpose of the Robert–Asselin–Williams (RAW) filter is to avoid the undesired numerical damping of the RA filter and to increase the accuracy. In the present paper, the effects of the modification are comprehensively evaluated in the Simplified Parameterizations, Primitive Equation Dynamics (SPEEDY) atmospheric general circulation model. First, the authors search for significant changes in the monthly climatology due to the introduction of the new filter. After testing both at the local level and at the field level, no significant changes are found, which is advantageous in the sense that the new scheme does not require a retuning of the parameterized model physics. Second, the authors examine whether the new filter improves the skill of short- and medium-term forecasts. January 1982 data from the NCEP–NCAR reanalysis are used to evaluate the forecast skill. Improvements are found in all the model variables (except the relative humidity, which is hardly changed). The improvements increase with lead time and are especially evident in medium-range forecasts (96–144 h). For example, in tropical surface pressure predictions, 5-day forecasts made using the RAW filter have approximately the same skill as 4-day forecasts made using the RA filter. The results of this work are encouraging for the implementation of the RAW filter in other models currently using the RA filter.
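The RAW filter itself is only a few lines to implement. The sketch below integrates a simple oscillation equation with a leapfrog scheme and splits the usual RA displacement between two time levels, as Williams proposed; alpha = 1 recovers the classic RA filter, and the parameter values (nu = 0.2, alpha = 0.53) are illustrative choices rather than values taken from the paper:

```python
def leapfrog_raw(f, x0, dt, steps, nu=0.2, alpha=0.53):
    """Leapfrog integration of dx/dt = f(x) with the RAW filter.
    The RA displacement d is applied as alpha*d to time level n and
    (alpha - 1)*d to level n + 1; alpha = 1 recovers the RA filter."""
    x_prev, x_curr = x0, x0 + dt * f(x0)        # Euler start-up step
    for _ in range(steps):
        x_next = x_prev + 2.0 * dt * f(x_curr)  # leapfrog step
        d = 0.5 * nu * (x_prev - 2.0 * x_curr + x_next)
        x_prev = x_curr + alpha * d             # filtered level n
        x_curr = x_next + (alpha - 1.0) * d     # adjusted level n + 1
    return x_curr

# Oscillation equation dx/dt = i*x; the amplitude |x| should stay near 1,
# whereas the plain RA filter (alpha = 1) damps it noticeably.
print(abs(leapfrog_raw(lambda x: 1j * x, 1.0 + 0j, dt=0.1, steps=500)))
```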
Abstract:
The problem of state estimation occurs in many applications of fluid flow. For example, to produce a reliable weather forecast it is essential to find the best possible estimate of the true state of the atmosphere. To find this best estimate a nonlinear least squares problem has to be solved subject to dynamical system constraints. Usually this is solved iteratively by an approximate Gauss–Newton method where the underlying discrete linear system is in general unstable. In this paper we propose a new method for deriving low order approximations to the problem based on a recently developed model reduction method for unstable systems. To illustrate the theoretical results, numerical experiments are performed using a two-dimensional Eady model – a simple model of baroclinic instability, which is the dominant mechanism for the growth of storms at mid-latitudes. It is a suitable test model to show the benefit that may be obtained by using model reduction techniques to approximate unstable systems within the state estimation problem.
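For context, the nonlinear least-squares problem referred to here is the standard variational state-estimation cost function; a generic statement, with x^b the background state, M_i the model propagator, H_i the observation operator and B, R_i error covariances (notation assumed, not the authors'):

```latex
% Variational state estimation: find the initial state x_0 minimising
% background and observation misfits subject to the model dynamics.
J(x_0) = \tfrac{1}{2}\,(x_0 - x^b)^{\mathrm{T}} B^{-1} (x_0 - x^b)
  + \tfrac{1}{2}\sum_{i} \big(y_i - H_i(M_i(x_0))\big)^{\mathrm{T}}
    R_i^{-1}\,\big(y_i - H_i(M_i(x_0))\big)
% Gauss-Newton solves a sequence of linearised least-squares problems:
% at iterate k, minimise the quadratic cost built from the Jacobians of
% H_i \circ M_i about x_0^{(k)}, then set x_0^{(k+1)} = x_0^{(k)} + \delta x.
```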
Abstract:
Using a literature review, we argue that new models of peatland development are needed. Many existing models do not account for potentially important ecohydrological feedbacks, and/or ignore spatial structure and heterogeneity. Existing models, including those that simulate a near-total loss of the northern peatland carbon store under a warming climate, may produce misleading results because they rely upon oversimplified representations of ecological and hydrological processes. In this, the first of a pair of papers, we present the conceptual framework for a model of peatland development, DigiBog, which considers peatlands as complex adaptive systems. DigiBog accounts for the interactions between the processes which govern litter production and peat decay, peat soil hydraulic properties, and peatland water-table behaviour, in a novel and genuinely ecohydrological manner. DigiBog consists of a number of interacting submodels, each representing a different aspect of peatland ecohydrology. Here we present in detail the mathematical and computational basis, as well as the implementation and testing, of the hydrological submodel. The remaining submodels are described and analysed in the accompanying paper. Tests of the hydrological submodel against analytical solutions for simple aquifers were highly successful: the greatest deviation between DigiBog and the analytical solutions was 2.83%. We also applied the hydrological submodel to irregularly shaped aquifers with heterogeneous hydraulic properties, situations for which no analytical solutions exist, and found the model's outputs to be plausible.
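The style of test described, checking a numerical water-table solution against an analytical solution for a simple aquifer, can be sketched with a 1-D steady-state case: uniform recharge between two fixed-head boundaries, for which the Dupuit solution is exact. The code and parameter values below are generic illustrations, not DigiBog itself:

```python
import numpy as np

# Illustrative parameters: hydraulic conductivity K (m/day), recharge R
# (m/day), aquifer length L (m) and fixed boundary head h0 (m).
K, R, L, h0 = 1.0, 1e-3, 100.0, 5.0
n = 101
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]

# Numerical solution: relax the transient Boussinesq equation
#   dh/dt = d/dx(K h dh/dx) + R   (unit storativity)
# to steady state, with fixed heads at both ends.
h = np.full(n, h0)
dt = 0.05
for _ in range(100000):
    hm = 0.5 * (h[:-1] + h[1:])          # head at cell faces
    q = K * hm * np.diff(h) / dx         # K h dh/dx at cell faces
    h[1:-1] += dt * (np.diff(q) / dx + R)
    h[0] = h[-1] = h0

# Analytical (Dupuit) solution: h^2 = h0^2 + (R/K) x (L - x).
h_exact = np.sqrt(h0**2 + (R / K) * x * (L - x))
print(np.max(np.abs(h - h_exact)))       # small discretisation error
```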
Abstract:
The phenolic fractions released during hydrothermal treatment of selected feedstocks (corn cobs, eucalypt wood chips, almond shells, chestnut burs, and white grape pomace) were selectively recovered by extraction with ethyl acetate and washed with ethanol/water solutions. The crude extracts were purified by a relatively simple adsorption technique using a commercial polymeric, nonionic resin. Utilization of 96% ethanol as eluting agent resulted in 47.0-72.6% phenolic desorption, yielding refined products containing 49-60% w/w phenolics (corresponding to 30-58% enrichment with respect to the crude extracts). The refined extracts produced from grape pomace and from chestnut burs were suitable for protecting bulk oil and oil-in-water and water-in-oil emulsions. A synergistic action with bovine serum albumin in the emulsions was observed.