44 results for Negative stiffness structure, snap through, elastomers, hyperelastic model, root cause analysis
Abstract:
The aim of this study was, within a sensitivity analysis framework, to determine whether additional model complexity gives a better capability to model the hydrology and nitrogen dynamics of a small Mediterranean forested catchment, or whether the additional parameters cause over-fitting. Three nitrogen models of varying hydrological complexity were considered. For each model, general sensitivity analysis (GSA) and Generalized Likelihood Uncertainty Estimation (GLUE) were applied, each based on 100,000 Monte Carlo simulations. The results highlighted the most complex structure as the most appropriate, providing the best representation of the non-linear patterns observed in the flow and streamwater nitrate concentrations between 1999 and 2002. Its 5% and 95% GLUE bounds, obtained considering a multi-objective approach, provide the narrowest band for streamwater nitrogen, which suggests increased model robustness, though all models exhibit periods of inconsistently good and poor fit between simulated outcomes and observed data. The results confirm the importance of the riparian zone in controlling the short-term (daily) streamwater nitrogen dynamics in this catchment, but not the overall flux of nitrogen from the catchment. It was also shown that as the complexity of a hydrological model increases, over-parameterisation occurs; the converse is true for a water quality model, where additional process representation leads to additional acceptable model simulations. Water quality data help constrain the hydrological representation in process-based models. Increased complexity was justifiable for modelling river-system hydrochemistry.
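As an aside on method, GLUE as referred to above is typically implemented as Monte Carlo sampling of parameter sets, an informal likelihood measure, a behavioural threshold, and likelihood-weighted prediction quantiles. The sketch below is a minimal, generic illustration in Python; the placeholder model, the Nash-Sutcliffe likelihood, the 10,000 samples, and the 0.6 threshold are assumptions for illustration and are not the study's configuration.

```python
import numpy as np

def run_model(params, forcing):
    """Placeholder model (a single leaky store); stands in for the study's
    hydrology/nitrogen models, which are not reproduced here."""
    k, scale = params
    sim, store = np.zeros_like(forcing), 0.0
    for t, p in enumerate(forcing):
        store += p
        sim[t] = scale * store
        store *= (1.0 - k)
    return sim

def nash_sutcliffe(obs, sim):
    """Informal likelihood measure commonly used in GLUE."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(0)
forcing = rng.gamma(2.0, 1.0, size=200)                          # synthetic forcing
obs = run_model((0.3, 0.5), forcing) + rng.normal(0, 0.2, 200)   # synthetic observations

n_sims, threshold = 10_000, 0.6                                  # assumed; the study used 100,000 runs
params = np.column_stack([rng.uniform(0.05, 0.9, n_sims),        # parameter 1: recession coefficient
                          rng.uniform(0.1, 1.0, n_sims)])        # parameter 2: scaling factor
sims = np.array([run_model(p, forcing) for p in params])
likelihoods = np.array([nash_sutcliffe(obs, s) for s in sims])

behavioural = likelihoods > threshold                            # GLUE: retain behavioural runs only
weights = likelihoods[behavioural] / likelihoods[behavioural].sum()

lower, upper = [], []                                            # likelihood-weighted 5% / 95% bounds
for t in range(len(forcing)):
    s = sims[behavioural][:, t]
    idx = np.argsort(s)
    cdf = np.cumsum(weights[idx])
    lower.append(s[idx][min(np.searchsorted(cdf, 0.05), len(s) - 1)])
    upper.append(s[idx][min(np.searchsorted(cdf, 0.95), len(s) - 1)])
```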
Abstract:
A continuous tropospheric and stratospheric vertically resolved ozone time series, from 1850 to 2099, has been generated to be used as forcing in global climate models that do not include interactive chemistry. A multiple linear regression analysis of SAGE I+II satellite observations and polar ozonesonde measurements is used for the stratospheric zonal mean dataset during the well-observed period from 1979 to 2009. In addition to terms describing the mean annual cycle, the regression includes terms representing equivalent effective stratospheric chlorine (EESC) and the 11-yr solar cycle variability. The EESC regression fit coefficients, together with pre-1979 EESC values, are used to extrapolate the stratospheric ozone time series backward to 1850. While a similar procedure could be used to extrapolate into the future, coupled chemistry climate model (CCM) simulations indicate that future stratospheric ozone abundances are likely to be significantly affected by climate change, and capturing such effects through a regression model approach is not feasible. Therefore, the stratospheric ozone dataset is extended into the future (merged in 2009) with multimodel mean projections from 13 CCMs that performed a simulation until 2099 under the SRES (Special Report on Emission Scenarios) A1B greenhouse gas scenario and the A1 adjusted halogen scenario in the second round of the Chemistry-Climate Model Validation (CCMVal-2) Activity. The stratospheric zonal mean ozone time series is merged with a three-dimensional tropospheric dataset extracted from simulations of the past by two CCMs (CAM3.5 and GISS-PUCCINI) and of the future by one CCM (CAM3.5). The future tropospheric ozone time series continues the historical CAM3.5 simulation until 2099 following the four different Representative Concentration Pathways (RCPs). Generally good agreement is found between the historical segment of the ozone database and satellite observations, although it should be noted that total column ozone is overestimated in the southern polar latitudes during spring and tropospheric column ozone is slightly underestimated. Vertical profiles of tropospheric ozone are broadly consistent with ozonesondes and in-situ measurements, with some deviations in regions of biomass burning. The tropospheric ozone radiative forcing (RF) from the 1850s to the 2000s is 0.23 W m−2, lower than previous results. The lower value is mainly due to (i) a smaller increase in biomass burning emissions; (ii) a larger influence of stratospheric ozone depletion on upper tropospheric ozone at high southern latitudes; and possibly (iii) a larger influence of clouds (which act to reduce the net forcing) compared to previous radiative forcing calculations. Over the same period, decreases in stratospheric ozone, mainly at high latitudes, produce a RF of −0.08 W m−2, which is more negative than the central Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4) value of −0.05 W m−2, but which is within the stated range of −0.15 to +0.05 W m−2. The more negative value is explained by the fact that the regression model simulates significant ozone depletion prior to 1979, in line with the increase in EESC and as confirmed by CCMs, while the AR4 assumed no change in stratospheric RF prior to 1979. A negative RF of similar magnitude persists into the future, although its location shifts from high latitudes to the tropics.
This shift is due to increases in polar stratospheric ozone, but decreases in tropical lower stratospheric ozone, related to a strengthening of the Brewer-Dobson circulation, particularly through the latter half of the 21st century. Differences in trends in tropospheric ozone among the four RCPs are mainly driven by different methane concentrations, resulting in a range of tropospheric ozone RFs between 0.4 and 0.1 W m−2 by 2100. The ozone dataset described here has been released for the Coupled Model Intercomparison Project (CMIP5) model simulations in netCDF Climate and Forecast (CF) Metadata Convention at the PCMDI website (http://cmip-pcmdi.llnl.gov/).
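For readers unfamiliar with the regression step described in the abstract above, a hedged sketch of its general form follows; the harmonic annual-cycle terms and the use of the 10.7 cm solar radio flux as the solar proxy are assumptions for illustration, and the dataset's actual basis functions and coefficients are not reproduced here. At each latitude and pressure level, the zonal-mean ozone is modelled roughly as

```latex
O_3(t) \;=\; a_0
\;+\; \sum_{k=1}^{K}\Big[a_k \cos\!\Big(\tfrac{2\pi k\,t}{12}\Big) + b_k \sin\!\Big(\tfrac{2\pi k\,t}{12}\Big)\Big]
\;+\; c\,\mathrm{EESC}(t)
\;+\; d\,F_{10.7}(t)
\;+\; \varepsilon(t),
```

with t in months; the fitted coefficient c, combined with pre-1979 EESC values, is what allows the backward extrapolation to 1850 mentioned above.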
Abstract:
In this paper ensembles of forecasts (of up to six hours) are studied from a convection-permitting model with a representation of model error due to unresolved processes. The ensemble prediction system (EPS) used is an experimental convection-permitting version of the UK Met Office’s 24-member Global and Regional Ensemble Prediction System (MOGREPS). The method of representing model error variability, which perturbs parameters within the model’s parameterisation schemes, has been modified and we investigate the impact of applying this scheme in different ways. These are: a control ensemble where all ensemble members have the same parameter values; an ensemble where the parameters are different between members, but fixed in time; and ensembles where the parameters are updated randomly every 30 or 60 min. The choice of parameters and their ranges of variability have been determined from expert opinion and parameter sensitivity tests. A case of frontal rain over the southern UK has been chosen, which has a multi-banded rainfall structure. The consequences of including model error variability in the case studied are mixed and are summarised as follows. The multiple banding, evident in the radar, is not captured for any single member. However, the single band is positioned in some members where a secondary band is present in the radar. This is found for all ensembles studied. Adding model error variability with fixed parameters in time does increase the ensemble spread for near-surface variables like wind and temperature, but can actually decrease the spread of the rainfall. Perturbing the parameters periodically throughout the forecast does not further increase the spread and exhibits “jumpiness” in the spread at times when the parameters are perturbed. Adding model error variability gives an improvement in forecast skill after the first 2–3 h of the forecast for near-surface temperature and relative humidity. For precipitation skill scores, adding model error variability has the effect of improving the skill in the first 1–2 h of the forecast, but then of reducing the skill after that. Complementary experiments were performed where the only difference between members was the set of parameter values (i.e. no initial condition variability). The resulting spread was found to be significantly less than the spread from initial condition variability alone.
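A minimal sketch of the random-parameter approach described above is given below; the parameter names, ranges, and update logic are illustrative assumptions, not the MOGREPS random parameters scheme itself.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed parameters and plausible ranges, for illustration only.
PARAM_RANGES = {
    "entrainment_rate_factor": (0.5, 2.0),
    "ice_fall_speed_factor": (0.5, 1.5),
    "boundary_layer_stability_factor": (0.8, 1.2),
}

def draw_parameters():
    """Draw one parameter set uniformly from the assumed ranges."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}

def run_member(update_interval_min, forecast_length_min=360, timestep_min=30):
    """One ensemble member: parameters are redrawn every `update_interval_min`
    minutes; None keeps them fixed for the whole forecast."""
    params = draw_parameters()
    history = []
    for t in range(0, forecast_length_min, timestep_min):
        if update_interval_min and t > 0 and t % update_interval_min == 0:
            params = draw_parameters()        # stochastic update during the forecast
        history.append((t, dict(params)))     # the model integration step would go here
    return history

fixed_member      = run_member(update_interval_min=None)  # member-specific, fixed in time
updated_30_member = run_member(update_interval_min=30)    # redrawn every 30 min
updated_60_member = run_member(update_interval_min=60)    # redrawn every 60 min
```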
Abstract:
The technique of linear responsibility analysis is used for a retrospective case study of a private industrial development consisting of an extension to existing buildings to provide a warehouse, services block and packing line. The organizational structure adopted on the project is analysed using concepts from systems theory which are included in Walker's theoretical model of the structure of building project organizations (Walker, 1981). This model proposes that the process of building provision can be viewed as systems and subsystems which are differentiated from each other at decision points. Further to this, the subsystems can be viewed as the interaction of managing system and operating system. Using Walker's model, a systematic analysis of the relationships between the contributors gives a quantitative assessment of the efficacy of the organizational structure used. The causes of the client's dissatisfaction with the outcome of the project were lack of integration and complexity of the managing system. However, there was a high level of satisfaction with the completed project and this is reflected by the way in which the organization structure corresponded to the model's propositions.
Abstract:
The technique of linear responsibility analysis is used for a retrospective case study of a private development consisting of an extension to an existing building to provide a wholesale butchery facility. The project used a conventionally organized management process. The organization structure adopted on the project is analysed using concepts from systems theory, which are included in Walker's theoretical model of the structure of building project organizations. This model proposes that the process of building provision can be viewed as systems and sub-systems that are differentiated from each other at decision points. Further to this, the sub-systems can be viewed as the interaction of managing system and operating system. Using Walker's model, a systematic analysis of the relationships between the contributors gives a quantitative assessment of the efficiency of the organizational structure used. The project's organization structure diverged from the model's propositions, resulting in delay to the project's completion and cost overrun, but the client was satisfied with the project functionally.
Abstract:
Goal orientation is acknowledged as an important paradigm in requirements engineering. The structure of a goal-responsibility model provides opportunities for appraising the intention of a development. Creating a suitable model under agile constraints (time, incompleteness and catching up after an initial burst of creativity) can be challenging. Here we propose a marriage of UML activity diagrams with goal sketching in order to facilitate the production of goal responsibility models under these constraints.
Abstract:
Purpose – This paper proposes assessing the context within which integrated logistic support (ILS) can be implemented for whole life performance of building services systems. Design/methodology/approach – The use of ILS within a through-life business model (TLBM) is a better framework to achieve a well-designed, constructed and managed product. However, for ILS to be implemented in a TLBM for building services systems, the practices, tools and techniques need certain contextual prerequisites tailored to suit the construction industry. These contextual prerequisites are discussed. Findings – The case studies conducted reinforced the contextual importance of prime contracting, partnering and team collaboration for the application of ILS techniques. The lack of data was a major hindrance to the full realisation of ILS techniques within the case studies. Originality/value – The paper concludes with the recognition of the value of these contextual prerequisites for the use of ILS techniques within the building industry.
Abstract:
Surface pressure measurements, external reflection Fourier transform infrared spectroscopy, and neutron reflectivity have been used to investigate the lipid-binding behavior of three antimicrobial peptides: melittin, magainin II, and cecropin P1. As expected, all three cationic peptides were shown to interact more strongly with the anionic lipid, 1,2-dihexadecanoyl-sn-glycero-3-(phospho-rac-(1-glycerol)) (DPPG), compared to the zwitterionic lipid, 1,2-dihexadecanoyl-sn-glycero-3-phosphocholine (DPPC). All three peptides have been shown to penetrate DPPC lipid layers by surface pressure, and this was confirmed for the melittin-DPPC interaction by neutron reflectivity measurements. Adsorption of peptide was, however, minimal, with a maximum of 0.4 mg m−2 seen for melittin adsorption compared to 2.1 mg m−2 for adsorption to DPPG (from 0.7 μM solution). The mode of binding to DPPG was shown to depend on the distribution of basic residues within the peptide alpha-helix, although in all cases adsorption below the lipid layer was shown to dominate over insertion within the layer. Melittin adsorption to DPPG altered the lipid layer structure, observed through changes in the external reflection Fourier transform infrared lipid spectra and neutron reflectivity. This lipid disruption was not observed for magainin or cecropin. In addition, melittin binding to both lipids was shown to be 50% greater than for either magainin or cecropin. Adsorption to the bare air-water interface was also investigated, and surface activity followed the trend melittin > magainin > cecropin. External reflection Fourier transform infrared amide spectra revealed that melittin adopted a helical structure only in the presence of lipid, whereas magainin and cecropin also adopted helical structure at the air-water interface. This behavior has been related to the different charge distributions on the peptide amino acid sequences.
Abstract:
We introduce transreal analysis as a generalisation of real analysis. We find that the generalisation of the real exponential and logarithmic functions is well defined for all transreal numbers. Hence, we derive well defined values of all transreal powers of all non-negative transreal numbers. In particular, we find a well defined value for zero to the power of zero. We also note that the computation of products via the transreal logarithm is identical to the transreal product, as expected. We then generalise all of the common, real, trigonometric functions to transreal functions and show that transreal (sin x)/x is well defined everywhere. This raises the possibility that transreal analysis is total, in other words, that every function and every limit is everywhere well defined. If so, transreal analysis should be an adequate mathematical basis for analysing the perspex machine - a theoretical, super-Turing machine that operates on a total geometry. We go on to dispel all of the standard counter "proofs" that purport to show that division by zero is impossible. This is done simply by carrying the proof through in transreal arithmetic or transreal analysis. We find that either the supposed counter proof has no content or else that it supports the contention that division by zero is possible. The supposed counter proofs rely on extending the standard systems in arbitrary and inconsistent ways and then showing, tautologously, that the chosen extensions are not consistent. This shows only that the chosen extensions are inconsistent and does not bear on the question of whether division by zero is logically possible. By contrast, transreal arithmetic is total and consistent so it defeats any possible "straw man" argument. Finally, we show how to arrange that a function has finite or else unmeasurable (nullity) values, but no infinite values. This arithmetical arrangement might prove useful in mathematical physics because it outlaws naked singularities in all equations.
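The construction of transreal powers described above presumably proceeds through the generalised exponential and logarithm; a hedged sketch of that identity (the notation here is ours, not necessarily the paper's) is

```latex
x^{y} \;:=\; \exp\big(y\,\log x\big), \qquad x \ge 0 \text{ transreal},\; y \text{ transreal},
\qquad\text{and}\qquad
\exp\big(\log x + \log y\big) \;=\; x\,y,
```

the second identity being the statement that computing products via the transreal logarithm agrees with the transreal product; the value assigned to zero to the power of zero then follows from the transreal values of log 0 and of the product 0 · log 0, which are not reproduced here.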
Abstract:
For a targeted observations case, the dependence of the size of the forecast impact on the targeted dropsonde observation error in the data assimilation is assessed. The targeted observations were made in the lee of Greenland; the dependence of the impact on the proximity of the observations to the Greenland coast is also investigated. Experiments were conducted using the Met Office Unified Model (MetUM), over a limited-area domain at 24-km grid spacing, with a four-dimensional variational data assimilation (4D-Var) scheme. Reducing the operational dropsonde observation errors by one-half increases the maximum forecast improvement from 5% to 7%–10%, measured in terms of total energy. However, the largest impact is seen by replacing two dropsondes on the Greenland coast with two farther from the steep orography; this increases the maximum forecast improvement from 5% to 18% for an 18-h forecast (using operational observation errors). Forecast degradation caused by two dropsonde observations on the Greenland coast is shown to arise from spreading of data by the background errors up the steep slope of Greenland. Removing boundary layer data from these dropsondes reduces the forecast degradation, but it is only a partial solution to this problem. Although only from one case study, these results suggest that observations positioned within a correlation length scale of steep orography may degrade the forecast through the anomalous upslope spreading of analysis increments along terrain-following model levels.
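The way the dropsonde observation error enters the assimilation can be seen from the standard strong-constraint 4D-Var cost function (quoted here in its textbook form for orientation, not as the MetUM's exact formulation):

```latex
J(\mathbf{x}_0) \;=\;
\tfrac{1}{2}\,(\mathbf{x}_0 - \mathbf{x}_b)^{\mathrm T}\,\mathbf{B}^{-1}\,(\mathbf{x}_0 - \mathbf{x}_b)
\;+\;
\tfrac{1}{2}\sum_{i=0}^{N}\big(H_i(\mathbf{x}_i) - \mathbf{y}_i\big)^{\mathrm T}\,\mathbf{R}_i^{-1}\,\big(H_i(\mathbf{x}_i) - \mathbf{y}_i\big),
\qquad \mathbf{x}_i = M_{0\to i}(\mathbf{x}_0).
```

Halving the assumed dropsonde error standard deviation quarters the corresponding diagonal entries of R and so increases the weight given to those observations, while the background-error covariance B controls how each observation increment is spread spatially, which is the mechanism behind the anomalous upslope spreading along terrain-following levels noted above.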
Abstract:
Enhanced release of CO2 to the atmosphere from soil organic carbon as a result of increased temperatures may lead to a positive feedback between climate change and the carbon cycle, resulting in much higher CO2 levels and accelerated global warming. However, the magnitude of this effect is uncertain and critically dependent on how the decomposition of soil organic C (heterotrophic respiration) responds to changes in climate. Previous studies with the Hadley Centre’s coupled climate–carbon cycle general circulation model (GCM) (HadCM3LC) used a simple, single-pool soil carbon model to simulate the response. Here we present results from numerical simulations that use the more sophisticated ‘RothC’ multipool soil carbon model, driven with the same climate data. The results show strong similarities in the behaviour of the two models, although RothC tends to simulate slightly smaller changes in global soil carbon stocks for the same forcing. RothC simulates global soil carbon stocks decreasing by 54 GtC by 2100 in a climate change simulation, compared with an 80 GtC decrease in HadCM3LC. The multipool carbon dynamics of RothC cause it to exhibit a slower and smaller transient response to both increased organic carbon inputs and changes in climate. We conclude that the projection of a positive feedback between climate and carbon cycle is robust, but the magnitude of the feedback is dependent on the structure of the soil carbon model.
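To illustrate the structural difference between a single-pool and a multipool soil carbon model, here is a minimal multipool sketch in the spirit of RothC; the pool names follow RothC, but the decay constants, temperature response, and partitioning fractions below are assumptions for illustration and are not RothC's actual values.

```python
import numpy as np

# Illustrative multi-pool, first-order soil-carbon sketch. Pool names follow
# RothC (DPM, RPM, BIO, HUM); all constants below are assumed, not RothC's.
POOLS = ["DPM", "RPM", "BIO", "HUM"]
BASE_RATE = np.array([10.0, 0.3, 0.66, 0.02])     # yr^-1, assumed decay constants

def rate_modifier(temp_c):
    """Assumed Q10-like temperature rate modifier; RothC uses its own empirical form."""
    return 2.0 ** ((temp_c - 10.0) / 10.0)

def step(carbon, litter_in, temp_c, dt=1.0 / 12.0):
    """Advance the pools by one month: first-order decay plus fresh litter input (kgC m-2)."""
    decay = carbon * (1.0 - np.exp(-BASE_RATE * rate_modifier(temp_c) * dt))
    co2_flux = 0.7 * decay.sum()                   # assumed fraction respired as CO2
    carbon = carbon - decay
    carbon[0] += 0.6 * litter_in                   # assumed litter split between DPM...
    carbon[1] += 0.4 * litter_in                   # ...and RPM
    carbon[2] += 0.2 * decay.sum()                 # remainder of decay transferred to BIO...
    carbon[3] += 0.1 * decay.sum()                 # ...and HUM
    return carbon, co2_flux

carbon = np.array([0.1, 1.0, 0.3, 5.0])            # initial pool sizes, kgC m-2 (assumed)
for month in range(120):                           # ten years with slow warming
    carbon, co2 = step(carbon, litter_in=0.03, temp_c=10.0 + 0.01 * month)
```

Because the pools turn over at very different rates, the aggregate stock responds more gradually to a change in inputs or climate than a single-pool model with one effective rate, which is the behaviour contrasted in the abstract above.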
Abstract:
We assessed the potential for using optical functional types as effective markers to monitor changes in vegetation in floodplain meadows associated with changes in their local environment. Floodplain meadows are challenging ecosystems for monitoring and conservation because of their highly biodiverse nature. Our aim was to understand and explain spectral differences among key members of floodplain meadows and also to characterize differences with respect to functional traits. The study was conducted on a typical floodplain meadow in the UK (MG4-type, mesotrophic grassland type 4, according to the British National Vegetation Classification). We compared two approaches to characterize floodplain communities using field spectroscopy. The first approach was sub-community based, in which we collected spectral signatures for species groupings indicating two distinct eco-hydrological conditions (dry and wet soil indicator species). The other approach was “species-specific”, in which we focused on the spectral reflectance of three key species found on the meadow. One herb species is a typical member of the MG4 floodplain meadow community, while the other two species, a sedge and a rush, represent wetland vegetation. We also monitored vegetation biophysical and functional properties as well as soil nutrients and groundwater levels. We found that the vegetation classes representing meadow sub-communities could not be spectrally distinguished from each other, whereas the individual herb species was found to have a distinctly different spectral signature from the sedge and rush species. The spectral differences between these three species could be explained by their observed differences in plant biophysical parameters, as corroborated through radiative transfer model simulations. These parameters, such as leaf area index, leaf dry matter content, leaf water content, and specific leaf area, along with other functional parameters, such as maximum carboxylation capacity and leaf nitrogen content, also helped explain the species’ differences in functional dynamics. Groundwater level and soil nitrogen availability, which are important factors governing plant nutrient status, were also found to be significantly different for the herb/wetland species’ locations. The study concludes that spectrally distinguishable species, typical for a highly biodiverse site such as a floodplain meadow, could potentially be used as target species to monitor vegetation dynamics under changing environmental conditions.
Abstract:
Statistical methods of inference typically require the likelihood function to be computable in a reasonable amount of time. The class of “likelihood-free” methods termed Approximate Bayesian Computation (ABC) is able to eliminate this requirement, replacing the evaluation of the likelihood with simulation from it. Likelihood-free methods have gained in efficiency and popularity in the past few years, following their integration with Markov Chain Monte Carlo (MCMC) and Sequential Monte Carlo (SMC) in order to better explore the parameter space. They have been applied primarily to estimating the parameters of a given model, but can also be used to compare models. Here we present novel likelihood-free approaches to model comparison, based upon the independent estimation of the evidence of each model under study. Key advantages of these approaches over previous techniques are that they allow the exploitation of MCMC or SMC algorithms for exploring the parameter space, and that they do not require a sampler able to mix between models. We validate the proposed methods using a simple exponential family problem before providing a realistic problem from human population genetics: the comparison of different demographic models based upon genetic data from the Y chromosome.
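The core idea of estimating each model's evidence independently can be illustrated with a rejection-ABC sketch; the two toy models, the mean as summary statistic, and the tolerance are assumptions for illustration, and the MCMC/SMC samplers discussed in the abstract are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_m1(theta, n):
    """Toy model 1: exponential observations with rate theta."""
    return rng.exponential(1.0 / theta, n)

def simulate_m2(theta, n):
    """Toy model 2: half-normal observations with scale theta."""
    return np.abs(rng.normal(0.0, theta, n))

def abc_evidence(simulate, prior_sample, observed, n_draws=20_000, tol=0.05):
    """Crude rejection-ABC estimate of a model's evidence: the prior-predictive
    probability that the simulated summary statistic lands within `tol` of the
    observed one (the common tolerance window cancels in the Bayes factor)."""
    s_obs = observed.mean()
    accepted = 0
    for _ in range(n_draws):
        theta = prior_sample()
        if abs(simulate(theta, len(observed)).mean() - s_obs) < tol:
            accepted += 1
    return accepted / n_draws

observed = rng.exponential(2.0, 100)                        # synthetic data set
z1 = abc_evidence(simulate_m1, lambda: rng.uniform(0.1, 2.0), observed)
z2 = abc_evidence(simulate_m2, lambda: rng.uniform(0.1, 5.0), observed)
bayes_factor = z1 / max(z2, 1e-12)                          # approximate Bayes factor, M1 vs M2
print(f"evidence M1 ≈ {z1:.4f}, evidence M2 ≈ {z2:.4f}, BF ≈ {bayes_factor:.2f}")
```

Because each evidence is estimated separately, no sampler has to mix between models, which is the practical advantage highlighted in the abstract.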
Abstract:
Following a workshop exercise, two models, an individual-based landscape model (IBLM) and a non-spatial life-history model, were used to assess the impact of a fictitious insecticide on populations of skylarks in the UK. The chosen population endpoints were abundance, population growth rate, and the chances of population persistence. Both models used the same life-history descriptors and toxicity profiles as the basis for their parameter inputs. The models differed in that exposure was a pre-determined parameter in the life-history model, but an emergent property of the IBLM, and the IBLM required a landscape structure as an input. The model outputs were qualitatively similar between the two models. Under conditions dominated by winter wheat, both models predicted a population decline that was worsened by the use of the insecticide. Under broader habitat conditions, population declines were only predicted for the scenarios where the insecticide was added. Inputs to the models are very different, with the IBLM requiring a large volume of data in order to achieve the flexibility of being able to integrate a range of environmental and behavioural factors. The life-history model has very few explicit data inputs, but some of these relied on extensive prior modelling needing additional data as described in Roelofs et al. (2005, this volume). Both models have strengths and weaknesses; hence the ideal approach is that of combining the use of both simple and comprehensive modelling tools.
Abstract:
We propose and analyze a simple mathematical model for susceptible prey (S)–infected prey (I)–predator (P) interaction, where the susceptible prey population (S) is infected directly from external sources as well as through contact with the infected class (I), and the predator completely avoids consuming the infected prey. The model is analyzed to obtain different thresholds of the key parameters under which the system exhibits stability around the biologically feasible equilibria. Through numerical simulations we display the effects of external infection and of infection through contact on the system dynamics in the absence as well as in the presence of the predator. We compare the system dynamics when infection occurs only through contact with that when it occurs through both contact and external sources. Our analysis demonstrates that under disease-selective predation, the stability and oscillations of the system are determined by two key parameters: the external infection rate and the force of infection through contact. Due to the introduction of external infection, the predator and prey populations show limit-cycle oscillations over a range of parameter values. We suggest that when predicting the dynamics of such an eco-epidemiological system, the modes of infection and the infection rates should be investigated carefully.
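A hedged sketch of the kind of susceptible-infected-predator system described above (the logistic growth, mass-action terms, and symbols are assumptions for illustration; the paper's exact functional forms may differ) is

```latex
\begin{aligned}
\frac{dS}{dt} &= r S\Big(1 - \frac{S + I}{K}\Big) \;-\; \lambda S I \;-\; \epsilon S \;-\; c\,S P,\\[2pt]
\frac{dI}{dt} &= \lambda S I \;+\; \epsilon S \;-\; \mu I,\\[2pt]
\frac{dP}{dt} &= \theta\,c\,S P \;-\; d P,
\end{aligned}
```

where λ is the force of infection through contact, ε is the external infection rate, and the predation term cSP acts on susceptible prey only, reflecting the disease-selective predation assumed in the abstract.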