61 results for Process Models
Abstract:
The formulation of a new process-based crop model, the general large-area model (GLAM) for annual crops, is presented. The model has been designed to operate on spatial scales commensurate with those of global and regional climate models. It aims to simulate the impact of climate on crop yield. Procedures for model parameter determination and optimisation are described and demonstrated for the prediction of groundnut (i.e. peanut; Arachis hypogaea L.) yields across India for the period 1966-1989. Optimal parameters (e.g. extinction coefficient, transpiration efficiency, rate of change of harvest index) were stable over space and time, provided the estimate of the yield technology trend was based on the full 24-year period. The model has two location-specific parameters: the planting date and the yield gap parameter. The latter varies spatially and is determined by calibration; its optimal value varies slightly when different input data are used. The model was tested using a historical data set on a 2.5° × 2.5° grid to simulate yields. Three sites are examined in detail: grid cells from Gujarat in the west, Andhra Pradesh towards the south, and Uttar Pradesh in the north. Agreement between observed and modelled yield was variable, with correlation coefficients of 0.74, 0.42 and 0, respectively. Skill was highest where the climate signal was greatest, and correlations were comparable to or greater than those with seasonal mean rainfall. Yields from all 35 cells were aggregated to simulate all-India yield; the correlation coefficient between observed and simulated yields was 0.76, and the root mean square error was 8.4% of the mean yield. The model can easily be extended to any annual crop for the investigation of the impacts of climate variability (or change) on crop yield over large areas. (C) 2004 Elsevier B.V. All rights reserved.
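The reported skill scores can be made concrete with a short sketch. This is not GLAM code; the yield series below is purely illustrative, and only the two metrics named in the abstract (Pearson correlation, and RMSE as a percentage of mean yield) are computed:

```python
import math

def pearson_r(obs, sim):
    # sample Pearson correlation between observed and simulated yields
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    vo = math.sqrt(sum((o - mo) ** 2 for o in obs))
    vs = math.sqrt(sum((s - ms) ** 2 for s in sim))
    return cov / (vo * vs)

def rmse_pct(obs, sim):
    # root mean square error expressed as a percentage of the mean observed yield
    n = len(obs)
    rmse = math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / n)
    return 100.0 * rmse / (sum(obs) / n)

obs = [0.81, 0.95, 0.74, 1.02, 0.88, 0.97]   # t/ha, illustrative values
sim = [0.85, 0.90, 0.70, 0.99, 0.91, 1.01]
print(pearson_r(obs, sim), rmse_pct(obs, sim))
```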
Abstract:
Although the construction pollution index has been put forward and shown to be an efficient approach to reducing or mitigating pollution levels during the construction planning stage, how to select the best construction plan by distinguishing the degrees of its potential adverse environmental impacts remains a research task. This paper first reviews environmental issues and their characteristics in construction, which are critical factors in evaluating the potential adverse impacts of a construction plan. These environmental characteristics are then used to structure two decision models for environmentally conscious construction planning using an analytic network process (ANP): a complicated model and a simplified model. The two ANP models are combined into what is called the EnvironalPlanning system, which is applied to evaluate the potential adverse environmental impacts of alternative construction plans.
Abstract:
The existing literature on lean construction is overwhelmingly prescriptive with little recognition of the social and politicised nature of the diffusion process. The prevailing production-engineering perspective too often assumes that organizations are unitary entities where all parties strive for the common goal of 'improved performance'. An alternative perspective is developed that considers the diffusion of lean construction across contested pluralistic arenas. Different actors mobilize different storylines to suit their own localized political agendas. Multiple storylines of lean construction continuously compete for attention with other management fashions. The conceptualization and enactment of lean construction therefore differs across contexts, often taking on different manifestations from those envisaged. However, such localized enactments of lean construction are patterned and conditioned by pre-existing social and economic structures over which individual managers have limited influence. Taking a broader view, 'leanness' can be conceptualized in terms of a quest for structural flexibility involving restructuring, downsizing and outsourcing. From this perspective, the UK construction industry can be seen to have embarked upon leaner ways of working in the mid-1970s, long before the terminology of lean thinking came into vogue. Semi-structured interviews with construction sector policy-makers provide empirical support for the view that lean construction is a multifaceted concept that defies universal definition.
Abstract:
This paper presents a hybrid control strategy integrating dynamic neural networks and feedback linearization into a predictive control scheme. Feedback linearization is an important nonlinear control technique which transforms a nonlinear system into a linear system using nonlinear transformations and a model of the plant. In this work, empirical models based on dynamic neural networks have been employed. Dynamic neural networks are mathematical structures described by differential equations, which can be trained to approximate general nonlinear systems. A case study based on a mixing process is presented.
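The feedback-linearization idea described above can be sketched in a few lines. The plant nonlinearity f below is a toy stand-in (the paper approximates the plant with a dynamic neural network), and the gain k is an arbitrary choice:

```python
def f(x):
    # toy scalar plant nonlinearity, x' = f(x) + u  (assumption, not the paper's plant)
    return -x**3 + x

def control(x, k=2.0):
    # cancel f and impose the linear target dynamics x' = -k*x
    v = -k * x
    return v - f(x)

def simulate(x0, dt=0.01, steps=500):
    # Euler simulation of the closed loop; exact model knowledge assumed
    x = x0
    for _ in range(steps):
        x += dt * (f(x) + control(x))
    return x

print(simulate(1.5))   # state is driven toward the origin
```

With a perfect model the closed loop is exactly linear; with a neural-network model of f, the cancellation is only approximate, which is the situation the predictive scheme in the paper addresses.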
Nonlinear system identification using particle swarm optimisation tuned radial basis function models
Abstract:
A novel particle swarm optimisation (PSO) tuned radial basis function (RBF) network model is proposed for identification of non-linear systems. At each stage of the orthogonal forward regression (OFR) model construction process, PSO is adopted to tune one RBF unit's centre vector and diagonal covariance matrix by minimising the leave-one-out (LOO) mean square error (MSE). This PSO aided OFR automatically determines how many tunable RBF nodes are sufficient for modelling. Compared with the state-of-the-art local regularisation assisted orthogonal least squares algorithm based on the LOO MSE criterion for constructing fixed-node RBF network models, the PSO tuned RBF model construction produces more parsimonious RBF models with better generalisation performance and is often more efficient in model construction. The effectiveness of the proposed PSO aided OFR algorithm for constructing tunable node RBF models is demonstrated using three real data sets.
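As a rough illustration of the PSO-tuning step (not the paper's full OFR/LOO procedure), the sketch below uses a basic PSO to tune the centre and width of a single Gaussian RBF unit, minimising a plain training MSE on synthetic data whose target is itself a Gaussian bump:

```python
import math, random
random.seed(0)

def rbf(x, c, w):
    # Gaussian RBF unit with centre c and width w
    return math.exp(-(x - c) ** 2 / (2.0 * w * w))

# synthetic data: the target is one Gaussian bump (centre 0.5, width 0.3),
# so a single tunable unit has a recoverable optimum
xs = [i / 10.0 for i in range(-20, 21)]
ys = [rbf(x, 0.5, 0.3) for x in xs]

def mse(params):
    c, w = params[0], max(abs(params[1]), 1e-3)
    return sum((y - rbf(x, c, w)) ** 2 for x, y in zip(xs, ys)) / len(xs)

def pso(cost, n=20, iters=60):
    # textbook global-best PSO over (centre, width)
    pos = [[random.uniform(-2, 2), random.uniform(0.05, 1.0)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]
    pcost = [cost(p) for p in pos]
    gi = pcost.index(min(pcost))
    gbest, gcost = pbest[gi][:], pcost[gi]
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest, gcost

best, best_mse = pso(mse)
print(best, best_mse)
```

The paper's algorithm instead minimises the LOO MSE and adds units one at a time inside OFR; the swarm mechanics are the same.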
An empirical study of process-related attributes in segmented software cost-estimation relationships
Abstract:
Parametric software effort estimation models consisting of a single mathematical relationship suffer from poor adjustment and predictive characteristics in cases in which the historical database considered contains data coming from projects of a heterogeneous nature. The segmentation of the input domain according to clusters obtained from the database of historical projects serves as a tool for more realistic models that use several local estimation relationships. Nonetheless, it may be hypothesized that using clustering algorithms without previous consideration of the influence of well-known project attributes misses the opportunity to obtain more realistic segments. In this paper, we describe the results of an empirical study using the ISBSG-8 database and the EM clustering algorithm that studies the influence of the consideration of two process-related attributes as drivers of the clustering process: the use of engineering methodologies and the use of CASE tools. The results provide evidence that such consideration significantly conditions the final model obtained, even though the resulting predictive quality is of a similar magnitude.
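A minimal EM sketch, assuming a two-component one-dimensional Gaussian mixture on invented effort values; the ISBSG-8 data and the attribute-driven setup of the paper are not reproduced:

```python
import math

def gauss(x, m, s):
    # univariate normal density
    return math.exp(-(x - m) ** 2 / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

def em_1d(data, iters=50):
    # EM for a two-component 1-D Gaussian mixture (illustrative)
    m1, m2 = min(data), max(data)
    s1 = s2 = (m2 - m1) / 4.0
    p = 0.5
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        r = []
        for x in data:
            a = p * gauss(x, m1, s1)
            b = (1 - p) * gauss(x, m2, s2)
            r.append(a / (a + b))
        # M-step: re-estimate means, spreads and mixing weight
        n1 = sum(r)
        n2 = len(data) - n1
        m1 = sum(ri * x for ri, x in zip(r, data)) / n1
        m2 = sum((1 - ri) * x for ri, x in zip(r, data)) / n2
        s1 = max(math.sqrt(sum(ri * (x - m1) ** 2 for ri, x in zip(r, data)) / n1), 1e-3)
        s2 = max(math.sqrt(sum((1 - ri) * (x - m2) ** 2 for ri, x in zip(r, data)) / n2), 1e-3)
        p = n1 / len(data)
    return (m1, s1), (m2, s2), p

# invented project-effort values falling into two regimes
efforts = [9.0, 9.5, 10.0, 10.5, 11.0, 99.0, 99.5, 100.0, 100.5, 101.0]
low, high, weight = em_1d(efforts)
print(low, high, weight)
```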
Abstract:
In industrial practice, constrained steady state optimisation and predictive control are separate, albeit closely related functions within the control hierarchy. This paper presents a method which integrates predictive control with on-line optimisation with economic objectives. A receding horizon optimal control problem is formulated using linear state space models. This optimal control problem is very similar to the one presented in many predictive control formulations, but the main difference is that it includes in its formulation a general steady state objective depending on the magnitudes of manipulated and measured output variables. This steady state objective may include the standard quadratic regulatory objective, together with economic objectives which are often linear. Assuming that the system settles to a steady state operating point under receding horizon control, conditions are given for the satisfaction of the necessary optimality conditions of the steady-state optimisation problem. The method is based on adaptive linear state space models, which are obtained by using on-line identification techniques. The use of model adaptation is justified from a theoretical standpoint and its beneficial effects are shown in simulations. The method is tested with simulations of an industrial distillation column and a system of chemical reactors.
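An unconstrained toy version of the integrated formulation can be sketched as a receding-horizon controller whose stage cost adds a linear economic term to the usual quadratic one. The scalar plant, weights and economic coefficient below are all assumptions, not the paper's setup:

```python
import numpy as np

a, b = 0.9, 0.5        # toy scalar plant x+ = a*x + b*u (assumption)
N = 10                 # prediction horizon

# prediction matrices: [x_1..x_N]^T = F*x0 + G*[u_0..u_{N-1}]^T
F = np.array([a ** (k + 1) for k in range(N)])
G = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1):
        G[i, j] = a ** (i - j) * b

qw, rw = 1.0, 0.1      # quadratic state / input weights
c_econ = 0.05          # linear economic cost on the input (assumption)

def mpc_step(x0):
    # minimise sum_k qw*x_k^2 + rw*u_k^2 + c_econ*u_k over the horizon
    # (unconstrained, so the quadratic programme has a closed-form solution)
    H = qw * G.T @ G + rw * np.eye(N)
    g = qw * G.T @ (F * x0) + 0.5 * c_econ * np.ones(N)
    u = np.linalg.solve(H, -g)
    return u[0]        # receding horizon: apply the first move only

x = 5.0
for _ in range(30):
    x = a * x + b * mpc_step(x)
print(x)
```

Note how the linear economic term shifts the settling point slightly away from the origin, which is exactly the interaction between regulatory and economic objectives that the paper's steady-state optimality conditions address.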
Abstract:
We assessed the vulnerability of blanket peat to climate change in Great Britain using an ensemble of 8 bioclimatic envelope models. We used 4 published models that ranged from simple threshold models, based on total annual precipitation, to Generalised Linear Models (GLMs, based on mean annual temperature). In addition, 4 new models were developed which included measures of water deficit as threshold, classification tree, GLM and generalised additive models (GAM). Models that included measures of both hydrological conditions and maximum temperature provided a better fit to the mapped peat area than models based on hydrological variables alone. Under UKCIP02 projections for high (A1F1) and low (B1) greenhouse gas emission scenarios, 7 out of the 8 models showed a decline in the bioclimatic space associated with blanket peat. Eastern regions (Northumbria, North York Moors, Orkney) were shown to be more vulnerable than higher-altitude, western areas (Highlands, Western Isles and Argyle, Bute and The Trossachs). These results suggest a long-term decline in the distribution of actively growing blanket peat, especially under the high emissions scenario, although it is emphasised that existing peatlands may well persist for decades under a changing climate. Observational data from long-term monitoring and manipulation experiments in combination with process-based models are required to explore the nature and magnitude of climate change impacts on these vulnerable areas more fully.
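The simplest member of such an ensemble, a threshold envelope on precipitation and temperature, can be sketched as follows; the threshold values and cell data are illustrative, not those of the study:

```python
def peat_envelope(annual_precip_mm, max_temp_c,
                  precip_min=1000.0, temp_max=15.0):
    # a cell lies in the blanket-peat bioclimatic space if precipitation is
    # high enough and maximum temperature low enough (thresholds are assumptions)
    return annual_precip_mm >= precip_min and max_temp_c <= temp_max

cells = [
    ("Highlands",        2200, 12.5),   # illustrative climate values
    ("North York Moors",  900, 17.0),
    ("Western Isles",    1400, 13.0),
]
suitable = [name for name, p, t in cells if peat_envelope(p, t)]
print(suitable)
```

The paper's GLM and GAM variants replace this hard threshold with fitted response curves, and its water-deficit models replace raw precipitation with a hydrological balance.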
Abstract:
A model structure comprising a wavelet network and a linear term is proposed for nonlinear system identification. It is shown that under certain conditions wavelets are orthogonal to linear functions and, as a result, the two parts of the model can be identified separately. The linear-wavelet model is compared to a standard wavelet network using data from a simulated fermentation process. The results show that the linear-wavelet model yields a smaller modelling error when compared to a wavelet network using the same number of regressors.
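A minimal sketch of a linear-plus-wavelet regression, assuming Mexican-hat regressors and a jointly estimated least-squares fit (the paper exploits the orthogonality condition to identify the two parts separately):

```python
import numpy as np

def mexican_hat(z):
    # Mexican-hat (Ricker) wavelet, a common wavelet-network activation
    return (1.0 - z ** 2) * np.exp(-z ** 2 / 2.0)

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)
# toy process: linear trend plus one wavelet bump plus small noise (assumption)
y = 0.8 * x + mexican_hat(x) + 0.01 * rng.standard_normal(x.size)

# regressor matrix: linear term, constant, and wavelets at fixed centres
centers = np.linspace(-2, 2, 5)
regressors = [x, np.ones_like(x)] + [mexican_hat(x - c) for c in centers]
Phi = np.column_stack(regressors)

theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
resid = y - Phi @ theta
rms = float(np.sqrt(np.mean(resid ** 2)))
print(rms)   # residual should sit near the noise level
```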
Abstract:
The present study investigates the initiation of precipitating deep convection in an ensemble of convection-resolving mesoscale models. Results of eight different model runs from five non-hydrostatic models are compared for a case of the Convective and Orographically-induced Precipitation Study (COPS). An isolated convective cell initiated east of the Black Forest crest in southwest Germany, although convective available potential energy was only moderate and convective inhibition was high. Measurements revealed that, due to the absence of synoptic forcing, convection was initiated by local processes related to the orography. In particular, the lifting by low-level convergence in the planetary boundary layer is assumed to be the dominant process on that day. The models used different configurations as well as different initial and boundary conditions. By comparing the different model performance with each other and with measurements, the processes which need to be well represented to initiate convection at the right place and time are discussed. Besides an accurate specification of the thermodynamic and kinematic fields, the results highlight the role of boundary-layer convergence features for quantitative precipitation forecasts in mountainous terrain.
Abstract:
This study analyzes the issue of American option valuation when the underlying exhibits a GARCH-type volatility process. We propose the use of Rubinstein's Edgeworth binomial tree (EBT) in contrast to the simulation-based methods considered in previous studies. The EBT-based valuation approach makes an implied calibration of the pricing model feasible. By empirically analyzing the pricing performance of American index and equity options, we illustrate the superiority of the proposed approach.
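For orientation, a plain Cox-Ross-Rubinstein binomial tree for an American put is sketched below. Rubinstein's EBT additionally adjusts the terminal node probabilities for implied skewness and kurtosis (e.g. GARCH-implied); that Edgeworth adjustment is omitted here:

```python
import math

def american_put_crr(S0, K, r, sigma, T, n=200):
    # standard CRR binomial tree with early-exercise check at every node
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    disc = math.exp(-r * dt)
    # terminal payoffs
    v = [max(K - S0 * u ** j * d ** (n - j), 0.0) for j in range(n + 1)]
    # backward induction, taking the max of continuation and exercise value
    for i in range(n - 1, -1, -1):
        for j in range(i + 1):
            cont = disc * (p * v[j + 1] + (1 - p) * v[j])
            exercise = max(K - S0 * u ** j * d ** (i - j), 0.0)
            v[j] = max(cont, exercise)
    return v[0]

print(american_put_crr(100, 100, 0.05, 0.2, 1.0))
```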
Abstract:
Dielectric properties of 16 process cheeses were determined over the frequency range 0.3-3 GHz. The effect of temperature on the dielectric properties of process cheeses was investigated at 10 °C intervals between 5 and 85 °C. Results showed that the dielectric constant (ε′) decreased gradually as frequency increased, for all cheeses. The dielectric loss factor (ε″) decreased from above 125 to below 12 as frequency increased. ε′ was highest at 5 °C and generally decreased up to a temperature between 55 and 75 °C. ε″ generally increased with increasing temperature for high and medium moisture/fat ratio cheeses; for low moisture/fat ratio cheese, ε″ decreased with temperature between 5 and 55 °C and then increased. Partial least squares regression models indicated that ε′ and ε″ could be used in a quality-control screening application to measure the moisture content and inorganic salt content of process cheese, respectively. (c) 2005 Elsevier Ltd. All rights reserved.
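A minimal PLS1 (NIPALS) sketch on invented two-channel dielectric readings; the cheese measurements and calibration of the paper are not reproduced, and the linear moisture relation below is an assumption for illustration:

```python
import numpy as np

def pls1(X, y, ncomp):
    # NIPALS PLS1: returns regression coefficients and intercept
    Xm, ym = X.mean(0), y.mean()
    Xc, yc = X - Xm, y - ym
    W, P, q = [], [], []
    for _ in range(ncomp):
        w = Xc.T @ yc                  # weight vector from X-y covariance
        w /= np.linalg.norm(w)
        t = Xc @ w                     # scores
        tt = t @ t
        p = Xc.T @ t / tt              # X loadings
        qk = yc @ t / tt               # y loading
        Xc = Xc - np.outer(t, p)       # deflate X
        yc = yc - qk * t               # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)
    return B, ym - Xm @ B

rng = np.random.default_rng(1)
eps = rng.uniform(10, 60, size=(30, 2))          # toy (ε′, ε″) readings
moisture = 20 + 0.5 * eps[:, 0] + 0.1 * eps[:, 1]  # assumed linear link
B, b0 = pls1(eps, moisture, ncomp=2)
pred = eps @ B + b0
rmse = float(np.sqrt(np.mean((pred - moisture) ** 2)))
print(rmse)
```

With as many components as predictors and noise-free data, PLS1 reproduces the least-squares fit exactly; its value in practice comes from using fewer components on collinear spectra.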
Abstract:
The development of architecture and the settlement is central to discussions concerning the Neolithic transformation as the very visible evidence for the changes in society that run parallel to the domestication of plants and animals. Architecture has been used as an important aspect of models of how the transformation occurred, and as evidence for the sharp difference between hunter-gatherer and farming societies. We suggest that the emerging evidence for considerable architectural complexity from the early Neolithic indicates that some of our interpretations depend too much on a very basic understanding of structures which are normally seen as being primarily for residential purposes and containing households, which become the organising principle for the new communities which are often seen as fully sedentary and described as villages. Recent work in southern Jordan suggests that in this region at least there is little evidence for a standard house, and that structures are constructed for a range of diverse primary purposes other than simple domestic shelters.
Abstract:
Motivation: Modelling the 3D structures of proteins can often be enhanced if more than one fold template is used during the modelling process. However, in many cases, this may also result in poorer model quality for a given target or alignment method. There is a need for modelling protocols that can both consistently and significantly improve 3D models and provide an indication of when models might not benefit from the use of multiple target-template alignments. Here, we investigate the use of both global and local model quality prediction scores produced by ModFOLDclust2, to improve the selection of target-template alignments for the construction of multiple-template models. Additionally, we evaluate clustering the resulting population of multi- and single-template models for the improvement of our IntFOLD-TS tertiary structure prediction method. Results: We find that using accurate local model quality scores to guide alignment selection is the most consistent way to significantly improve models for each of the sequence to structure alignment methods tested. In addition, using accurate global model quality for re-ranking alignments, prior to selection, further improves the majority of multi-template modelling methods tested. Furthermore, subsequent clustering of the resulting population of multiple-template models significantly improves the quality of selected models compared with the previous version of our tertiary structure prediction method, IntFOLD-TS.
Abstract:
We consider the relation between so-called continuous localization models (i.e. non-linear stochastic Schrödinger evolutions) and the discrete GRW model of wave function collapse. The former can be understood as a scaling limit of the GRW process. The proof relies on a stochastic Trotter formula, which is of interest in its own right. Our Trotter formula also allows us to complement results of Holevo and Mora/Rebolledo on the existence theory of stochastic Schrödinger evolutions.