929 results for Multi-model inference
Abstract:
A new identification algorithm is introduced for the Hammerstein model, which consists of a nonlinear static function followed by a linear dynamical model. The nonlinear static function is characterised using the Bezier-Bernstein approximation. The identification method is based on a hybrid scheme including the application of the inverse of de Casteljau's algorithm, the least squares algorithm and the Gauss-Newton algorithm subject to constraints. Related work and the extension of the proposed algorithm to multi-input multi-output systems are discussed. Numerical examples, including systems with hard nonlinearities, illustrate the efficacy of the proposed approach through comparisons with other approaches.
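As an illustration of the Hammerstein structure described above, here is a minimal sketch (not the paper's Bezier-Bernstein/constrained Gauss-Newton scheme) that simulates a cubic static nonlinearity followed by a first-order linear block and recovers the combined parameters with a plain over-parameterised least-squares fit; the nonlinearity, model orders, noise level and sample size are all illustrative assumptions.

```python
# Illustrative sketch only: a Hammerstein system with a cubic static
# nonlinearity and a first-order linear block, identified by an
# over-parameterised least-squares fit (NOT the Bezier-Bernstein /
# constrained Gauss-Newton scheme of the paper).
import numpy as np

rng = np.random.default_rng(0)

# simulate: v = f(u), y[k] = a*y[k-1] + b*v[k-1] + noise
N = 500
u = rng.uniform(-1, 1, N)
f = lambda u: 1.5 * u - 0.8 * u**3            # "unknown" static nonlinearity
a_true, b_true = 0.7, 1.2
v = f(u)
y = np.zeros(N)
for k in range(1, N):
    y[k] = a_true * y[k - 1] + b_true * v[k - 1] + 0.01 * rng.standard_normal()

# over-parameterised LS: regress y[k] on y[k-1], u[k-1], u[k-1]^2, u[k-1]^3
Phi = np.column_stack([y[:-1], u[:-1], u[:-1]**2, u[:-1]**3])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print("estimated a:", theta[0])                  # close to 0.7
print("estimated b*f coefficients:", theta[1:])  # close to b*[1.5, 0, -0.8]
```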
Abstract:
This paper describes a multi-agent architecture to support the modelling of CSCW systems. Since CSCW involves different organizations, it can be seen as a social model. From this point of view, we investigate the possibility of modelling CSCW with agent technology and, based on the organizational semiotics method, propose a multi-agent architecture using the EDA agent model. We explain the components of this multi-agent architecture and its design process. It is argued that this approach provides a new perspective for modelling CSCW systems.
Abstract:
Temperature results from multi-decadal simulations of coupled chemistry-climate models for the recent past are analyzed using multi-linear regression including trend, solar cycle, lower-stratospheric tropical wind, and volcanic aerosol terms. The climatology of the models for recent years is in good agreement with observations for the troposphere, but the model results diverge from each other and from observations in the stratosphere. Overall, the models agree better with observations than in previous assessments, primarily because of corrections in the observed temperatures. The annually averaged global and polar temperature trends simulated by the models are generally in agreement with revised satellite observations and radiosonde data over much of their altitude range. In the global average, the model trends slightly underpredict the radiosonde data at the top of the observed range. Over the Antarctic, some models underpredict the temperature trend in the lower stratosphere, while others overpredict the trends.
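A minimal sketch of the kind of multi-linear regression described above, applied to a synthetic monthly temperature series; the trend, solar-cycle, tropical-wind and volcanic-aerosol regressors and all coefficients are invented for illustration, not model or observational data.

```python
# Multi-linear regression of a synthetic temperature anomaly series on a
# trend, a solar-cycle proxy, a QBO-like tropical-wind index and a
# volcanic-aerosol index. All inputs are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_months = 360                                 # 30 years of monthly data
t = np.arange(n_months)

trend = t / 120.0                              # time in decades
solar = np.sin(2 * np.pi * t / 132)            # ~11-year solar-cycle proxy
qbo   = np.sin(2 * np.pi * t / 28)             # ~28-month oscillation proxy
volc  = np.exp(-((t - 100) / 12.0) ** 2)       # one idealised eruption pulse

# synthetic "observed" anomaly (K) with known coefficients plus noise
temp = -0.5 * trend + 0.1 * solar + 0.05 * qbo - 0.4 * volc \
       + 0.1 * rng.standard_normal(n_months)

X = np.column_stack([np.ones(n_months), trend, solar, qbo, volc])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
print("trend term (K/decade):", coef[1])
print("solar, QBO, volcanic terms:", coef[2:])
```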
Abstract:
Inference on the basis of recognition alone is assumed to occur prior to accessing further information (Pachur & Hertwig, 2006). A counterintuitive result of this is the “less-is-more” effect: a drop in the accuracy with which people choose which of two or more items scores highest on a given criterion as more of the items are learned (Frosch, Beaman & McCloy, 2007; Goldstein & Gigerenzer, 2002). In this paper, we show that less-is-more effects are not unique to recognition-based inference but can also be observed with a knowledge-based strategy, provided two assumptions, limited information and differential access, are met. The LINDA model, which embodies these assumptions, is presented. Analysis of the less-is-more effects predicted by LINDA and by recognition-driven inference shows that these occur for similar reasons and casts doubt upon the “special” nature of recognition-based inference. Suggestions are made for empirical tests to compare knowledge-based and recognition-based less-is-more effects.
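For readers unfamiliar with the effect, the sketch below reproduces the standard recognition-heuristic accuracy curve of Goldstein & Gigerenzer (2002), in which pairwise-choice accuracy first rises and then falls as more items are recognised; the recognition validity alpha, knowledge validity beta and set size N are illustrative values, and the LINDA model itself is not implemented here.

```python
# Expected pairwise-comparison accuracy as a function of the number of
# recognised items n out of N, with recognition validity alpha and
# knowledge validity beta (alpha > beta produces "less is more").
def pairwise_accuracy(n, N, alpha, beta):
    pairs = N * (N - 1)
    p_one  = 2 * n * (N - n) / pairs        # exactly one item recognised
    p_both = n * (n - 1) / pairs            # both items recognised
    p_none = (N - n) * (N - n - 1) / pairs  # neither recognised -> guess
    return p_one * alpha + p_both * beta + p_none * 0.5

N, alpha, beta = 100, 0.8, 0.6              # illustrative values
curve = [pairwise_accuracy(n, N, alpha, beta) for n in range(N + 1)]
best_n = max(range(N + 1), key=lambda n: curve[n])
print(f"accuracy peaks at n = {best_n} recognised items "
      f"and falls to {curve[N]:.2f} when all {N} are recognised")
```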
Abstract:
The development of a set of multi-channel dichroics, including a six-channel dichroic operating over the wavelength region from 0.3 to 52 µm, is described. To achieve optimum performance, the optical constants of PbTe, Ge and CdTe coatings in the strongly absorptive region have been determined with a new iterative method using normal-incidence reflectance measurements of the multilayer together with initial values of the energy gap Eg and infinite refractive index n for the semiconductor model. The design and manufacture of the dichroics are discussed and the final results are presented.
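For context, the forward problem underlying such a determination is the normal-incidence reflectance of a multilayer, which the characteristic-matrix sketch below computes; the layer indices, thicknesses, substrate and design wavelength are placeholders, not the PbTe/Ge/CdTe coatings or the iterative fitting procedure of the paper.

```python
# Normal-incidence reflectance of a thin-film stack via the standard
# characteristic (transfer) matrix method. Materials and thicknesses are
# placeholders for illustration only.
import numpy as np

def reflectance(wavelength, layers, n_inc=1.0, n_sub=1.5):
    """layers: list of (refractive index, physical thickness) pairs,
    ordered from the incident medium towards the substrate."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        delta = 2 * np.pi * n * d / wavelength
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_sub])
    r = (n_inc * B - C) / (n_inc * B + C)
    return abs(r) ** 2

# quarter-wave high/low-index stack at a 1000 nm design wavelength (example)
stack = [(2.35, 1000 / (4 * 2.35)), (1.45, 1000 / (4 * 1.45))] * 4
print("R at design wavelength:", round(reflectance(1000.0, stack), 3))
```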
Abstract:
Neurofuzzy modelling systems combine fuzzy logic with quantitative artificial neural networks, using fuzzy membership functions (usually based on B-splines) for fuzzification and algebraic operators for inference. The paper introduces a neurofuzzy model construction algorithm using Bezier-Bernstein polynomial functions as basis functions. The new network maintains most of the properties of the B-spline expansion based neurofuzzy system, such as the non-negativity of the basis functions and unity of support, but with the additional advantages of structural parsimony and Delaunay input space partitioning, avoiding the inherent computational problems of lattice networks. This new modelling network is based on the idea that an input vector can be mapped into barycentric co-ordinates with respect to a set of predetermined knots as vertices of a polygon (a set of tiled Delaunay triangles) over the input space. The network is expressed as a Bezier-Bernstein polynomial function of the barycentric co-ordinates of the input vector. An inverse de Casteljau procedure using backpropagation is developed to obtain the input vector's barycentric co-ordinates, which form the basis functions. Extension of the Bezier-Bernstein neurofuzzy algorithm to n-dimensional inputs is discussed, followed by numerical examples to demonstrate the effectiveness of this new data-based modelling approach.
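To make the basis construction concrete, the sketch below maps a 2-D input to barycentric co-ordinates of a single triangle and evaluates the degree-n Bernstein basis on them (non-negative and summing to one); the triangle vertices, degree and test point are arbitrary example choices, and the inverse de Casteljau/backpropagation step of the paper is not shown.

```python
# Barycentric co-ordinates of a 2-D point w.r.t. a triangle, followed by
# evaluation of the degree-n bivariate Bernstein basis on those co-ordinates.
import numpy as np
from math import factorial

def barycentric(p, a, b, c):
    """Barycentric co-ordinates of point p w.r.t. triangle (a, b, c)."""
    T = np.column_stack([b - a, c - a])
    l2, l3 = np.linalg.solve(T, p - a)
    return np.array([1.0 - l2 - l3, l2, l3])

def bernstein_basis(lam, n):
    """All degree-n Bernstein basis values at barycentric co-ordinates lam."""
    vals = {}
    for i in range(n + 1):
        for j in range(n + 1 - i):
            k = n - i - j
            coef = factorial(n) // (factorial(i) * factorial(j) * factorial(k))
            vals[(i, j, k)] = coef * lam[0]**i * lam[1]**j * lam[2]**k
    return vals

a, b, c = np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])
lam = barycentric(np.array([0.25, 0.25]), a, b, c)
basis = bernstein_basis(lam, n=2)
print("barycentric:", lam, "- basis sum:", sum(basis.values()))  # sum is 1
```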
Abstract:
In this paper, we propose a new on-line learning algorithm for non-linear system identification: the swarm intelligence aided multi-innovation recursive least squares (SI-MRLS) algorithm. The SI-MRLS algorithm applies particle swarm optimization (PSO) to construct a flexible radial basis function (RBF) model so that both the model structure and the output weights can be adapted. By replacing an insignificant RBF node with a new one based on the increment-of-error-variance criterion at every iteration, the model remains at a limited size. The multi-innovation RLS algorithm, which is known to achieve better accuracy than the classic RLS, is used to update the RBF output weights. The proposed method produces a parsimonious model with good performance. Simulation results are also shown to verify the SI-MRLS algorithm.
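Only the weight-adaptation part of such a scheme is easy to sketch compactly: below is a classic recursive least-squares update for a fixed set of Gaussian RBF nodes, with the centres, widths, forgetting factor and test system all assumed for illustration; the PSO-driven structure adaptation and the multi-innovation extension of SI-MRLS are not reproduced.

```python
# Classic RLS weight update for a fixed Gaussian RBF network (illustration
# of the weight-adaptation step only; no PSO structure adaptation here).
import numpy as np

centres = np.linspace(-1, 1, 7)          # fixed RBF centres (assumed)
width = 0.4

def phi(x):
    return np.exp(-((x - centres) ** 2) / (2 * width ** 2))

n = len(centres)
w = np.zeros(n)                          # output weights
P = np.eye(n) * 1e3                      # inverse correlation matrix
lam = 0.99                               # forgetting factor

rng = np.random.default_rng(2)
for _ in range(2000):
    x = rng.uniform(-1, 1)
    y = np.sin(np.pi * x) + 0.05 * rng.standard_normal()  # "unknown" system
    h = phi(x)
    k = P @ h / (lam + h @ P @ h)        # gain vector
    e = y - h @ w                        # a priori prediction error
    w = w + k * e
    P = (P - np.outer(k, h @ P)) / lam

print("trained RBF output weights:", np.round(w, 3))
```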
Abstract:
Recent experimental evidence underlines the importance of reduced diffusivity in amorphous semi-solid or glassy atmospheric aerosols. This paper investigates the impact of diffusivity on the ageing of multi-component reactive organic particles representative of atmospheric cooking aerosols. We apply and extend the recently developed KM-SUB model in a study of a 12-component mixture containing oleic and palmitoleic acids. We demonstrate that changes in the diffusivity may explain the evolution of chemical loss rates in ageing semi-solid particles, and we resolve surface and bulk processes under transient reaction conditions considering diffusivities altered by oligomerisation. This new model treatment allows prediction of the ageing of mixed organic multi-component aerosols over atmospherically relevant time scales and conditions. We illustrate the impact of changing diffusivity on the chemical half-life of reactive components in semi-solid particles, and we demonstrate how solidification and crust formation at the particle surface can affect the chemical transformation of organic aerosols.
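As a rough, commonly used point of reference (not the KM-SUB model itself), the e-folding time for bulk-diffusion mixing of a spherical particle, tau = r^2 / (pi^2 * D_b), already shows why reduced diffusivity matters; the particle radius and diffusivity values below are illustrative assumptions spanning the liquid to semi-solid/glassy range.

```python
# Characteristic bulk-diffusion mixing time of a spherical particle,
# tau = r^2 / (pi^2 * D_b), for diffusivities from liquid to glassy.
# Radius and diffusivity values are illustrative only.
import numpy as np

r = 100e-9                                     # particle radius: 100 nm
D_b = np.array([1e-9, 1e-13, 1e-17, 1e-21])    # bulk diffusivity, m^2 s^-1
tau = r**2 / (np.pi**2 * D_b)                  # mixing timescale, s

for D, t in zip(D_b, tau):
    print(f"D_b = {D:.0e} m^2/s  ->  mixing time ~ {t:.1e} s")
```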
Abstract:
Nitrogen adsorption on carbon nanotubes is widely studied because nitrogen adsorption isotherm measurement is a standard method for porosity characterization. A further reason is that carbon nanotubes are potential adsorbents for the separation of nitrogen from oxygen in air. The study presented here describes the results of GCMC simulations of nitrogen (three-site model) adsorption on single- and multi-walled closed nanotubes. The results obtained are described by a new adsorption isotherm model proposed in this study. The model can be treated as the tube analogue of the GAB isotherm, taking into account the lateral adsorbate-adsorbate interactions. We show that the model describes the simulated data satisfactorily. Next, this new approach is applied to the description of experimental data measured on different commercially available (and HRTEM-characterized) carbon nanotubes. We show that generally a quite good fit is observed, and it is therefore suggested that the observed mechanism of adsorption in the studied materials is mainly determined by adsorption on tubes separated at large distances, so the tubes behave almost independently.
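As a point of comparison, the sketch below fits the standard GAB isotherm to synthetic adsorption data with scipy; the paper's model is the tube analogue of GAB with lateral adsorbate-adsorbate interactions, which is not reproduced here, and the parameter values and noise level are invented.

```python
# Fitting the standard GAB isotherm to synthetic adsorption data.
# The tube analogue with lateral interactions proposed in the paper is
# not implemented here; this is only the plain GAB starting point.
import numpy as np
from scipy.optimize import curve_fit

def gab(x, n_m, C, K):
    """GAB isotherm: amount adsorbed vs relative pressure x = p/p0."""
    return n_m * C * K * x / ((1 - K * x) * (1 - K * x + C * K * x))

rng = np.random.default_rng(3)
x = np.linspace(0.02, 0.9, 40)                  # relative pressure p/p0
true = gab(x, n_m=8.0, C=25.0, K=0.75)          # "true" parameters (invented)
data = true * (1 + 0.02 * rng.standard_normal(x.size))

popt, _ = curve_fit(gab, x, data, p0=[5.0, 10.0, 0.5])
print("fitted n_m, C, K:", np.round(popt, 3))
```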
Abstract:
High rates of nutrient loading from agricultural and urban development have resulted in surface water eutrophication and groundwater contamination in regions of Ontario. In Lake Simcoe (Ontario, Canada), anthropogenic nutrient inputs have contributed to increased algal growth, low hypolimnetic oxygen concentrations, and impaired fish reproduction. An ambitious programme has been initiated to reduce phosphorus loads to the lake, aiming to achieve at least a 40% reduction in phosphorus loads by 2045. Achieving this target requires effective remediation strategies, which will rely on an improved understanding of the controls on nutrient export from tributaries of Lake Simcoe as well as of the importance of phosphorus cycling within the lake. In this paper, we describe a new model structure for the integrated dynamic and process-based model INCA-P, which allows fully distributed applications suited to branched river networks. We demonstrate application of this model to the Black River, a tributary of Lake Simcoe, and use INCA-P to simulate the fluxes of P entering the lake system, apportion phosphorus among different sources in the catchment, and explore future scenarios of land-use change and nutrient management to identify high-priority sites for the implementation of watershed best management practices.
Abstract:
Motivation: Modelling the 3D structures of proteins can often be enhanced if more than one fold template is used during the modelling process. However, in many cases, this may also result in poorer model quality for a given target or alignment method. There is a need for modelling protocols that can both consistently and significantly improve 3D models and provide an indication of when models might not benefit from the use of multiple target-template alignments. Here, we investigate the use of both global and local model quality prediction scores produced by ModFOLDclust2, to improve the selection of target-template alignments for the construction of multiple-template models. Additionally, we evaluate clustering the resulting population of multi- and single-template models for the improvement of our IntFOLD-TS tertiary structure prediction method. Results: We find that using accurate local model quality scores to guide alignment selection is the most consistent way to significantly improve models for each of the sequence to structure alignment methods tested. In addition, using accurate global model quality for re-ranking alignments, prior to selection, further improves the majority of multi-template modelling methods tested. Furthermore, subsequent clustering of the resulting population of multiple-template models significantly improves the quality of selected models compared with the previous version of our tertiary structure prediction method, IntFOLD-TS.
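A hypothetical sketch of the score-guided selection idea: candidate target-template alignments are re-ranked by a predicted global quality score and only the top-scoring ones are retained for multiple-template modelling. The alignment names, scores and thresholds below are invented, and ModFOLDclust2 itself is not called.

```python
# Score-guided alignment selection (hypothetical scores and names):
# re-rank candidate target-template alignments by predicted global model
# quality and keep the best few above a threshold.
candidates = {
    "templateA_aln": 0.62,
    "templateB_aln": 0.48,
    "templateC_aln": 0.71,
    "templateD_aln": 0.35,
}

def select_alignments(scores, min_score=0.5, top_k=3):
    """Keep up to top_k alignments whose predicted quality exceeds min_score."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, s in ranked[:top_k] if s >= min_score]

print(select_alignments(candidates))   # ['templateC_aln', 'templateA_aln']
```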
Abstract:
The aim of this study was, within a sensitivity analysis framework, to determine whether additional model complexity gives a better capability to model the hydrology and nitrogen dynamics of a small Mediterranean forested catchment, or whether the additional parameters cause over-fitting. Three nitrogen models of varying hydrological complexity were considered. For each model, general sensitivity analysis (GSA) and Generalized Likelihood Uncertainty Estimation (GLUE) were applied, each based on 100,000 Monte Carlo simulations. The results highlighted the most complex structure as the most appropriate, providing the best representation of the non-linear patterns observed in the flow and streamwater nitrate concentrations between 1999 and 2002. Its 5% and 95% GLUE bounds, obtained using a multi-objective approach, provide the narrowest band for streamwater nitrogen, which suggests increased model robustness, though all models exhibit periods of both good and poor fit between simulated outcomes and observed data. The results confirm the importance of the riparian zone in controlling the short-term (daily) streamwater nitrogen dynamics in this catchment, but not the overall flux of nitrogen from the catchment. It was also shown that as the complexity of a hydrological model increases, over-parameterisation occurs, but the converse is true for a water quality model, where additional process representation leads to additional acceptable model simulations. Water quality data help constrain the hydrological representation in process-based models, and increased complexity was justifiable for modelling river-system hydrochemistry.
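A minimal GLUE-style sketch with a toy one-parameter recession "model" is given below: Monte Carlo parameter sampling, a Nash-Sutcliffe likelihood, rejection of non-behavioural sets, and likelihood-weighted 5%/95% prediction bounds; the toy model, acceptance threshold and sample size are illustrative assumptions, not the nitrogen models of the study.

```python
# GLUE-style uncertainty sketch for a toy exponential-recession model:
# sample parameters, score with Nash-Sutcliffe, keep behavioural sets,
# and form likelihood-weighted 5%/95% prediction bounds.
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(50)
obs = 10.0 * np.exp(-0.08 * t) + 0.3 * rng.standard_normal(t.size)

def model(k):
    return 10.0 * np.exp(-k * t)

def nash_sutcliffe(sim, obs):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

samples = rng.uniform(0.01, 0.2, 20000)           # Monte Carlo parameter draws
sims = np.array([model(k) for k in samples])
like = np.array([nash_sutcliffe(s, obs) for s in sims])

behavioural = like > 0.7                          # acceptance threshold (assumed)
w = like[behavioural] / like[behavioural].sum()   # likelihood weights
sims_b = sims[behavioural]

# likelihood-weighted 5% and 95% bounds at each time step
order = np.argsort(sims_b, axis=0)
lower, upper = np.empty(t.size), np.empty(t.size)
for j in range(t.size):
    s = sims_b[order[:, j], j]
    cw = np.cumsum(w[order[:, j]])
    lower[j] = s[np.searchsorted(cw, 0.05)]
    upper[j] = s[np.searchsorted(cw, 0.95)]

print("behavioural sets:", int(behavioural.sum()))
print("5%/95% bounds at t = 25:", round(lower[25], 2), round(upper[25], 2))
```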
Abstract:
A detailed analysis is undertaken of the Atlantic-European climate using data from 500-year-long proxy-based climate reconstructions, a long climate simulation with perpetual 1990 forcing, as well as two global and one regional climate change scenarios. The observed and simulated interannual variability and teleconnectivity are compared and interpreted in order to improve the understanding of natural climate variability on interannual to decadal time scales for the late Holocene. The focus is on the Atlantic-European and Alpine regions during the winter and summer seasons, using temperature, precipitation, and 500 hPa geopotential height fields. The climate reconstruction shows pronounced interdecadal variations that appear to “lock” the atmospheric circulation in quasi-steady long-term patterns over multi-decadal periods, controlling at least part of the temperature and precipitation variability. Different circulation patterns are persistent over several decades for the period 1500 to 1900. The 500-year-long simulation with perpetual 1990 forcing shows some substantial differences, with a more unsteady teleconnectivity behaviour. Two global scenario simulations indicate a transition towards more stable teleconnectivity for the next 100 years. Time series of reconstructed and simulated temperature and precipitation over the Alpine region show comparatively small changes in interannual variability within the time frame considered, with the exception of the summer season, where a substantial increase in interannual variability is simulated by regional climate models.