952 results for Square Root Model
Abstract:
A tunable radial basis function (RBF) network model is proposed for nonlinear system identification using particle swarm optimisation (PSO). At each stage of orthogonal forward regression (OFR) model construction, PSO optimises one RBF unit's centre vector and diagonal covariance matrix by minimising the leave-one-out (LOO) mean square error (MSE). This PSO-aided OFR automatically determines how many tunable RBF nodes are sufficient for modelling. Compared with the state-of-the-art local regularisation assisted orthogonal least squares algorithm based on the LOO MSE criterion for constructing fixed-node RBF network models, the PSO-tuned RBF model construction produces more parsimonious RBF models with better generalisation performance and is computationally more efficient.
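As a concrete illustration of the optimisation step described above, the sketch below implements a minimal global-best PSO in Python. The objective is a toy quadratic standing in for the LOO MSE of one candidate RBF unit (computing the true LOO MSE would require the full OFR machinery); all parameter values are illustrative assumptions, not the paper's settings.

```python
import random

def pso_minimise(objective, dim, bounds, n_particles=20, n_iter=100, seed=0):
    """Global-best PSO: returns the best position and objective value found."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                     # personal bests
    pbest_f = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]        # global best
    w, c1, c2 = 0.7, 1.5, 1.5                       # inertia and acceleration weights
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Toy objective standing in for the LOO MSE of one candidate RBF unit:
# a quadratic bowl centred at (1.0, -2.0).
best, best_f = pso_minimise(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
                            dim=2, bounds=(-5.0, 5.0))
```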
Abstract:
A construction algorithm for multioutput radial basis function (RBF) network modelling is introduced by combining a locally regularised orthogonal least squares (LROLS) model selection with a D-optimality experimental design. The proposed algorithm aims to achieve maximised model robustness and sparsity via two effective and complementary approaches. The LROLS method alone is capable of producing a very parsimonious RBF network model with excellent generalisation performance. The D-optimality design criterion enhances the model efficiency and robustness. A further advantage of the combined approach is that the user only needs to specify a weighting for the D-optimality cost in the combined RBF model selecting criterion and the entire model construction procedure becomes automatic. The value of this weighting does not influence the model selection procedure critically and it can be chosen with ease from a wide range of values.
Abstract:
A large number of urban surface energy balance models now exist with different assumptions about the important features of the surface and exchange processes that need to be incorporated. To date, no comparison of these models has been conducted; in contrast, models for natural surfaces have been compared extensively as part of the Project for Intercomparison of Land-surface Parameterization Schemes. Here, the methods and first results from an extensive international comparison of 33 models are presented. The aim of the comparison overall is to understand the complexity required to model energy and water exchanges in urban areas. The degree of complexity included in the models is outlined and impacts on model performance are discussed. During the comparison there have been significant developments in the models, with resulting improvements in performance (root-mean-square error falling by up to two-thirds). Evaluation is based on a dataset containing net all-wave radiation, sensible heat, and latent heat flux observations for an industrial area in Vancouver, British Columbia, Canada. The aim of the comparison is twofold: to identify those modeling approaches that minimize the errors in the simulated fluxes of the urban energy balance and to determine the degree of model complexity required for accurate simulations. There is evidence that some classes of models perform better for individual fluxes, but no model performs best or worst for all fluxes. In general, the simpler models perform as well as the more complex models based on all statistical measures. Generally, the schemes have the best overall capability to model net all-wave radiation and the least capability to model latent heat flux.
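The statistical measures used to rank schemes, such as root-mean-square error and mean bias error, can be sketched as follows; the flux values and helper names below are hypothetical, not drawn from the comparison dataset.

```python
import math

def rmse(sim, obs):
    """Root-mean-square error of a simulated series against observations."""
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs))

def mbe(sim, obs):
    """Mean bias error (positive = model overestimates)."""
    return sum(s - o for s, o in zip(sim, obs)) / len(obs)

# Hypothetical sensible-heat observations and two models' simulations (W m^-2)
obs     = [120.0, 180.0, 240.0, 200.0, 150.0]
model_a = [130.0, 170.0, 250.0, 210.0, 140.0]
model_b = [100.0, 150.0, 200.0, 170.0, 120.0]

# Rank the models by RMSE, best first
sims = {"A": model_a, "B": model_b}
ranking = sorted(sims, key=lambda m: rmse(sims[m], obs))
```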
Abstract:
Bayesian Model Averaging (BMA) is used to test for multiple break points in univariate series using conjugate normal-gamma priors. This approach can test for the number of structural breaks and produce posterior probabilities for a break at each point in time. Results are averaged over specifications including stationary, stationary-around-trend, and unit root models, each containing different types and numbers of breaks and different lag lengths. The procedures are used to test for structural breaks in 14 annual macroeconomic series and 11 natural resource price series. The results indicate that there are structural breaks in all of the natural resource series and most of the macroeconomic series. Many of the series had multiple breaks. Our findings regarding the existence of unit roots, having allowed for structural breaks in the data, are largely consistent with previous work.
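A minimal sketch of the model-averaging step, assuming equal prior model probabilities: posterior weights are proportional to each specification's marginal likelihood. The log marginal likelihoods below are hypothetical inputs rather than values derived from the normal-gamma priors.

```python
import math

def bma_weights(log_marglik):
    """Posterior model probabilities under equal prior model probabilities."""
    m = max(log_marglik)                       # subtract max for numerical stability
    w = [math.exp(l - m) for l in log_marglik]
    s = sum(w)
    return [x / s for x in w]

# Hypothetical log marginal likelihoods for three break specifications:
# no break, one break, two breaks.
weights = bma_weights([-105.2, -101.7, -103.4])
```

The one-break specification dominates here; a posterior break probability at each date would be obtained by averaging each specification's break indicator with these weights.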
Abstract:
A very efficient learning algorithm for model subset selection is introduced, based on a new composite cost function that simultaneously optimizes the model approximation ability and model robustness and adequacy. The model parameters are estimated via forward orthogonal least squares (OLS), but the model subset selection cost function includes a D-optimality design criterion that maximizes the determinant of the design matrix of the subset to ensure the robustness, adequacy, and parsimony of the final model. Because the approach builds on the forward OLS algorithm, the new D-optimality-based cost function is constructed within the orthogonalization process to gain computational advantages, and hence the inherent computational efficiency of the conventional forward OLS approach is maintained. Illustrative examples are included to demonstrate the effectiveness of the new approach.
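The D-optimality criterion, maximizing det(XᵀX) over the candidate subset, can be sketched as a single forward-selection step. The regressors below are hypothetical, and this brute-force version does not reproduce the paper's orthogonalization-based shortcut.

```python
def det(m):
    """Determinant via Gaussian elimination with partial pivoting."""
    a = [row[:] for row in m]
    n, d = len(a), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(a[r][i]))
        if abs(a[p][i]) < 1e-12:
            return 0.0
        if p != i:
            a[i], a[p] = a[p], a[i]
            d = -d
        d *= a[i][i]
        for r in range(i + 1, n):
            f = a[r][i] / a[i][i]
            for c in range(i, n):
                a[r][c] -= f * a[i][c]
    return d

def gram_det(cols):
    """det(X^T X) where X has the given columns."""
    k = len(cols)
    g = [[sum(cols[i][t] * cols[j][t] for t in range(len(cols[0])))
          for j in range(k)] for i in range(k)]
    return det(g)

# Forward step: from hypothetical candidate regressors, pick the one that
# maximises det(X^T X) of the augmented subset (the D-optimality criterion).
candidates = {
    "x1": [1.0, 2.0, 3.0, 4.0],
    "x2": [1.0, 1.0, 1.0, 1.0],
    "x3": [2.0, 4.0, 6.0, 8.0],   # collinear with x1 -> zero determinant gain
}
selected = [candidates["x1"]]
best = max((n for n in candidates if n != "x1"),
           key=lambda n: gram_det(selected + [candidates[n]]))
```

Choosing `x2` over the collinear `x3` illustrates how the determinant criterion penalises ill-conditioned design matrices.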
Abstract:
This study investigated the potential application of mid-infrared spectroscopy (MIR 4,000–900 cm−1) for the determination of milk coagulation properties (MCP), titratable acidity (TA), and pH in Brown Swiss milk samples (n = 1,064). Because MCP directly influence the efficiency of the cheese-making process, there is strong industrial interest in developing a rapid method for their assessment. Currently, the determination of MCP involves time-consuming laboratory-based measurements, and it is not feasible to carry out these measurements on the large numbers of milk samples associated with milk recording programs. Mid-infrared spectroscopy is an objective and nondestructive technique providing rapid real-time analysis of food compositional and quality parameters. Analysis of milk rennet coagulation time (RCT, min), curd firmness (a30, mm), TA (SH°/50 mL; SH° = Soxhlet-Henkel degree), and pH was carried out, and MIR data were recorded over the spectral range of 4,000 to 900 cm−1. Models were developed by partial least squares regression using untreated and pretreated spectra. The MCP, TA, and pH prediction models were improved by using the combined spectral ranges of 1,600 to 900 cm−1, 3,040 to 1,700 cm−1, and 4,000 to 3,470 cm−1. The root mean square errors of cross-validation for the developed models were 2.36 min (RCT, range 24.9 min), 6.86 mm (a30, range 58 mm), 0.25 SH°/50 mL (TA, range 3.58 SH°/50 mL), and 0.07 (pH, range 1.15). The most successfully predicted attributes were TA, RCT, and pH. The model for the prediction of TA provided approximate prediction (R2 = 0.66), whereas the predictive models developed for RCT and pH could discriminate between high and low values (R2 = 0.59 to 0.62). It was concluded that, although the models require further development to improve their accuracy before their application in industry, MIR spectroscopy has potential application for the assessment of RCT, TA, and pH during routine milk analysis in the dairy industry. 
The implementation of such models could be a means of improving MCP through phenotype-based selection programs and of amending milk payment systems to incorporate MCP into their payment criteria.
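A cross-validated error of the kind reported above (RMSECV) can be sketched with a leave-one-out loop. Here a univariate linear fit stands in for the partial least squares models, and the calibration data are hypothetical.

```python
import math

def loo_rmsecv(x, y):
    """Leave-one-out cross-validated RMSE for a univariate fit y ~ a + b*x."""
    errs = []
    for i in range(len(x)):
        xs = [v for j, v in enumerate(x) if j != i]   # drop held-out sample
        ys = [v for j, v in enumerate(y) if j != i]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        b = (sum((u - mx) * (v - my) for u, v in zip(xs, ys))
             / sum((u - mx) ** 2 for u in xs))
        a = my - b * mx
        errs.append(y[i] - (a + b * x[i]))            # prediction error
    return math.sqrt(sum(e * e for e in errs) / len(errs))

# Hypothetical calibration: a spectral index vs. rennet coagulation time (min)
x = [0.10, 0.15, 0.20, 0.25, 0.30, 0.35]
y = [12.0, 14.1, 15.9, 18.2, 20.0, 21.8]
rmsecv = loo_rmsecv(x, y)
```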
Abstract:
The objective of this study was to investigate the potential application of mid-infrared spectroscopy for determination of selected sensory attributes in a range of experimentally manufactured processed cheese samples. This study also evaluates mid-infrared spectroscopy against other recently proposed techniques for predicting sensory texture attributes. Processed cheeses (n = 32) of varying compositions were manufactured on a pilot scale. After 2 and 4 wk of storage at 4°C, mid-infrared spectra (640 to 4,000 cm−1) were recorded and samples were scored on a scale of 0 to 100 for 9 attributes using descriptive sensory analysis. Models were developed by partial least squares regression using raw and pretreated spectra. The mouth-coating and mass-forming models were improved by using a reduced spectral range (930 to 1,767 cm−1). The remaining attributes were most successfully modeled using a combined range (930 to 1,767 cm−1 and 2,839 to 4,000 cm−1). The root mean square errors of cross-validation for the models were 7.4 (firmness; range 65.3), 4.6 (rubbery; range 41.7), 7.1 (creamy; range 60.9), 5.1 (chewy; range 43.3), 5.2 (mouth-coating; range 37.4), 5.3 (fragmentable; range 51.0), 7.4 (melting; range 69.3), and 3.1 (mass-forming; range 23.6). These models had good practical utility. Model accuracy ranged from approximate quantitative predictions to excellent predictions (range error ratio = 9.6). In general, the models compared favorably with previously reported instrumental texture models and near-infrared models, although the creamy, chewy, and melting models were slightly weaker than the previously reported near-infrared models. We concluded that mid-infrared spectroscopy could be successfully used for the nondestructive and objective assessment of processed cheese sensory quality.
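The accuracy statistics quoted above, the range error ratio (range of the observed values divided by the RMSE) and R², can be computed as below; the sensory scores and predictions are hypothetical.

```python
import math

def r_squared(obs, pred):
    """Coefficient of determination of predictions against observations."""
    m = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - m) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

def range_error_ratio(obs, pred):
    """RER = observed range / cross-validated RMSE; higher is better."""
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))
    return (max(obs) - min(obs)) / rmse

# Hypothetical sensory scores (0-100 scale) and cross-validated predictions
obs  = [20.0, 35.0, 50.0, 65.0, 80.0]
pred = [22.0, 33.0, 52.0, 63.0, 82.0]
rer = range_error_ratio(obs, pred)
r2 = r_squared(obs, pred)
```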
Abstract:
The interactions between shear-free turbulence in two regions (denoted as + and −) on either side of a nearly flat horizontal interface are shown here to be controlled by several mechanisms, which depend on the magnitudes of the ratios of the densities, ρ+/ρ−, and kinematic viscosities of the fluids, ν+/ν−, and the root mean square (r.m.s.) velocities of the turbulence, u0+/u0−, above and below the interface. This study focuses on gas–liquid interfaces so that ρ+/ρ− ≪ 1 and also on where turbulence is generated either above or below the interface so that u0+/u0− is either very large or very small. It is assumed that vertical buoyancy forces across the interface are much larger than internal forces so that the interface is nearly flat, and coupling between turbulence on either side of the interface is determined by viscous stresses. A formal linearized rapid-distortion analysis with viscous effects is developed by extending the previous study by Hunt & Graham (J. Fluid Mech., vol. 84, 1978, pp. 209–235) of shear-free turbulence near rigid plane boundaries. The physical processes accounted for in our model include both the blocking effect of the interface on normal components of the turbulence and the viscous coupling of the horizontal field across thin interfacial viscous boundary layers. The horizontal divergence in the perturbation velocity field in the viscous layer drives weak inviscid irrotational velocity fluctuations outside the viscous boundary layers in a mechanism analogous to Ekman pumping. The analysis shows the following. (i) The blocking effects are similar to those near rigid boundaries on each side of the interface, but through the action of the thin viscous layers above and below the interface, the horizontal and vertical velocity components differ from those near a rigid surface and are correlated or anti-correlated respectively. (ii) Because of the growth of the viscous layers on either side of the interface, the ratio uI/u0, where uI is the r.m.s. of the interfacial velocity fluctuations and u0 the r.m.s. of the homogeneous turbulence far from the interface, does not vary with time. If the turbulence is driven in the lower layer with ρ+/ρ− ≪ 1 and u0+/u0− ≪ 1, then uI/u0− ~ 1 when Re (= u0−L−/ν−) ≫ 1 and R = (ρ−/ρ+)(ν−/ν+)^(1/2) ≫ 1. If the turbulence is driven in the upper layer with ρ+/ρ− ≪ 1 and u0+/u0− ≫ 1, then uI/u0+ ~ 1/(1 + R). (iii) Nonlinear effects become significant over periods greater than Lagrangian time scales. When turbulence is generated in the lower layer, and the Reynolds number is high enough, motions in the upper viscous layer are turbulent. The horizontal vorticity tends to decrease, and the vertical vorticity of the eddies dominates their asymptotic structure. When turbulence is generated in the upper layer, and the Reynolds number is less than about 10^6–10^7, the fluctuations in the viscous layer do not become turbulent. Nonlinear processes at the interface increase the ratio uI/u0+ for sheared or shear-free turbulence in the gas above its linear value of uI/u0+ ~ 1/(1 + R) to (ρ+/ρ−)^(1/2) ~ 1/30 for air–water interfaces. This estimate agrees with the direct numerical simulation results of Lombardi, De Angelis & Banerjee (Phys. Fluids, vol. 8, no. 6, 1996, pp. 1643–1665). Because the linear viscous–inertial coupling mechanism is still significant, the eddy motions on either side of the interface have a similar horizontal structure, although their vertical structure differs.
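The air–water estimates in (ii) and (iii) can be checked numerically; the fluid properties below are standard values at roughly 20 °C, assumed here for illustration.

```python
# Air (+) above, water (-) below; standard property values at ~20 C (assumed)
rho_plus, rho_minus = 1.2, 998.0          # densities, kg m^-3
nu_plus, nu_minus = 1.5e-5, 1.0e-6        # kinematic viscosities, m^2 s^-1

# R = (rho-/rho+) * (nu-/nu+)^(1/2), as in the linear analysis above
R = (rho_minus / rho_plus) * (nu_minus / nu_plus) ** 0.5
linear_ratio = 1.0 / (1.0 + R)                    # uI/u0+ from the linear theory
nonlinear_ratio = (rho_plus / rho_minus) ** 0.5   # nonlinear estimate, ~1/30
```

The nonlinear estimate exceeds the linear one by roughly a factor of seven, consistent with the stated enhancement at air–water interfaces.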
Abstract:
Models of root system growth emerged in the early 1970s, and were based on mathematical representations of root length distribution in soil. The last decade has seen the development of more complex architectural models and the use of computer-intensive approaches to study developmental and environmental processes in greater detail. There is a pressing need for predictive technologies that can integrate root system knowledge, scaling from molecular to ensembles of plants. This paper makes the case for more widespread use of simpler models of root systems based on continuous descriptions of their structure. A new theoretical framework is presented that describes the dynamics of root density distributions as a function of individual root developmental parameters such as rates of lateral root initiation, elongation, mortality, and gravitropism. The simulations resulting from such equations can be performed most efficiently in discretized domains that deform as a result of growth, and that can be used to model the growth of many interacting root systems. The modelling principles described help to bridge the gap between continuum and architectural approaches, and enhance our understanding of the spatial development of root systems. Our simulations suggest that root systems develop in travelling wave patterns of meristems, revealing order in otherwise spatially complex and heterogeneous systems. Such knowledge should assist physiologists and geneticists to appreciate how meristem dynamics contribute to the pattern of growth and functioning of root systems in the field.
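A minimal sketch of the continuous root-density idea: root-tip (meristem) density is advected down the profile at the elongation rate with an upwind scheme, with source terms for lateral initiation and root-length deposition. All rates and grid parameters are illustrative assumptions, not the paper's equations.

```python
def simulate_root_tips(nx=100, nt=200, dx=0.1, dt=0.05, elong=0.1, branch=0.02):
    """Upwind advection of root-tip density down a 1-D soil profile; tips
    deposit root length as they elongate, and laterals add tips at `branch`."""
    tips = [0.0] * nx
    tips[0] = 1.0                      # tips start at the soil surface
    length = [0.0] * nx                # accumulated root length density
    c = elong * dt / dx                # CFL number, must be <= 1 for stability
    for _ in range(nt):
        new = tips[:]
        for i in range(1, nx):         # first-order upwind advection
            new[i] = tips[i] - c * (tips[i] - tips[i - 1])
        for i in range(nx):
            new[i] += branch * tips[i] * dt       # lateral root initiation
            length[i] += elong * tips[i] * dt     # root length deposition
        tips = new
    return tips, length

tips, length = simulate_root_tips()
```

The tip distribution moves down the profile as a front, a crude analogue of the travelling-wave meristem patterns described above.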
Abstract:
We describe a model-data fusion (MDF) inter-comparison project (REFLEX), which compared various algorithms for estimating carbon (C) model parameters consistent with both measured carbon fluxes and states and a simple C model. Participants were provided with the model and with both synthetic net ecosystem exchange (NEE) of CO2 and leaf area index (LAI) data, generated from the model with added noise, and observed NEE and LAI data from two eddy covariance sites. Participants endeavoured to estimate model parameters and states consistent with the model for all cases over the two years for which data were provided, and generate predictions for one additional year without observations. Nine participants contributed results using Metropolis algorithms, Kalman filters and a genetic algorithm. For the synthetic data case, parameter estimates compared well with the true values. The results of the analyses indicated that parameters linked directly to gross primary production (GPP) and ecosystem respiration, such as those related to foliage allocation and turnover, or temperature sensitivity of heterotrophic respiration, were best constrained and characterised. Poorly estimated parameters were those related to the allocation to and turnover of fine root/wood pools. Estimates of confidence intervals varied among algorithms, but several algorithms successfully located the true values of annual fluxes from synthetic experiments within relatively narrow 90% confidence intervals, achieving >80% success rate and mean NEE confidence intervals <110 gC m−2 year−1 for the synthetic case. Annual C flux estimates generated by participants generally agreed with gap-filling approaches using half-hourly data. The estimation of ecosystem respiration and GPP through MDF agreed well with outputs from partitioning studies using half-hourly data. Confidence limits on annual NEE increased by an average of 88% in the prediction year compared to the previous year, when data were available. 
Confidence intervals on annual NEE increased by 30% when observed data were used instead of synthetic data, reflecting and quantifying the addition of model error. Finally, our analyses indicated that incorporating additional constraints, using data on C pools (wood, soil and fine roots) would help to reduce uncertainties for model parameters poorly served by eddy covariance data.
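A minimal sketch of one of the algorithm families used by participants, a random-walk Metropolis sampler, here estimating a single hypothetical parameter from noisy synthetic data rather than the C model's full parameter vector.

```python
import math, random

def metropolis(logpost, x0, step, n, seed=0):
    """Random-walk Metropolis: returns a chain of samples from logpost."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    chain = []
    for _ in range(n):
        xp = x + rng.gauss(0.0, step)            # propose a random step
        lpp = logpost(xp)
        if math.log(rng.random()) < lpp - lp:    # accept with ratio of posteriors
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Hypothetical: estimate a respiration base rate r from synthetic "NEE"
# data generated with r_true = 2.0 and Gaussian noise (sigma = 0.5),
# mirroring the synthetic-data experiments described above.
rng = random.Random(1)
data = [2.0 + rng.gauss(0.0, 0.5) for _ in range(200)]
logpost = lambda r: -sum((d - r) ** 2 for d in data) / (2 * 0.5 ** 2)
chain = metropolis(logpost, x0=0.0, step=0.2, n=3000)
post_mean = sum(chain[1000:]) / len(chain[1000:])   # discard burn-in
```

The spread of the retained chain gives the confidence interval on the parameter, analogous to the interval estimates compared in the project.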
Abstract:
Coupled chemistry‐climate model simulations covering the recent past and continuing throughout the 21st century have been completed with a range of different models. Common forcings are used for the halogen amounts and greenhouse gas concentrations, as expected under the Montreal Protocol (with amendments) and Intergovernmental Panel on Climate Change A1b Scenario. The simulations of the Antarctic ozone hole are compared using commonly used diagnostics: the minimum ozone, the maximum area of ozone below 220 DU, and the ozone mass deficit below 220 DU. Despite the fact that the processes responsible for ozone depletion are reasonably well understood, a wide range of results is obtained. Comparisons with observations indicate that one of the reasons for the model underprediction in ozone hole area is the tendency for models to underpredict, by up to 35%, the area of low temperatures responsible for polar stratospheric cloud formation. Models also typically have species gradients that are too weak at the edge of the polar vortex, suggesting that there is too much mixing of air across the vortex edge. Other models show a high bias in total column ozone which restricts the size of the ozone hole (defined by a 220 DU threshold). The results of those models which agree best with observations are examined in more detail. For several models the ozone hole does not disappear this century but a small ozone hole of up to three million square kilometers continues to occur in most springs even after 2070.
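The ozone-hole area diagnostic (area with total column ozone below 220 DU) can be sketched on a regular latitude-longitude grid; the grid and ozone values below are hypothetical.

```python
import math

EARTH_RADIUS_KM = 6371.0

def ozone_hole_area(ozone, lats, dlat, dlon):
    """Area (km^2) of grid cells with total column ozone below 220 DU.
    ozone[i][j] sits on a regular lat-lon grid; each cell's area is
    approximately R^2 * cos(lat) * dlat * dlon (angles in radians)."""
    area = 0.0
    for i, lat in enumerate(lats):
        cell = (EARTH_RADIUS_KM ** 2 * math.cos(math.radians(lat))
                * math.radians(dlat) * math.radians(dlon))
        for value in ozone[i]:
            if value < 220.0:
                area += cell
    return area

# Hypothetical 2 x 3 grid of total column ozone (DU) near the pole
lats = [-80.0, -75.0]
ozone = [[150.0, 210.0, 230.0],
         [225.0, 190.0, 240.0]]
hole = ozone_hole_area(ozone, lats, dlat=5.0, dlon=5.0)
```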
Abstract:
This work proposes a unified neurofuzzy modelling scheme. To begin with, the initial fuzzy base construction method is based on fuzzy clustering utilising a Gaussian mixture model (GMM) combined with the analysis of covariance (ANOVA) decomposition in order to obtain more compact univariate and bivariate membership functions over the subspaces of the input features. The mean and covariance of the Gaussian membership functions are found by the expectation maximisation (EM) algorithm, with the merit of revealing the underlying density distribution of the system inputs. The resultant set of membership functions forms the basis of the generalised fuzzy model (GFM) inference engine. The model structure and parameters of this neurofuzzy model are identified via supervised subspace orthogonal least squares (OLS) learning. Finally, instead of providing a deterministic class label as the model output, as is conventional, a logistic regression model is applied to present the classifier's output, in which the sigmoid-type logistic transfer function scales the outputs of the neurofuzzy model to class probabilities. Experimental validation results are presented to demonstrate the effectiveness of the proposed neurofuzzy modelling scheme.
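The final scaling step, mapping the real-valued neurofuzzy output to a class probability via the logistic (sigmoid) transfer function, is a one-liner; the weight and bias below are illustrative placeholders for the fitted logistic regression parameters.

```python
import math

def class_probability(model_output, w=1.0, b=0.0):
    """Logistic (sigmoid) transfer scaling a real-valued model output to a
    class probability in (0, 1); w and b are the fitted logistic parameters."""
    return 1.0 / (1.0 + math.exp(-(w * model_output + b)))

p_pos = class_probability(2.0)    # strongly positive neurofuzzy output
p_neg = class_probability(-2.0)   # strongly negative output
```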
Abstract:
The Soil Moisture and Ocean Salinity (SMOS) satellite marks the commencement of dedicated global surface soil moisture missions, and the first mission to make passive microwave observations at L-band. On-orbit calibration is an essential part of the instrument calibration strategy, but on-board beam-filling targets are not practical for such large apertures. Therefore, areas to serve as vicarious calibration targets need to be identified. Such sites can only be identified through field experiments including both in situ and airborne measurements. For this purpose, two field experiments were performed in central Australia. Three areas are studied: 1) Lake Eyre, a typically dry salt lake; 2) Wirrangula Hill, with sparse vegetation and a dense cover of surface rock; and 3) the Simpson Desert, characterized by dry sand dunes. Of those sites, only Wirrangula Hill and the Simpson Desert are found to be potentially suitable targets, as they have a spatial variation in brightness temperatures of <4 K under normal conditions. However, some limitations are observed for the Simpson Desert, where a bias of 15 K in vertical and 20 K in horizontal polarization exists between model predictions and observations, suggesting a lack of understanding of the underlying physics in this environment. Subsequent comparison with model predictions indicates a SMOS bias of 5 K in vertical and 11 K in horizontal polarization, and an unbiased root mean square difference of 10 K in both polarizations for Wirrangula Hill. Most importantly, the SMOS observations show that the brightness temperature evolution is dominated by regular seasonal patterns and that precipitation events have little impact.
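The bias and unbiased root mean square difference statistics quoted above can be computed as follows; the brightness temperature values are hypothetical.

```python
import math

def bias_and_ubrmsd(model, obs):
    """Mean bias and unbiased RMSD (standard deviation of the differences)
    between modelled and observed brightness temperatures."""
    d = [m - o for m, o in zip(model, obs)]
    bias = sum(d) / len(d)
    ubrmsd = math.sqrt(sum((x - bias) ** 2 for x in d) / len(d))
    return bias, ubrmsd

# Hypothetical V-pol brightness temperatures (K)
model = [270.0, 268.0, 272.0, 269.0]
obs   = [264.0, 263.0, 266.0, 262.0]
bias, ubrmsd = bias_and_ubrmsd(model, obs)
```

Separating the bias from the unbiased RMSD distinguishes a systematic calibration offset from random scatter, which is why both are reported for the candidate targets.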
Abstract:
Most of the operational Sea Surface Temperature (SST) products derived from satellite infrared radiometry use multi-spectral algorithms. They show, in general, reasonable performance, with root mean square (RMS) residuals around 0.5 K when validated against buoy measurements, but have limitations, particularly a component of the retrieval error that relates to such algorithms' limited ability to cope with the full variability of atmospheric absorption and emission. We propose to use forecast atmospheric profiles and a radiative transfer model to simulate the algorithmic errors of multi-spectral algorithms. In the practical case of SST derived from the Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard Meteosat Second Generation (MSG), we demonstrate that simulated algorithmic errors do explain a significant component of the actual errors observed for the nonlinear (NL) split window algorithm in operational use at the Centre de Météorologie Spatiale (CMS). The simulated errors, used as correction terms, significantly reduce the regional biases of the NL algorithm as well as the standard deviation of the differences from drifting buoy measurements. The availability of atmospheric profiles associated with observed satellite-buoy differences allows us to analyze the origins of the main algorithmic errors observed in the SEVIRI field of view: a negative bias in the inter-tropical zone, and a mid-latitude positive bias. We demonstrate how these errors are explained by the sensitivity of observed brightness temperatures to the vertical distribution of water vapour, propagated through the SST retrieval algorithm.
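A sketch of the correction idea: a generic split-window retrieval with illustrative coefficients (not the operational CMS NL algorithm), from which a radiative-transfer-simulated algorithmic error is subtracted.

```python
def split_window_sst(t11, t12, a0=1.0, a1=1.0, a2=2.5):
    """Generic split-window retrieval: SST = a0 + a1*T11 + a2*(T11 - T12).
    Coefficients here are illustrative, not operational values."""
    return a0 + a1 * t11 + a2 * (t11 - t12)

def corrected_sst(t11, t12, simulated_error):
    """Subtract the algorithmic error simulated with forecast profiles
    and a radiative transfer model."""
    return split_window_sst(t11, t12) - simulated_error

# Hypothetical 11 and 12 micron brightness temperatures (K); a simulated
# error of -0.4 K mimics the moist-tropics negative bias described above.
raw = split_window_sst(290.0, 288.5)
fixed = corrected_sst(290.0, 288.5, simulated_error=-0.4)
```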
Abstract:
Models for water transfer in the crop-soil system are key components of agro-hydrological models for irrigation, fertilizer and pesticide practices. Many of the hydrological models for water transfer in the crop-soil system are either too approximate due to oversimplified algorithms or employ complex numerical schemes. In this paper we developed a simple and sufficiently accurate algorithm which can be easily adopted in agro-hydrological models for the simulation of water dynamics. We used a dual crop coefficient approach proposed by the FAO for estimating potential evaporation and transpiration, and a dynamic model for calculating relative root length distribution on a daily basis. In a small time step of 0.001 d, we implemented algorithms separately for actual evaporation, root water uptake and soil water content redistribution by decoupling these processes. The Richards equation describing soil water movement was solved using an integration strategy over the soil layers instead of complex numerical schemes. This drastically simplified the procedures of modeling soil water and led to much shorter computer codes. The validity of the proposed model was tested against data from field experiments on two contrasting soils cropped with wheat. Good agreement was achieved between measured and simulated soil water content at various depths, sampled at intervals during crop growth. This indicates that the model is satisfactory in simulating water transfer in the crop-soil system, and therefore can reliably be adopted in agro-hydrological models. Finally we demonstrated how the developed model could be used to study the effect of changes in the environment, such as lowering of the groundwater table caused by the construction of a motorway, on crop transpiration.
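The decoupled, small-time-step strategy can be sketched as below; this bucket-style drainage cascade is a stand-in for the paper's layer-integrated Richards equation, and all parameter values are assumptions.

```python
def step_soil_water(theta, dz, dt, evap, uptake, theta_fc, k_drain=0.5):
    """One small time step of a layered soil-water balance: evaporation,
    root uptake and drainage are applied sequentially (decoupled), as a
    simplified stand-in for the integrated Richards-equation scheme."""
    theta = theta[:]
    theta[0] = max(0.0, theta[0] - evap * dt / dz)   # surface evaporation
    for i, u in enumerate(uptake):                    # root water uptake
        theta[i] = max(0.0, theta[i] - u * dt / dz)
    for i in range(len(theta) - 1):                   # drain excess downward
        excess = max(0.0, theta[i] - theta_fc)
        move = k_drain * excess * dt
        theta[i] -= move
        theta[i + 1] += move
    return theta

# Hypothetical 3-layer profile, stepped for one day at dt = 0.001 d
theta = [0.40, 0.30, 0.25]            # volumetric water content per layer
for _ in range(1000):
    theta = step_soil_water(theta, dz=0.2, dt=0.001,
                            evap=0.002, uptake=[0.001, 0.001, 0.0005],
                            theta_fc=0.32)
```

With a 0.001 d step the explicit, decoupled updates remain stable without an iterative solver, which is the computational advantage the paper emphasises.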