23 results for Generalised Additive Model
Abstract:
Commonly used repair rate models for repairable systems in the reliability literature are renewal processes, generalised renewal processes and non-homogeneous Poisson processes. In addition to these models, geometric processes (GPs) are studied occasionally. The GP, however, can only model systems with monotonically changing (increasing, decreasing or constant) failure intensities. This paper deals with the reliability modelling of failure processes for repairable systems whose failure intensity shows a bathtub-type, non-monotonic behaviour. A new stochastic process, an extended Poisson process, is introduced. Reliability indices are presented, and the parameters of the new process are estimated. Experimental results on a data set demonstrate the validity of the new process.
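The monotonicity limitation of the GP that this abstract contrasts against can be seen in a short simulation. In a GP, the k-th inter-failure time is distributed as Y_k / a^(k-1) for a renewal sequence Y_k and a fixed ratio a, so the expected times shrink (a > 1) or grow (a < 1) geometrically and can never follow a bathtub shape. A minimal sketch, with exponential base times and illustrative parameter values:

```python
import random

def simulate_geometric_process(ratio, base_mean, n_events, seed=0):
    """Simulate inter-failure times of a geometric process (GP).

    The k-th inter-failure time is Y_k / ratio**(k-1), where the Y_k are
    i.i.d. exponential with mean `base_mean` (a renewal process).
    ratio > 1 gives stochastically decreasing times (a deteriorating
    system); ratio < 1 gives increasing times; ratio == 1 is a renewal
    process.  A bathtub-shaped intensity cannot arise from any single ratio.
    """
    rng = random.Random(seed)
    return [rng.expovariate(1.0 / base_mean) / ratio ** k
            for k in range(n_events)]

# a deteriorating system: inter-failure times shrink over the history
times = simulate_geometric_process(ratio=1.2, base_mean=10.0, n_events=200)
```

The extended Poisson process introduced in the paper is not specified in the abstract, so it is not sketched here; the function name and parameters above are illustrative only.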
Abstract:
Acrylamide levels in cooked/processed food can be reduced by treatment with citric acid or glycine. In a potato model system cooked at 180 °C for 10-60 min, these treatments affected the volatile profiles. Strecker aldehydes and alkylpyrazines, key flavor compounds of cooked potato, were monitored. Citric acid limited the generation of volatiles, particularly the alkylpyrazines. Glycine increased the total volatile yield by promoting the formation of certain alkylpyrazines, namely, 2,3-dimethylpyrazine, trimethylpyrazine, 2-ethyl-3,5-dimethylpyrazine, tetramethylpyrazine, and 2,5-diethyl-3-methylpyrazine. However, the formation of other pyrazines and Strecker aldehydes was suppressed. It was proposed that the opposing effects of these treatments on total volatile yield may be used to best advantage by employing a combined treatment at lower concentrations, especially as both treatments were found to have an additive effect in reducing acrylamide. This would minimize the impact on flavor but still achieve the desired reduction in acrylamide levels.
Abstract:
A neural network enhanced self-tuning controller is presented, which combines the attributes of neural network mapping with a generalised minimum variance self-tuning control (STC) strategy. In this way the controller can deal with nonlinear plants that may exhibit uncertainties, non-minimum-phase behaviour, coupling effects and unmodelled dynamics, and whose nonlinearities are assumed to be globally bounded. The unknown nonlinear plants to be controlled are approximated by an equivalent model composed of a simple linear submodel plus a nonlinear submodel. A generalised recursive least squares algorithm is used to identify the linear submodel, and a layered neural network is used to capture the unknown nonlinear submodel, with the weights updated based on the error between the plant output and the output of the linear submodel. The controller design procedure is based on the equivalent model; the nonlinear submodel is therefore naturally accommodated within the control law. Two simulation studies demonstrate the effectiveness of the control algorithm.
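The abstract's linear-submodel identification rests on recursive least squares. A minimal sketch of the standard RLS update (the paper uses a generalised variant whose details the abstract does not give; the forgetting factor and toy plant below are illustrative assumptions):

```python
import numpy as np

def rls_update(theta, P, x, y, lam=1.0):
    """One step of (exponentially weighted) recursive least squares.

    theta: current parameter estimate
    P:     covariance-like matrix of the estimate
    x:     regressor vector, y: scalar observation
    lam:   forgetting factor (1.0 = ordinary RLS)
    """
    Px = P @ x
    k = Px / (lam + x @ Px)              # gain vector
    theta = theta + k * (y - x @ theta)  # correct by prediction error
    P = (P - np.outer(k, Px)) / lam      # update covariance
    return theta, P

# identify a toy linear submodel y = 2*x1 - 3*x2 from noiseless data
rng = np.random.default_rng(0)
theta, P = np.zeros(2), np.eye(2) * 1e3
for _ in range(50):
    x = rng.standard_normal(2)
    y = 2.0 * x[0] - 3.0 * x[1]
    theta, P = rls_update(theta, P, x, y)
```

In the paper's scheme, the residual y - xᵀθ that drives this update would instead be fed (after the linear part is identified) to the neural network that models the nonlinear submodel.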
Abstract:
This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems whose basis functions are Bezier-Bernstein polynomials. The algorithm is general in that it copes with n-dimensional inputs by using an additive decomposition to overcome the curse of dimensionality associated with high n. Univariate Bezier-Bernstein polynomial functions are also introduced for completeness of the generalized procedure. Like B-spline-based neurofuzzy systems, Bezier-Bernstein polynomial networks hold desirable properties such as nonnegativity of the basis functions, partition of unity, and interpretability of the basis functions as fuzzy membership functions, with the additional advantages of structural parsimony and Delaunay input-space partitioning, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. Model construction combines the additive decomposition with two separate basis-function formation approaches for the univariate and bivariate Bezier-Bernstein polynomials. The overall network weights are then learnt using conventional least-squares methods. Numerical examples demonstrate the effectiveness of this data-based modeling approach.
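The properties the abstract attributes to the basis functions (nonnegativity and summing to one, which is what lets them be read as fuzzy membership functions) follow directly from the definition of the univariate Bernstein polynomials, B_{i,n}(x) = C(n, i) x^i (1 - x)^(n - i). A minimal sketch:

```python
from math import comb

def bernstein_basis(n, x):
    """Degree-n Bernstein polynomial basis evaluated at x in [0, 1].

    Returns [B_{0,n}(x), ..., B_{n,n}(x)] with
    B_{i,n}(x) = C(n, i) * x**i * (1 - x)**(n - i).
    Each value is nonnegative and the values sum to one, so they can be
    interpreted as fuzzy membership grades.
    """
    return [comb(n, i) * x**i * (1 - x)**(n - i) for i in range(n + 1)]

basis = bernstein_basis(3, 0.4)
```

A neurofuzzy network output along one input dimension is then just a weighted sum of these basis values; the additive decomposition in the paper sums such low-dimensional submodels instead of building one high-dimensional basis.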
Response of the middle atmosphere to CO2 doubling: results from the Canadian Middle Atmosphere Model
Abstract:
The Canadian Middle Atmosphere Model (CMAM) has been used to examine the middle atmosphere response to CO2 doubling. The radiative-photochemical response induced by doubling CO2 alone and the response produced by changes in prescribed SSTs are found to be approximately additive, with the former effect dominating throughout the middle atmosphere. The paper discusses the overall response, with emphasis on the effects of SST changes, which allow a tropospheric response to the CO2 forcing. The overall response is a cooling of the middle atmosphere accompanied by significant increases in the ozone and water vapor abundances. The ozone radiative feedback occurs through both an increase in solar heating and a decrease in infrared cooling, with the latter accounting for up to 15% of the total effect. Changes in global mean water vapor cooling are negligible above ~30 hPa. Near the polar summer mesopause, the temperature response is weak and not statistically significant. The main effects of SST changes are a warmer troposphere, a warmer and higher tropopause, cell-like structures of heating and cooling at low and middle latitudes in the middle atmosphere, warming in the summer mesosphere, water vapor increase throughout the domain, and O3 decrease in the lower tropical stratosphere. No noticeable change in upward-propagating planetary wave activity in the extratropical winter–spring stratosphere and no significant temperature response in the polar winter–spring stratosphere have been detected. Increased upwelling in the tropical stratosphere has been found to be linked to changed wave driving at low latitudes.
Abstract:
Neurovascular coupling in response to stimulation of the rat barrel cortex was investigated using concurrent multichannel electrophysiology and laser Doppler flowmetry. The data were used to build a linear dynamic model relating neural activity to blood flow. Local field potential time series were subjected to current source density analysis, and the time series of a layer IV sink of the barrel cortex was used as the input to the model. The model output was the time series of the changes in regional cerebral blood flow (CBF). We show that this model provides an excellent fit to the CBF responses for stimulus durations of up to 16 s. The model consisted of two coupled components representing vascular dilation and constriction, and the complex temporal characteristics of the CBF time series were reproduced by the relatively simple balance of these two components. We show that the impulse response obtained under the 16-s stimulation condition generalised to give good predictions of the data from the shorter-duration stimulation conditions. Furthermore, by optimising three of the nine model parameters, the variability in the data can be well accounted for over a wide range of stimulus conditions. By establishing linearity, classic system analysis methods can be used to generate and explore a range of equivalent model structures (e.g., feed-forward or feedback) to guide the experimental investigation of the control of vascular dilation and constriction following stimulation.
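The practical consequence of the linearity established in this abstract is that the CBF response to any stimulus is the stimulus convolved with one fixed impulse response. A minimal sketch, using an illustrative two-exponential kernel as a stand-in for the paper's actual dilation/constriction components (the time constants and 30-s kernel support are assumptions, not the paper's fitted parameters):

```python
import numpy as np

def cbf_response(stimulus, dt=0.1, tau_slow=3.0, tau_fast=1.0):
    """Linear prediction of a CBF time series.

    The impulse response h(t) is the balance of two exponential
    components (illustrative stand-ins for dilation and constriction);
    the predicted CBF is the stimulus convolved with h.
    """
    t = np.arange(0.0, 30.0, dt)
    h = np.exp(-t / tau_slow) - np.exp(-t / tau_fast)
    return np.convolve(stimulus, h)[: len(stimulus)] * dt

# 16-s boxcar stimulus sampled at 10 Hz, followed by 14 s of rest
stim = np.zeros(300)
stim[:160] = 1.0
cbf = cbf_response(stim)
```

The linearity test the paper exploits is exactly the property this sketch has by construction: scaling or shortening the stimulus scales and reshapes the response through the same kernel, so a kernel fitted at 16 s can predict shorter-duration conditions.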
Abstract:
A Bayesian analysis is given of an instrumental variable model that allows for heteroscedasticity in both the structural equation and the instrument equation. Specifically, the approach for dealing with heteroscedastic errors in Geweke (1993) is extended to the Bayesian instrumental variable estimator outlined in Rossi et al. (2005). Heteroscedasticity is treated by modelling the variance of each error with a Gamma-distributed hierarchical prior. The computation is carried out by a Markov chain Monte Carlo sampling algorithm with an augmented draw for the heteroscedastic case. An example using real data illustrates the approach and shows that ignoring heteroscedasticity in the instrument equation, when it exists, may lead to biased estimates.
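The "augmented draw" mentioned in the abstract can be illustrated with the core step of Geweke's (1993) treatment: each error is N(0, σ²λᵢ) with prior ν/λᵢ ~ χ²(ν), and the λᵢ then have independent conditional posteriors (ν + eᵢ²/σ²)/λᵢ ~ χ²(ν + 1). A minimal sketch of that single Gibbs step, assuming a unit structural variance σ² = 1 (the function name and test residuals are illustrative):

```python
import numpy as np

def draw_variance_scales(resid, nu, rng):
    """One augmented Gibbs draw of per-observation variance scales.

    Under the hierarchical prior nu / lambda_i ~ chi2(nu), the
    conditional posterior given residual e_i (with unit structural
    variance) is (nu + e_i**2) / lambda_i ~ chi2(nu + 1), so each
    lambda_i is drawn independently.
    """
    resid = np.asarray(resid, dtype=float)
    return (nu + resid**2) / rng.chisquare(nu + 1, size=resid.shape)

rng = np.random.default_rng(1)
scales = draw_variance_scales([0.1, -2.5, 0.3], nu=5.0, rng=rng)
```

In the full sampler this draw alternates with the usual draws of the structural and instrument-equation coefficients; observations with large residuals tend to receive large λᵢ and are thereby downweighted.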
Abstract:
Mechanistic catchment-scale phosphorus models appear to perform poorly where diffuse sources dominate. We investigate the reasons for this for one model, INCA-P, testing model output against 18 months of daily data in a small Scottish catchment. We examine key model processes and provide recommendations for model improvement and simplification. Improvements to the particulate phosphorus simulation are especially needed. The model evaluation procedure is then generalised to provide a checklist for identifying why model performance may be poor or unreliable, incorporating calibration, data, structural and conceptual challenges. There needs to be greater recognition that current models struggle to produce positive Nash–Sutcliffe statistics in agricultural catchments when evaluated against daily data. Phosphorus modelling is difficult, but models are not as useless as this might suggest. We found that a combination of correlation coefficients, bias, comparison of distributions and visual assessment of the time series is a better means of identifying realistic simulations.
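The Nash–Sutcliffe statistic the abstract refers to is NSE = 1 − SSE/SST: 1 for a perfect fit, 0 when the model does no better than the observed mean, and negative (as the paper says is common for daily phosphorus data) when it does worse. A minimal sketch:

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency of a simulated series.

    NSE = 1 - SSE / SST, where SSE is the sum of squared errors and SST
    is the sum of squared deviations of the observations from their mean.
    NSE = 1 is a perfect fit; NSE = 0 matches the skill of predicting
    the observed mean; NSE < 0 is worse than the mean.
    """
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    sst = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / sst
```

Because NSE penalises squared errors, a few mistimed storm-flow peaks in a daily record can drive it negative even when the broad behaviour is right, which is why the paper prefers combining it with correlation, bias and distributional checks.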