893 results for unknown-input estimation


Relevance: 30.00%

Abstract:

This paper introduces a new neurofuzzy model construction and parameter estimation algorithm from observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-base matrix subspace to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix that consists of the corresponding fuzzy membership functions over the training data set. Model transparency is explored by the derivation of an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality criterion. The A-optimality experimental design criterion of the weighting matrices of fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. The new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high-dimensional regression problems, where it is computationally desirable to decompose a complex model into a few submodels rather than a single model with a large number of input variables and the associated curse of dimensionality. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
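As a concrete illustration of the baseline that the extended algorithm builds on, the sketch below implements classical Gram-Schmidt orthogonal least squares with per-regressor error reduction ratios on toy data; the function name and data are illustrative, not the paper's extended subspace decomposition.

```python
# A minimal sketch of classical Gram-Schmidt orthogonal least squares (OLS),
# the baseline that the paper's extended subspace algorithm builds on.
# Function name and toy data are illustrative, not taken from the paper.
import numpy as np

def gram_schmidt_ols(P, y):
    """Orthogonalize regression matrix P column by column and compute each
    column's error reduction ratio (ERR) with respect to output y."""
    n, m = P.shape
    W = np.zeros((n, m))            # orthogonalized columns
    err = np.zeros(m)
    for k in range(m):
        w = P[:, k].astype(float).copy()
        for j in range(k):          # remove projections onto earlier columns
            w -= (W[:, j] @ w) / (W[:, j] @ W[:, j]) * W[:, j]
        W[:, k] = w
        g = (w @ y) / (w @ w)       # coefficient in the orthogonal basis
        err[k] = g**2 * (w @ w) / (y @ y)   # share of output energy explained
    return W, err

rng = np.random.default_rng(0)
P = rng.standard_normal((200, 5))
y = 2.0 * P[:, 1] - 0.5 * P[:, 3] + 0.1 * rng.standard_normal(200)
_, err = gram_schmidt_ols(P, y)
print(np.round(err, 3))             # columns 1 and 3 carry almost all the energy
```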

Relevance: 30.00%

Abstract:

A new robust neurofuzzy model construction algorithm is introduced for the modeling of a priori unknown dynamical systems from observed finite data sets in the form of a set of fuzzy rules. Based on a Takagi-Sugeno (T-S) inference mechanism, a one-to-one mapping between a fuzzy rule base and a model matrix feature subspace is established. This link enables rule-based knowledge to be extracted from the matrix subspace to enhance model transparency. To maximize model robustness and sparsity, a new robust extended Gram-Schmidt (G-S) method is introduced via two effective and complementary approaches: regularization and D-optimality experimental design. Model rule bases are decomposed into orthogonal subspaces, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. A locally regularized orthogonal least squares algorithm, combined with a D-optimality criterion for subspace-based rule selection, is extended for fuzzy rule regularization and subspace-based information extraction. By using a weighting for the D-optimality cost function, the entire model construction procedure becomes automatic. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
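The sketch below shows the D-optimality idea in isolation: scoring a candidate set of rule columns by the log-determinant of its information matrix, so nearly redundant rules are penalised. The weighting beta and the toy data are illustrative assumptions, not the paper's combined regularized cost.

```python
# A hedged sketch of a D-optimality score for rule selection: rank a
# candidate set of rule columns by the log-determinant of its information
# matrix, so nearly collinear (redundant) rules are penalised. The weighting
# beta is an illustrative stand-in for the paper's cost-function weighting.
import numpy as np

def d_optimality_cost(W_sel, beta=1.0):
    """Negative weighted log det of the information matrix of the selected
    columns; lower cost means a better-conditioned rule base."""
    sign, logdet = np.linalg.slogdet(W_sel.T @ W_sel)
    return -beta * logdet if sign > 0 else np.inf

rng = np.random.default_rng(1)
W = rng.standard_normal((100, 6))
W[:, 5] = W[:, 0] + 1e-3 * rng.standard_normal(100)   # a nearly redundant rule
print(d_optimality_cost(W[:, :5]))   # well-conditioned selection: low cost
print(d_optimality_cost(W))          # redundant rule added: cost jumps
```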

Relevance: 30.00%

Abstract:

This paper presents a controller design scheme for a priori unknown non-linear dynamical processes that are identified from process data via an operating-point neurofuzzy system. Based on a neurofuzzy design and model construction algorithm (NeuDec) for a non-linear dynamical process, a neurofuzzy state-space model of controllable form is initially constructed. A control scheme based on closed-loop pole assignment is then utilized to ensure time invariance and linearization of the state equations, so that system stability can be guaranteed under some mild assumptions, even in the presence of modelling error. The proposed approach requires a known state vector for the application of pole-assignment state feedback. For this purpose, a generalized Kalman filtering algorithm with coloured noise is developed on the basis of the neurofuzzy state-space model to obtain an optimal state vector estimate. The derived controller is applied to typical output tracking problems by minimizing the tracking error. Simulation examples are included to demonstrate the operation and effectiveness of the new approach.
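A minimal sketch of the scheme's two generic ingredients on a toy linear discrete-time model: pole-assignment state feedback, plus a Kalman filter to supply the state estimate the feedback law needs. The matrices are hand-picked toy values, not the NeuDec neurofuzzy model, and the filter shown is the standard white-noise form, not the paper's coloured-noise extension.

```python
# Pole-assignment state feedback plus a Kalman filter on a toy model; all
# values are illustrative assumptions, not the paper's neurofuzzy system.
import numpy as np
from scipy.signal import place_poles

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])                 # discrete-time state matrix
B = np.array([[0.005],
              [0.1]])
K = place_poles(A, B, [0.80, 0.85]).gain_matrix   # feedback u = -K @ x_hat

C = np.array([[1.0, 0.0]])                 # only the first state is measured
Q, R = 1e-4 * np.eye(2), np.array([[1e-2]])

def kalman_step(x, P, u, y):
    """One predict/update cycle returning the new state estimate and cov."""
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    S = C @ P_pred @ C.T + R               # innovation covariance
    G = P_pred @ C.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + G @ (y - C @ x_pred)
    return x_new, (np.eye(2) - G @ C) @ P_pred

# closed loop: u = -K @ x_hat; x_hat, P0 = kalman_step(x_hat, P0, u, y_measured)
```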

Relevance: 30.00%

Abstract:

In a world of almost permanent and rapidly increasing electronic data availability, techniques for filtering, compressing, and interpreting this data to transform it into valuable and easily comprehensible information are of utmost importance. One key topic in this area is the capability to deduce future system behavior from a given data input. This book brings together for the first time the complete theory of data-based neurofuzzy modelling and the linguistic attributes of fuzzy logic in a single cohesive mathematical framework. After introducing the basic theory of data-based modelling, new concepts including extended additive and multiplicative submodels are developed and their extensions to state estimation and data fusion are derived. All these algorithms are illustrated with benchmark and real-life examples to demonstrate their efficiency. Chris Harris and his group have carried out pioneering work which has tied together the fields of neural networks and linguistic rule-based algorithms. This book is aimed at researchers and scientists in time series modeling, empirical data modeling, knowledge discovery, data mining, and data fusion.

Relevance: 30.00%

Abstract:

A novel algorithm for solving nonlinear discrete time optimal control problems with model-reality differences is presented. The technique uses dynamic integrated system optimisation and parameter estimation (DISOPE) which achieves the correct optimal solution in spite of deficiencies in the mathematical model employed in the optimisation procedure. A new method for approximating some Jacobian trajectories required by the algorithm is introduced. It is shown that the iterative procedure associated with the algorithm naturally suits applications to batch chemical processes.
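As a rough illustration of the model-reality idea (not DISOPE itself, which adds modifier terms to guarantee convergence to the true plant optimum), the sketch below alternates a point parameter-matching step with re-optimisation of the control on the corrected model; the scalar plant, model, and cost are hypothetical.

```python
# A schematic model-reality iteration in its simplest form: re-fit a model
# parameter so the model matches the "real" plant at the current control,
# then re-optimise the control on the corrected model. This omits the
# modifier terms that give DISOPE its exact-optimum guarantee; the scalar
# plant, model, and cost below are hypothetical stand-ins.
import numpy as np
from scipy.optimize import minimize_scalar

plant = lambda u: 2.0 * u - 0.3 * u**2          # "reality" (unknown in practice)
model = lambda u, p: p * u                      # deficient model, parameter p
cost  = lambda x, u: (x - 1.0)**2 + 0.1 * u**2  # reach x = 1 with small effort

u, p = 0.5, 1.0
for _ in range(20):
    p = plant(u) / u                            # estimation: match plant at u
    res = minimize_scalar(lambda v: cost(model(v, p), v),
                          bounds=(0.1, 2.0), method="bounded")
    u = 0.5 * u + 0.5 * res.x                   # relaxed control update
print(u, cost(plant(u), u))                     # settles despite the model error
```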

Relevance: 30.00%

Abstract:

Estimating snow mass at continental scales is difficult, but important for understanding land-atmosphere interactions, biogeochemical cycles and the hydrology of the Northern latitudes. Remote sensing provides the only consistent global observations, but with unknown errors. We test the theoretical performance of the Chang algorithm for estimating snow mass from passive microwave measurements using the Helsinki University of Technology (HUT) snow microwave emission model. The algorithm's dependence upon assumptions of fixed and uniform snow density and grain size is determined, and measurements of these properties made at the Cold Land Processes Experiment (CLPX) Colorado field site in 2002–2003 are used to quantify the retrieval errors caused by differences between the algorithm assumptions and measurements. Deviation from the Chang algorithm's snow density and grain size assumptions gives rise to an error of a factor of between two and three in the calculated snow mass. The possibility that the algorithm performs more accurately over large areas than at points is tested by simulating emission from a 25 km diameter area of snow with a distribution of properties derived from the snow pit measurements, using the Chang algorithm to calculate mean snow mass from the simulated emission. The snow mass estimate from a site exhibiting the heterogeneity of the CLPX Colorado site proves only marginally different from that for a similarly simulated homogeneous site. The estimation accuracy predictions are tested using the CLPX field measurements of snow mass, and simultaneous SSM/I and AMSR-E measurements.
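A minimal sketch of a Chang-type retrieval: snow depth from the 19 GHz minus 37 GHz horizontally polarised brightness temperature difference, converted to snow mass under the algorithm's fixed density and grain size assumptions. The coefficient (about 1.59 cm/K) and density (about 300 kg m^-3) are the commonly quoted values, stated here as assumptions rather than the paper's exact constants.

```python
# Chang-type snow mass retrieval sketch; coefficient and density are the
# commonly quoted assumed values, not verified against the paper.
def chang_snow_mass(tb19h_k, tb37h_k, density_kg_m3=300.0):
    """Snow water equivalent (kg m^-2, i.e. mm) from brightness temps (K)."""
    depth_cm = max(1.59 * (tb19h_k - tb37h_k), 0.0)  # negative diff -> no snow
    return density_kg_m3 * depth_cm / 100.0

print(chang_snow_mass(250.0, 230.0))   # a 20 K difference -> roughly 95 mm SWE
```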

Relevance: 30.00%

Abstract:

There is a current need to constrain the parameters of gravity wave drag (GWD) schemes in climate models using observational information instead of tuning them subjectively. In this work, an inverse technique is developed using data assimilation principles to estimate gravity wave parameters. Because most GWD schemes assume instantaneous vertical propagation of gravity waves within a column, observations in a single column can be used to formulate a one-dimensional assimilation problem to estimate the unknown parameters. We define a cost function that measures the differences between the unresolved drag inferred from observations (referred to here as the ‘observed’ GWD) and the GWD calculated with a parametrisation scheme. The geometry of the cost function presents some difficulties, including multiple minima and ill-conditioning because of the non-independence of the gravity wave parameters. To overcome these difficulties we propose a genetic algorithm to minimize the cost function, which provides a robust parameter estimation over a broad range of prescribed ‘true’ parameters. When real experiments using an independent estimate of the ‘observed’ GWD are performed, physically unrealistic values of the parameters can result due to the non-independence of the parameters. However, by constraining one of the parameters to lie within a physically realistic range, this degeneracy is broken and the other parameters are also found to lie within physically realistic ranges. This argues for the essential physical self-consistency of the gravity wave scheme. A much better fit to the observed GWD at high latitudes is obtained when the parameters are allowed to vary with latitude. However, a close fit can be obtained either in the upper or the lower part of the profiles, but not in both at the same time. This result is a consequence of assuming an isotropic launch spectrum. The changes of sign in the GWD found in the tropical lower stratosphere, which are associated with part of the quasi-biennial oscillation forcing, cannot be captured by the parametrisation with optimal parameters.
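The sketch below shows a genetic-algorithm-style minimisation (selection plus mutation; crossover omitted for brevity) on a stand-in cost surface with the kind of valley-shaped degeneracy the paper describes, where many parameter pairs fit nearly equally well; the cost function is hypothetical, not the GWD misfit.

```python
# Genetic-algorithm-style minimisation sketch on a degenerate toy cost:
# many (a, b) pairs with a*b near 6 fit almost equally well, mimicking the
# non-independence of parameters described in the paper.
import numpy as np
rng = np.random.default_rng(42)

def cost(theta):
    a, b = theta
    return (a * b - 6.0)**2 + 0.01 * (a - 2.0)**2   # weakly broken degeneracy

pop = rng.uniform(0.1, 10.0, size=(50, 2))          # initial population
for _ in range(100):
    fit = np.array([cost(t) for t in pop])
    parents = pop[np.argsort(fit)[:25]]             # selection: keep best half
    kids = parents[rng.integers(0, 25, 25)] + rng.normal(0, 0.2, (25, 2))
    pop = np.vstack([parents, np.clip(kids, 0.1, 10.0)])
best = min(pop, key=cost)
print(best, cost(best))   # near a = 2, b = 3 once the tie-break term acts
```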

Relevance: 30.00%

Abstract:

In this paper, we present a polynomial-based noise variance estimator for multiple-input multiple-output single-carrier block transmission (MIMO-SCBT) systems. It is shown that the optimal pilots for noise variance estimation satisfy the same condition as that for channel estimation. Theoretical analysis indicates that the proposed estimator is statistically more efficient than the conventional sum of squared residuals (SSR) based estimator. Furthermore, we obtain an efficient implementation of the estimator by exploiting its special structure. Numerical results confirm our theoretical analysis.
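For context, the sketch below implements the conventional SSR-based estimator the paper benchmarks against: estimate the channel by least squares from known pilots, then average the residual energy, correcting for the degrees of freedom consumed by the channel fit. Dimensions and the pilot model are illustrative, not the MIMO-SCBT block structure.

```python
# Conventional sum-of-squared-residuals (SSR) noise variance estimation on a
# toy pilot model; dimensions are illustrative, not the paper's setup.
import numpy as np
rng = np.random.default_rng(7)

N, L, sigma2 = 64, 8, 0.1                    # pilot length, channel taps, noise
A = (rng.standard_normal((N, L)) + 1j * rng.standard_normal((N, L))) / np.sqrt(2)
h = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
w = np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
y = A @ h + w

h_ls = np.linalg.lstsq(A, y, rcond=None)[0]  # LS channel estimate
r = y - A @ h_ls                             # residuals
sigma2_hat = (r.conj() @ r).real / (N - L)   # unbiased SSR estimate
print(sigma2, sigma2_hat)
```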

Relevance: 30.00%

Abstract:

It is well known that there is a dynamic relationship between cerebral blood flow (CBF) and cerebral blood volume (CBV). With increasing applications of functional MRI, where blood-oxygen-level-dependent signals are recorded, the understanding and accurate modeling of the hemodynamic relationship between CBF and CBV becomes increasingly important. This study presents an empirical, data-based modeling framework for model identification from CBF and CBV experimental data. It is shown that the relationship between the changes in CBF and CBV can be described using a parsimonious autoregressive with exogenous input (ARX) model structure. It is observed that neither the ordinary least-squares (LS) method nor the classical total least-squares (TLS) method can produce accurate estimates from the original noisy CBF and CBV data. A regularized total least-squares (RTLS) method is thus introduced and extended to solve this errors-in-variables problem. Quantitative results show that the RTLS method works very well on the noisy CBF and CBV data. Finally, a combination of RTLS with a filtering method can lead to a parsimonious but very effective model that characterizes the relationship between the changes in CBF and CBV.
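The sketch below contrasts ordinary LS with plain total least squares via the SVD, the starting point that the study's regularized TLS extends: when the regressors themselves are noisy (the errors-in-variables setting), LS estimates are attenuated while TLS recovers the true coefficients. Data are synthetic, not the CBF/CBV measurements.

```python
# LS vs classical TLS (via the smallest right singular vector of [X | y])
# on a synthetic errors-in-variables problem.
import numpy as np
rng = np.random.default_rng(3)

n = 2000
X_true = rng.standard_normal((n, 2))
theta_true = np.array([1.5, -0.7])
X = X_true + 0.5 * rng.standard_normal((n, 2))   # noise on the regressors
y = X_true @ theta_true + 0.5 * rng.standard_normal(n)

theta_ls = np.linalg.lstsq(X, y, rcond=None)[0]  # biased towards zero
_, _, Vt = np.linalg.svd(np.column_stack([X, y]))
v = Vt[-1]                                       # smallest right singular vector
theta_tls = -v[:2] / v[2]                        # classical TLS solution
print(theta_true, theta_ls, theta_tls)
```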

Relevance: 30.00%

Abstract:

Sea surface temperature (SST) can be estimated from day and night observations of the Spinning Enhanced Visible and Infra-Red Imager (SEVIRI) by optimal estimation (OE). We show that exploiting the 8.7 μm channel, in addition to the “traditional” wavelengths of 10.8 and 12.0 μm, improves OE SST retrieval statistics in validation. However, the main benefit is an improvement in the sensitivity of the SST estimate to variability in true SST. In a fair, single-pixel comparison, the 3-channel OE gives better results than the SST estimation technique presently operational within the Ocean and Sea Ice Satellite Application Facility. The operational technique uses SST retrieval coefficients, followed by a bias-correction step informed by radiative transfer simulation. However, it also applies an “atmospheric correction smoothing”, which improves its noise performance and hitherto had no analogue within the OE framework. Here, we propose an analogue to atmospheric correction smoothing, based on the expectation that atmospheric total column water vapour has a longer spatial correlation length scale than SST features. The approach extends the observations input to the OE to include the averaged brightness temperatures (BTs) of nearby clear-sky pixels, in addition to the BTs of the pixel for which SST is being retrieved. The retrieved quantities are then the single-pixel SST and the clear-sky total column water vapour averaged over the vicinity of the pixel. This reduces the noise in the retrieved SST significantly. The robust standard deviation of the new OE SST compared to matched drifting buoys becomes 0.39 K for all data. The smoothed OE gives an SST sensitivity of 98% on average. This means that diurnal temperature variability and ocean frontal gradients are more faithfully estimated, and that the influence of the prior SST used is minimal (2%). This benefit is not available using traditional atmospheric correction smoothing.
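A minimal linear optimal-estimation step of the general kind used here: a prior state (SST and total column water vapour) is updated with brightness temperature departures through a Jacobian, and the averaging kernel gives the SST sensitivity the paper reports. All matrix values are illustrative, not the SEVIRI forward model.

```python
# Linear OE retrieval step with illustrative prior, Jacobian, and noise.
import numpy as np

x_a = np.array([290.0, 30.0])            # prior SST (K), TCWV (kg m^-2)
S_a = np.diag([2.0**2, 10.0**2])         # prior covariance
K = np.array([[0.9, -0.15],              # d(BT)/d(state), 3 channels
              [1.0, -0.30],
              [0.8, -0.45]])
S_e = 0.1**2 * np.eye(3)                 # channel noise covariance
d = np.array([0.8, 0.5, 0.1])            # observed minus simulated BTs

G = np.linalg.solve(K.T @ np.linalg.inv(S_e) @ K + np.linalg.inv(S_a),
                    K.T @ np.linalg.inv(S_e))   # gain matrix
x_hat = x_a + G @ d
A = G @ K                                # averaging kernel
print(x_hat, A[0, 0])                    # A[0, 0] near 1 = high SST sensitivity
```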

Relevance: 30.00%

Abstract:

Background: Dietary assessment methods are important tools for nutrition research. Online dietary assessment tools have the potential to become invaluable methods of assessing dietary intake because, compared with traditional methods, they have many advantages, including the automatic storage of input data and the immediate generation of nutritional outputs. Objective: The aim of this study was to develop an online food frequency questionnaire (FFQ) for dietary data collection in the “Food4Me” study and to compare it with the validated European Prospective Investigation of Cancer (EPIC) Norfolk printed FFQ. Methods: The Food4Me FFQ used in this analysis was developed to consist of 157 food items. Standardized color photographs were incorporated in the development of the Food4Me FFQ to facilitate accurate quantification of the portion size of each food item. Participants were recruited in two centers (Dublin, Ireland and Reading, United Kingdom) and each received the online Food4Me FFQ and the printed EPIC-Norfolk FFQ in random order. Participants completed the Food4Me FFQ online and, for most food items, were asked to choose their usual serving size among seven possibilities from a range of portion size pictures. The level of agreement between the two methods was evaluated for both nutrient and food group intakes using the Bland and Altman method and classification into quartiles of daily intake. Correlations were calculated for nutrient and food group intakes. Results: A total of 113 participants were recruited, with a mean age of 30 (SD 10) years (male: 40.7%, 46/113; female: 59.3%, 67/113). Cross-classification into exact plus adjacent quartiles ranged from 77% to 97% at the nutrient level and 77% to 99% at the food group level. Agreement at the nutrient level was highest for alcohol (97%) and lowest for percent energy from polyunsaturated fatty acids (77%). Crude unadjusted correlations for nutrients ranged between .43 and .86. Agreement at the food group level was highest for “other fruits” (eg, apples, pears, oranges) and lowest for “cakes, pastries, and buns”. For food groups, correlations ranged between .41 and .90. Conclusions: The results demonstrate that the online Food4Me FFQ has good agreement with the validated printed EPIC-Norfolk FFQ for assessing both nutrient and food group intakes, rendering it a useful tool for ranking individuals based on nutrient and food group intakes.
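A sketch of the two agreement measures used: Bland-Altman bias and limits of agreement, and cross-classification into exact-plus-adjacent quartiles. The paired intakes are simulated, not Food4Me data.

```python
# Bland-Altman limits of agreement and quartile cross-classification on
# simulated paired intake data.
import numpy as np
rng = np.random.default_rng(11)

a = rng.lognormal(3.0, 0.4, 113)             # method 1, e.g. online FFQ
b = a * np.exp(rng.normal(0.0, 0.15, 113))   # method 2, correlated with method 1

diff = a - b                                  # Bland-Altman on the raw scale
loa = 1.96 * diff.std(ddof=1)
print(f"bias {diff.mean():.2f}, limits of agreement +/- {loa:.2f}")

qa = np.searchsorted(np.quantile(a, [0.25, 0.5, 0.75]), a)   # quartiles 0..3
qb = np.searchsorted(np.quantile(b, [0.25, 0.5, 0.75]), b)
print("exact+adjacent agreement:", np.mean(np.abs(qa - qb) <= 1))
```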

Relevance: 30.00%

Abstract:

This paper presents results of the AQL2004 project, which was developed within the GOFC-GOLD Latin American network of remote sensing and forest fires (RedLatif). The project aimed to obtain monthly burned-land maps of the entire region, from Mexico to Patagonia, using MODIS (moderate-resolution imaging spectroradiometer) reflectance data. The project was organized in three phases: acquisition and preprocessing of satellite data; discrimination of burned pixels; and validation of results. In the first phase, input data consisting of 32-day composites of MODIS 500-m reflectance data generated by the Global Land Cover Facility (GLCF) of the University of Maryland (College Park, Maryland, U.S.A.) were collected and processed. The discrimination of burned areas was addressed in two steps: searching for "burned core" pixels using postfire spectral indices and multitemporal change detection, and mapping of burned scars using contextual techniques. The validation phase was based on visual analysis of Landsat and CBERS (China-Brazil Earth Resources Satellite) images. Validation of the burned-land category showed an agreement ranging from 30% to 60%, depending on the ecosystem and vegetation species present. The total burned area for the entire year was estimated at 153 215 km². The countries most affected relative to their territory were Cuba, Colombia, Bolivia, and Venezuela. Burned areas were found in most land covers; herbaceous vegetation (savannas and grasslands) presented the highest proportions of burned area, while perennial forest had the lowest. The importance of croplands in the total burned area should be interpreted with caution, since this cover presented the highest commission errors. The importance of generating systematic burned-land products for different ecological processes is emphasized.
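A schematic of the two-step discrimination: seed high-confidence "burned core" pixels from a post-fire drop in a spectral index, then grow scars contextually from those seeds. The index (NBR), thresholds, and 4-neighbour growing rule (with wrap-around edges, for brevity) are illustrative assumptions, not the AQL2004 settings.

```python
# Two-step burned-area discrimination sketch: index-based cores, then
# contextual region growing. All parameters are illustrative.
import numpy as np

def nbr(nir, swir):                          # normalised burn ratio
    return (nir - swir) / (nir + swir + 1e-9)

def map_burned(nir_pre, swir_pre, nir_post, swir_post,
               core_thr=0.30, grow_thr=0.15):
    dnbr = nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)
    burned = dnbr > core_thr                 # step 1: burned cores
    changed = True
    while changed:                           # step 2: contextual growing
        neigh = (np.roll(burned, 1, 0) | np.roll(burned, -1, 0) |
                 np.roll(burned, 1, 1) | np.roll(burned, -1, 1))
        new = (dnbr > grow_thr) & neigh & ~burned
        changed = bool(new.any())
        burned = burned | new
    return burned

rng = np.random.default_rng(9)
pre = rng.uniform(0.2, 0.6, (50, 50, 2))     # toy NIR/SWIR reflectances
post = pre.copy()
post[10:20, 10:20, 0] *= 0.4                 # a "burn" suppresses NIR
scar = map_burned(pre[..., 0], pre[..., 1], post[..., 0], post[..., 1])
print(scar.sum(), "pixels flagged as burned")
```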

Relevance: 30.00%

Abstract:

Partial budgeting was used to estimate the net benefit of blending Jersey milk with Holstein-Friesian milk for Cheddar cheese production. Jersey milk increases Cheddar cheese yield; however, the cost of Jersey milk is also higher, so the balance of profitability must be determined, including consideration of seasonal effects. Input variables were based on a pilot plant experiment run from 2012 to 2013 and on industry milk and cheese prices during this period. When Jersey milk was used at increasing rates with Holstein-Friesian milk (25, 50, 75, and 100% Jersey milk), it resulted in an increase in average net profit of 3.41, 6.44, 8.57, and 11.18 pence per kilogram of milk, respectively, and this additional profit was constant throughout the year. Sensitivity analysis showed that the most influential input on additional profit was cheese yield, whereas cheese price and milk price had a small effect. The minimum increase in yield necessary for the use of Jersey milk to be profitable was 2.63, 7.28, 9.95, and 12.37% at 25, 50, 75, and 100% Jersey milk, respectively. Including Jersey milk did not affect the quantity of whey butter and powder produced. Although further research is needed to ascertain the additional profit that would be found at commercial scale, the results indicate that using Jersey milk for Cheddar cheese making would improve profit for cheese makers, especially at higher inclusion rates.
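The partial-budget arithmetic reduces to extra cheese revenue from the yield gain minus the extra cost of the milk blend, per kilogram of milk; the sketch below uses illustrative prices and yields, not the study's 2012-2013 figures.

```python
# Partial-budget net benefit per kg of milk; all inputs are illustrative.
def net_benefit_per_kg_milk(base_yield,       # kg cheese per kg milk
                            yield_gain,       # fractional increase, e.g. 0.05
                            cheese_price,     # pence per kg cheese
                            extra_milk_cost): # pence per kg milk (Jersey premium)
    extra_revenue = base_yield * yield_gain * cheese_price
    return extra_revenue - extra_milk_cost

# e.g. 10% cheese yield, 5% yield gain, 300 p/kg cheese, 1 p/kg milk premium:
print(net_benefit_per_kg_milk(0.10, 0.05, 300.0, 1.0))   # 0.5 p per kg milk
```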

Relevance: 30.00%

Abstract:

Traditional retinal projections target three functionally complementary systems in the brain of mammals: the primary visual system, the visuomotor integration systems and the circadian timing system. In recent years, studies in several animals have been conducted to investigate the retinal projections to these three systems, despite some evidence of additional targets. The aim of this study was to disclose a previously unknown connection between the retina and the parabrachial complex of the common marmoset by means of intraocular injection of cholera toxin subunit B. A few labeled retinal fibers/terminals detected in the medial parabrachial portion of the marmoset brain show clear varicosities, suggesting terminal fields. Although the possible role of these projections remains unknown, they may provide a modulation of the cholinergic parabrachial neurons which project to the thalamic dorsal lateral geniculate nucleus.

Relevance: 30.00%

Abstract:

Background: Genetic variation for environmental sensitivity indicates that animals are genetically different in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature), and called macro-environmental, or unknown, and called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate the bias and precision of the resulting estimates of genetic parameters, and to develop and evaluate the use of Akaike’s information criterion based on h-likelihood to select the best fitting model. Methods: We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and in the environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for the residual variance to estimate genetic variance for micro-environmental sensitivity, using a double hierarchical generalized linear model in ASReml. Akaike’s information criterion was constructed as a model selection criterion using the approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate the bias and precision of the estimated genetic parameters. Results: Designs with 100 sires, each with at least 100 offspring, are required for the standard deviations of estimated variances to be lower than 50% of the true value. When the number of offspring increased, standard deviations of estimates across replicates decreased substantially, especially for the genetic variances of macro- and micro-environmental sensitivities. Standard deviations of estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically no bias was observed for estimates of any of the parameters. Using Akaike’s information criterion, the true genetic model was selected as the best statistical model in at least 90% of 100 replicates when the number of offspring per sire was 100. Application of the model to lactation milk yield in dairy cattle showed that genetic variance for micro- and macro-environmental sensitivities exists. Conclusion: The algorithm and model selection criterion presented here can contribute to a better understanding of the genetic control of macro- and micro-environmental sensitivities. Designs or datasets should have at least 100 sires, each with 100 offspring.
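A sketch of the simulated half-sib design: each sire carries an effect both on the trait's response to an observed covariate (macro-environmental sensitivity, the reaction norm slope) and on the log residual variance (micro-environmental sensitivity). Variance components are illustrative, sire effects are passed directly to offspring for brevity, and the double hierarchical GLM fit in ASReml is not reproduced here.

```python
# Simulated half-sib design with genetic macro- and micro-environmental
# sensitivities; all variance components are illustrative.
import numpy as np
rng = np.random.default_rng(5)

n_sires, n_off = 100, 100
env = rng.standard_normal((n_sires, n_off))       # macro covariate per record
slope = rng.normal(0.0, 0.3, n_sires)             # genetic macro-sensitivity
log_var = rng.normal(0.0, 0.2, n_sires)           # genetic micro-sensitivity

y = (slope[:, None] * env                          # reaction norm contribution
     + np.exp(0.5 * log_var)[:, None]
     * rng.standard_normal((n_sires, n_off)))      # sire-specific residual sd

# crude check: families differ in spread; note this within-family variance
# mixes macro and micro contributions, which the DHGLM is designed to separate
print(np.var(np.log(y.var(axis=1))))
```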