951 results for vector auto-regressive model
Abstract:
The main objective of this dissertation is to contribute to the discussion around the value of Quality tools applied to the museum field. Its particular focus is on educational services, seeking to evaluate their processes and results. Starting from the premise that museums which apply Quality principles in their museal practices are better equipped to inspire and support the learning needs of their users, this dissertation will defend museum institutions as knowledge organisations, with learning at the core of their activity. Its guiding question centres on the pertinence of applying the Inspiring Learning for All self-assessment tool in Portuguese museums.
Abstract:
This paper reports the development of a highly parameterised 3-D model able to adopt the shapes of a wide variety of different classes of vehicles (cars, vans, buses, etc.), and its subsequent specialisation to a generic car class which accounts for most commonly encountered types of car (including saloon, hatchback and estate cars). An interactive tool has been developed to obtain sample data for vehicles from video images. A PCA description of the manually sampled data provides a deformable model in which a single instance is described by a 6-parameter vector. Both the pose and the structure of a car can be recovered by fitting the PCA model to an image. The recovered description is sufficiently accurate to discriminate between vehicle sub-classes.
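As a hedged illustration of the kind of PCA shape model this abstract describes, the sketch below learns deformation modes from sampled shape vectors and describes one instance by a 6-parameter vector. The data, array sizes and variable names are stand-ins, not the paper's manually sampled vehicle landmarks.

```python
# Hedged sketch of a PCA deformable shape model on random stand-in data.
import numpy as np

rng = np.random.default_rng(0)
# Each row: flattened 3-D landmark coordinates for one sampled vehicle
# (30 hypothetical landmarks x 3 coordinates).
shapes = rng.normal(size=(40, 90))

mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape
# SVD of the centred data gives the principal deformation modes.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
modes = vt[:6]                               # keep 6 deformation parameters

# Describe a shape by 6 parameters and reconstruct it from them.
params = (shapes[0] - mean_shape) @ modes.T  # 6-parameter description
recon = mean_shape + params @ modes          # approximate reconstruction
```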
Abstract:
The improved empirical understanding of silt facies in Holocene coastal sequences provided by techniques such as diatom, foraminifera, ostracode and testate amoebae analysis, combined with insights from quantitative stratigraphic and hydraulic simulations, has led to an inclusive, integrated model for the palaeogeomorphology, stratigraphy, lithofacies and biofacies of northwest European Holocene coastal lowlands in relation to sea-level behaviour. The model covers two general circumstances and is empirically supported by a range of field studies in the Holocene deposits of a number of British estuaries, particularly the Severn. Where deposition was continuous over periods of centuries to millennia, and sea level fluctuated about a rising trend, the succession consists of repeated cycles of silt and peat lithofacies and biofacies in which series of transgressive overlaps (submergence sequences) alternate with series of regressive overlaps (emergence sequences) in association with the waxing and waning of tidal creek networks. Environmental and sea-level change are closely coupled and in equilibrium, and the secular pattern is of the kind represented ideally by a closed limit cycle. In the second circumstance, characteristic of unstable wetland shores and generally affecting smaller areas, coastal erosion ensures that episodes of deposition in the high intertidal zone last no more than a few centuries. The typical response is a series of regressive overlaps (emergence sequence) in erosively based high mudflat and salt-marsh silts that record, commonly as annual banding, exceptionally high deposition rates and a state of strong disequilibrium. Environmental change, including creek development, and sea-level movement are uncoupled. Only if deposition proceeds for a sufficiently long period, so that marshes mature, are equilibrium and close coupling regained. (C) 2002 Elsevier Science B.V. All rights reserved.
Abstract:
The extent to which the four-dimensional variational data assimilation (4DVAR) is able to use information about the time evolution of the atmosphere to infer the vertical spatial structure of baroclinic weather systems is investigated. The singular value decomposition (SVD) of the 4DVAR observability matrix is introduced as a novel technique to examine the spatial structure of analysis increments. Specific results are illustrated using 4DVAR analyses and SVD within an idealized 2D Eady model setting. Three different aspects are investigated. The first aspect considers correcting errors that result in normal-mode growth or decay. The results show that 4DVAR performs well at correcting growing errors but not decaying errors. Although it is possible for 4DVAR to correct decaying errors, the assimilation of observations can be detrimental to a forecast because 4DVAR is likely to add growing errors instead of correcting decaying errors. The second aspect shows that the singular values of the observability matrix are a useful tool to identify the optimal spatial and temporal locations for the observations. The results show that the ability to extract the time-evolution information can be maximized by placing the observations far apart in time. The third aspect considers correcting errors that result in nonmodal rapid growth. 4DVAR is able to use the model dynamics to infer some of the vertical structure. However, the specification of the case-dependent background error variances plays a crucial role.
Abstract:
Four-dimensional variational data assimilation (4D-Var) combines the information from a time sequence of observations with the model dynamics and a background state to produce an analysis. In this paper, a new mathematical insight into the behaviour of 4D-Var is gained from an extension of concepts that are used to assess the qualitative information content of observations in satellite retrievals. It is shown that the 4D-Var analysis increments can be written as a linear combination of the singular vectors of a matrix which is a function of both the observational and the forecast model systems. This formulation is used to consider the filtering and interpolating aspects of 4D-Var using idealized case-studies based on a simple model of baroclinic instability. The results of the 4D-Var case-studies exhibit the reconstruction of the state in unobserved regions as a consequence of the interpolation of observations through time. The results also exhibit the filtering of components with small spatial scales that correspond to noise, and the filtering of structures in unobserved regions. The singular vector perspective gives a very clear view of this filtering and interpolating by the 4D-Var algorithm and shows that the appropriate specification of the a priori statistics is vital to extract the largest possible amount of useful information from the observations. Copyright © 2005 Royal Meteorological Society
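The decomposition described above can be illustrated with a toy calculation (the shapes and operators below are assumptions, not the paper's baroclinic system): the analysis increment that reproduces a set of innovations is a linear combination of the right singular vectors of a matrix built from the observation operator and the linearised model propagator.

```python
# Toy illustration: 4D-Var-style increments lie in the span of the right
# singular vectors of G = H M (observation operator times propagator).
import numpy as np

rng = np.random.default_rng(1)
n, p = 8, 3                      # state dimension, number of observations
M = rng.normal(size=(n, n))      # stand-in linearised propagator
H = rng.normal(size=(p, n))      # stand-in observation operator

G = H @ M                        # maps an initial-state increment to observations
U, s, Vt = np.linalg.svd(G, full_matrices=False)

d = rng.normal(size=p)           # innovation (observation-minus-background) vector
# Minimum-norm increment reproducing d: a linear combination of the rows of Vt.
increment = Vt.T @ ((U.T @ d) / s)
```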
Abstract:
The ECMWF ensemble weather forecasts are generated by perturbing the initial conditions of the forecast using a subset of the singular vectors of the linearised propagator. Previous results show that, when creating probabilistic forecasts from this ensemble, better forecasts are obtained if the mean of the spread and the variability of the spread are calibrated separately. We show results from a simple linear model suggesting that this may be a generic property of all singular-vector-based ensemble forecasting systems that use only a subset of the full set of singular vectors.
Abstract:
Nonlinear adjustment toward long-run price equilibrium relationships in the sugar-ethanol-oil nexus in Brazil is examined. We develop generalized bivariate error correction models that allow for cointegration between sugar, ethanol, and oil prices, where dynamic adjustments are potentially nonlinear functions of the disequilibrium errors. A range of models is estimated using Bayesian Markov chain Monte Carlo algorithms and compared using Bayesian model selection methods. The results suggest that the long-run driver of Brazilian sugar prices is the oil price, that there are nonlinearities in the adjustment of sugar and ethanol prices to the oil price, but that adjustment between ethanol and sugar prices is linear.
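As a rough, classical (non-Bayesian) stand-in for the models in this abstract, the sketch below runs an Engle-Granger two-step error-correction fit on simulated cointegrated series; the data, coefficients and variable names are illustrative assumptions, not Brazilian price series.

```python
# Engle-Granger two-step error-correction sketch on simulated data.
import numpy as np

rng = np.random.default_rng(2)
T = 300
oil = np.cumsum(rng.normal(size=T))                 # simulated I(1) "oil price"
sugar = 0.8 * oil + rng.normal(scale=0.5, size=T)   # cointegrated with oil

# Step 1: long-run relation sugar_t = a + b * oil_t + u_t, fitted by OLS.
X = np.column_stack([np.ones(T), oil])
a, b = np.linalg.lstsq(X, sugar, rcond=None)[0]
ecm_err = sugar - (a + b * oil)                     # disequilibrium error

# Step 2: short-run dynamics with the lagged disequilibrium error.
d_sugar, d_oil = np.diff(sugar), np.diff(oil)
Z = np.column_stack([np.ones(T - 1), d_oil, ecm_err[:-1]])
coefs = np.linalg.lstsq(Z, d_sugar, rcond=None)[0]
adjustment_speed = coefs[2]      # expected negative: errors get corrected
```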
Abstract:
A size-structured plant population model is developed to study the evolution of pathogen-induced leaf shedding under various environmental conditions. The evolutionarily stable strategy (ESS) for the leaf shedding rate is determined for two scenarios: (i) a constant leaf shedding strategy and (ii) an infection-load-driven leaf shedding strategy. The model predicts that ESS leaf shedding rates increase with nutrient availability. No effect of plant density on the ESS leaf shedding rate is found, even though disease severity increases with plant density. When auto-infection (increased infection due to spores produced on the plant itself) plays a key role in further disease increase on the plant, shedding leaves removes disease that would otherwise contribute to that increase; consequently, leaf shedding responses to infection may evolve. When external infection (infection due to immigrant spores) is the key determinant, shedding a leaf does not reduce the force of infection on the shedding plant, and in this case leaf shedding will not evolve. Under low external disease pressure, adopting an infection-driven leaf shedding strategy is more efficient than adopting a constant one, since a plant adopting the former sheds no leaves in the absence of infection, even when leaf shedding rates are high, whereas a plant with a constant leaf shedding rate sheds the same amount of leaves regardless of the presence of infection. Based on the results we develop two hypotheses that can be tested if the appropriate plant material is available.
A hierarchical Bayesian model for predicting the functional consequences of amino-acid polymorphisms
Abstract:
Genetic polymorphisms in deoxyribonucleic acid coding regions may have a phenotypic effect on the carrier, e.g. by influencing susceptibility to disease. Detection of deleterious mutations via association studies is hampered by the large number of candidate sites; therefore methods are needed to narrow down the search to the most promising sites. For this, a possible approach is to use structural and sequence-based information of the encoded protein to predict whether a mutation at a particular site is likely to disrupt the functionality of the protein itself. We propose a hierarchical Bayesian multivariate adaptive regression spline (BMARS) model for supervised learning in this context and assess its predictive performance by using data from mutagenesis experiments on lac repressor and lysozyme proteins. In these experiments, about 12 amino-acid substitutions were performed at each native amino-acid position and the effect on protein functionality was assessed. The training data thus consist of repeated observations at each position, which the hierarchical framework is needed to account for. The model is trained on the lac repressor data and tested on the lysozyme mutations and vice versa. In particular, we show that the hierarchical BMARS model, by allowing for the clustered nature of the data, yields lower out-of-sample misclassification rates compared with both a BMARS and a frequentist MARS model, a support vector machine classifier and an optimally pruned classification tree.
Abstract:
A tunable radial basis function (RBF) network model is proposed for nonlinear system identification using particle swarm optimisation (PSO). At each stage of orthogonal forward regression (OFR) model construction, PSO optimises one RBF unit's centre vector and diagonal covariance matrix by minimising the leave-one-out (LOO) mean square error (MSE). This PSO-aided OFR automatically determines how many tunable RBF nodes are sufficient for modelling. Compared with the state-of-the-art local regularisation assisted orthogonal least squares algorithm based on the LOO MSE criterion for constructing fixed-node RBF network models, the PSO-tuned RBF model construction produces more parsimonious RBF models with better generalisation performance and is computationally more efficient.
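The LOO criterion mentioned above has a convenient closed form for any linear-in-the-parameters model, which is what makes it cheap to evaluate inside a search such as PSO. A minimal sketch follows; the toy data, RBF centres and kernel width are assumptions, not the paper's tuned values.

```python
# Closed-form leave-one-out (LOO) MSE for a linear-in-the-parameters model:
# the LOO residuals follow from the hat matrix, so no refitting loop is needed.
import numpy as np

def loo_mse(phi, y):
    """LOO mean square error of the least-squares fit y ~ phi @ w."""
    H = phi @ np.linalg.pinv(phi)            # hat (projection) matrix
    resid = y - H @ y                        # ordinary residuals
    loo_resid = resid / (1.0 - np.diag(H))   # exact LOO residuals
    return np.mean(loo_resid ** 2)

rng = np.random.default_rng(3)
x = rng.uniform(-3.0, 3.0, size=60)
y = np.sinc(x) + rng.normal(scale=0.05, size=60)

# Candidate Gaussian RBF design matrix with centres on a fixed grid.
centres = np.linspace(-3.0, 3.0, 7)
phi = np.exp(-(x[:, None] - centres[None, :]) ** 2)
score = loo_mse(phi, y)          # the criterion a PSO search would minimise
```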
Abstract:
Objective: This paper presents a detailed study of fractal-based methods for texture characterization of mammographic mass lesions and architectural distortion. The purpose of this study is to explore the use of fractal and lacunarity analysis for the characterization and classification of both tumor lesions and normal breast parenchyma in mammography. Materials and methods: We conducted comparative evaluations of five popular fractal dimension estimation methods for characterizing the texture of mass lesions and architectural distortion. We applied the concept of lacunarity to describe the spatial distribution of pixel intensities in mammographic images. These methods were tested on a set of 57 breast masses and 60 normal breast parenchyma regions (dataset1), and on another set of 19 architectural distortions and 41 normal breast parenchyma regions (dataset2). Support vector machines (SVMs) were used as the pattern classification method for tumor classification. Results: Experimental results showed that the fractal dimension of regions of interest (ROIs) depicting mass lesions and architectural distortion was statistically significantly lower than that of normal breast parenchyma for all five methods. Receiver operating characteristic (ROC) analysis showed that the fractional Brownian motion (FBM) method generated the largest area under the ROC curve of the five methods on both datasets (A_z = 0.839 for dataset1 and 0.828 for dataset2). Lacunarity analysis showed that ROIs depicting mass lesions and architectural distortion had higher lacunarity than ROIs depicting normal breast parenchyma. The combination of FBM fractal dimension and lacunarity yielded higher A_z values (0.903 and 0.875, respectively) than either feature alone on both datasets. The application of the SVM improved the performance of the fractal-based features in differentiating tumor lesions from normal breast parenchyma, yielding higher A_z values. Conclusion: The FBM texture model is the most appropriate model for characterizing mammographic images, because its self-affinity assumption is a better approximation. Lacunarity is an effective counterpart to the fractal dimension in texture feature extraction from mammographic images. The classification results obtained in this work suggest that the SVM is an effective method with great potential for classification in mammographic image analysis.
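For concreteness, a simple box-counting estimate of fractal dimension is sketched below. It is a generic stand-in for the estimators compared in the paper (whose preferred FBM method is more involved), and the test image is synthetic.

```python
# Box-counting fractal dimension of a binary image: count occupied boxes
# at several scales and fit the slope of log N(s) against log(1/s).
import numpy as np

def box_counting_dimension(img, sizes=(1, 2, 4, 8, 16)):
    counts = []
    for s in sizes:
        n = 0
        for i in range(0, img.shape[0], s):
            for j in range(0, img.shape[1], s):
                if img[i:i + s, j:j + s].any():
                    n += 1
        counts.append(n)
    # Slope of log N(s) versus log(1/s) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# A filled square should come out with dimension close to 2.
img = np.zeros((64, 64), dtype=bool)
img[16:48, 16:48] = True
dim = box_counting_dimension(img)
```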
Abstract:
An efficient model identification algorithm for a large class of linear-in-the-parameters models is introduced that simultaneously optimises the model approximation ability, sparsity and robustness. The derived model parameters in each forward regression step are initially estimated via orthogonal least squares (OLS), and then tuned with a new gradient-descent learning algorithm, based on basis pursuit, that minimises the l1-norm of the parameter estimate vector. The model subset selection cost function includes a D-optimality design criterion that maximises the determinant of the design matrix of the subset, to ensure model robustness and to enable the model selection procedure to terminate automatically at a sparse model. The proposed approach is based on the forward OLS algorithm using the modified Gram-Schmidt procedure. Both the parameter tuning procedure, based on basis pursuit, and the model selection criterion, based on the D-optimality that is effective in ensuring model robustness, are integrated with the forward regression. As a consequence, the inherent computational efficiency associated with the conventional forward OLS approach is maintained in the proposed algorithm. Examples demonstrate the effectiveness of the new approach.
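A loose sketch of forward selection with a D-optimality-style log-determinant term follows. The cost function, the weight `beta` and the stopping rule are illustrative assumptions, not the paper's exact algorithm (which combines the orthogonalised regressors with a basis-pursuit parameter-tuning step).

```python
# Forward subset selection for a linear-in-the-parameters model with an
# illustrative D-optimality-style reward on the orthogonal column energy.
import numpy as np

def forward_select(X, y, beta=0.05):
    # Normalise candidates so the log-kappa term penalises near-dependence.
    X = X / np.linalg.norm(X, axis=0)
    n, m = X.shape
    selected, W = [], []                 # W: orthogonalised selected columns
    resid = y.copy()
    while len(selected) < m:
        best, best_cost, best_w = None, 0.0, None
        for j in range(m):
            if j in selected:
                continue
            w = X[:, j].copy()
            for wk in W:                 # modified Gram-Schmidt step
                w -= (wk @ w) / (wk @ wk) * wk
            kappa = w @ w                # orthogonal column energy
            if kappa < 1e-12:
                continue
            err_red = (w @ resid) ** 2 / kappa       # error reduction
            cost = -err_red - beta * np.log(kappa)   # smaller is better
            if cost < best_cost:
                best, best_cost, best_w = j, cost, w
        if best is None:                 # no term improves the criterion
            break
        selected.append(best)
        resid = resid - (best_w @ resid) / (best_w @ best_w) * best_w
        W.append(best_w)
    return selected

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 10))
y = X[:, 1] - 2.0 * X[:, 4] + rng.normal(scale=0.1, size=100)
selected = forward_select(X, y)          # the true regressors are picked first
```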
Abstract:
This paper introduces a new neurofuzzy model construction and parameter estimation algorithm for observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for modeling a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-base matrix subspace to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix that consists of the corresponding fuzzy membership functions over the training data set. Model transparency is explored through the derivation of an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality criterion. The A-optimality experimental design criterion of the weighting matrices of the fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy levels. This new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high-dimensional regression problems, for which it is computationally desirable to decompose complex models into a few submodels rather than a single model with a large number of input variables and the associated curse of dimensionality. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
Abstract:
The identification of non-linear systems using only observed finite datasets has become a mature research area over the last two decades. A class of linear-in-the-parameters models with universal approximation capabilities has been intensively studied and widely used due to the availability of many linear-learning algorithms and their inherent convergence conditions. This article presents a systematic overview of basic research on model selection approaches for linear-in-the-parameters models. One of the fundamental problems in non-linear system identification is to find the minimal model with the best model generalisation performance from observational data only. The important concepts in achieving good model generalisation used in various non-linear system-identification algorithms are first reviewed, including Bayesian parameter regularisation and model selection criteria based on cross-validation and experimental design. A significant advance in machine learning has been the development of the support vector machine as a means for identifying kernel models based on the structural risk minimisation principle. Developments in convex optimisation-based model construction algorithms, including support vector regression algorithms, are outlined. Input selection algorithms and on-line system identification algorithms are also included in this review. Finally, some industrial applications of non-linear models are discussed.