79 results for Explicit criteria
Abstract:
More than thirty years ago, Amari and colleagues proposed a statistical framework for identifying structurally stable macrostates of neural networks from observations of their microstates. We compare their stochastic stability criterion with a deterministic stability criterion based on the ergodic theory of dynamical systems, recently proposed for the scheme of contextual emergence and applied to particular inter-level relations in neuroscience. Both stochastic and deterministic stability criteria for macrostates rely on macro-level contexts, which makes them sensitive to differences between macro-levels.
Abstract:
The main activity carried out by the geophysicist when interpreting seismic data, in terms of both importance and time spent, is tracking (or picking) seismic events. In practice, this activity turns out to be rather challenging, particularly when the targeted event is interrupted by discontinuities such as geological faults or exhibits lateral changes in seismic character. In recent years, several automated schemes, known as auto-trackers, have been developed to assist the interpreter in this tedious and time-consuming task. The automatic tracking tools available in modern interpretation software packages often employ artificial neural networks (ANNs) to identify seismic picks belonging to target events through a pattern-recognition process. The ability of ANNs to track horizons across discontinuities largely depends on how reliably data patterns characterise these horizons. While seismic attributes are commonly used to characterise amplitude peaks forming a seismic horizon, some researchers in the field claim that inherent seismic information is lost in the attribute-extraction process and advocate instead the use of raw data (amplitude samples). This paper investigates the performance of ANNs using either characterisation method, and demonstrates how the complementarity of seismic attributes and raw data can be exploited in conjunction with other geological information in a fuzzy inference system (FIS) to achieve enhanced auto-tracking performance.
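The pattern-matching step at the heart of raw-amplitude auto-tracking can be illustrated with a minimal sketch (not the ANN or FIS approach of the paper, just a hypothetical normalised cross-correlation matcher): given a reference waveform window on one trace, find the sample on the next trace whose surrounding window best matches it.

```python
import numpy as np

def track_pick(ref_window, trace, centre, search=5):
    """Pick the sample in `trace` near `centre` whose surrounding window
    best matches `ref_window` under normalised cross-correlation; a crude
    stand-in for the pattern-recognition step of an auto-tracker."""
    half = len(ref_window) // 2
    best_lag, best_score = 0, -np.inf
    for lag in range(-search, search + 1):
        # Candidate window of the same length, shifted by `lag` samples.
        w = trace[centre + lag - half : centre + lag + half + 1]
        score = np.corrcoef(ref_window, w)[0, 1]
        if score > best_score:
            best_lag, best_score = lag, score
    return centre + best_lag
```

A real auto-tracker would replace the correlation score with a trained classifier over attributes and/or raw samples, as the abstract discusses.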
Abstract:
A fundamental principle in practical nonlinear data modeling is the parsimonious principle of constructing the minimal model that explains the training data well. Leave-one-out (LOO) cross validation is often used to estimate generalization errors when choosing amongst different network architectures (M. Stone, "Cross validatory choice and assessment of statistical predictions", J. R. Statist. Soc., Ser. B, 36, pp. 117-147, 1974). Based upon minimization of an LOO criterion, either the mean square of LOO errors or the LOO misclassification rate respectively, we present two backward elimination algorithms as model post-processing procedures for regression and classification problems. The proposed backward elimination procedures exploit an orthogonalization procedure to ensure orthogonality between the subspace spanned by the pruned model and the deleted regressor. Subsequently, it is shown that the LOO criteria used in both algorithms can be calculated via an analytic recursive formula, derived in this contribution, without actually splitting the estimation data set, thereby reducing computational expense. Compared to most other model construction methods, the proposed algorithms are advantageous in several aspects: (i) there are no tuning parameters to be optimized through an extra validation data set; (ii) the procedure is fully automatic without an additional stopping criterion; and (iii) the model structure selection is directly based on model generalization performance. Illustrative examples on regression and classification demonstrate that the proposed algorithms are viable post-processing methods to prune a model to gain extra sparsity and improved generalization.
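The idea of computing LOO errors analytically, without refitting on each deleted subset, can be sketched for the simplest case of ordinary least squares (this is the classical hat-matrix identity, not the authors' orthogonalized recursion): the LOO residual equals the ordinary residual scaled by 1/(1 - h_ii), where h_ii is the leverage.

```python
import numpy as np

def loo_errors_analytic(X, y):
    """LOO residuals for least squares via the hat matrix,
    computed without refitting: e_i / (1 - h_ii)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    # Leverages are the diagonal of the hat matrix X (X'X)^+ X'.
    h = np.diag(X @ np.linalg.pinv(X.T @ X) @ X.T)
    return resid / (1.0 - h)

def loo_errors_bruteforce(X, y):
    """Reference implementation: refit with each sample deleted."""
    n = len(y)
    out = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        b, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        out[i] = y[i] - X[i] @ b
    return out
```

Both routines agree to machine precision, which is why the analytic route avoids the n refits that naive LOO would require.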
Abstract:
An analysis of Stochastic Diffusion Search (SDS), a novel and efficient optimisation and search algorithm, is presented, yielding a derivation of the minimum acceptable match for stable convergence within a noisy search space. The applicability of SDS to a given problem can therefore be assessed.
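The test-and-diffuse loop that characterises SDS can be sketched on a toy string-search task (a minimal illustration under simplifying assumptions, not the paper's analysis): each agent hypothesises a start position, tests one randomly chosen character per iteration, and inactive agents copy hypotheses from randomly polled agents.

```python
import random

def sds_string_search(text, pattern, n_agents=100, n_iters=200, seed=0):
    """Minimal Stochastic Diffusion Search: agents hypothesise a start
    position of `pattern` in `text`; partial tests plus diffusion of
    successful hypotheses concentrate the population on the best match."""
    rng = random.Random(seed)
    n_pos = len(text) - len(pattern) + 1
    hyps = [rng.randrange(n_pos) for _ in range(n_agents)]
    active = [False] * n_agents
    for _ in range(n_iters):
        # Test phase: each agent checks one randomly chosen component.
        for a in range(n_agents):
            j = rng.randrange(len(pattern))
            active[a] = text[hyps[a] + j] == pattern[j]
        # Diffusion phase: inactive agents poll a random agent and either
        # copy its hypothesis (if active) or restart at a random position.
        for a in range(n_agents):
            if not active[a]:
                other = rng.randrange(n_agents)
                hyps[a] = hyps[other] if active[other] else rng.randrange(n_pos)
    # The largest cluster of agents marks the solution.
    return max(set(hyps), key=hyps.count)
```

Because each test examines only one component, partial matches keep some agents transiently active; the stability analysis in the abstract concerns exactly when the cluster at the best match survives such noise.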
Abstract:
In molecular mechanics simulations of biological systems, the solvation water is typically represented by a default water model which is an integral part of the force field. Indeed, protein nonbonding parameters are chosen in order to obtain a balance between water-water and protein-water interactions and hence a reliable description of protein solvation. However, less attention has been paid to the question of whether the water model provides a reliable description of the water properties under the chosen simulation conditions, for which more accurate water models often exist. Here we consider the case of the CHARMM protein force field, which was parametrized for use with a modified TIP3P model. Using quantum mechanical and molecular mechanical calculations, we investigate whether the CHARMM force field can be used with other water models: TIP4P and TIP5P. Solvation properties of N-methylacetamide (NMA), other small solute molecules, and a small protein are examined. The results indicate differences in binding energies and minimum energy geometries, especially for TIP5P, but the overall description of solvation is found to be similar for all models tested. The results provide an indication that molecular mechanics simulations with the CHARMM force field can be performed with water models other than TIP3P, thus enabling an improved description of the solvent water properties.
Abstract:
Several previous studies have attempted to assess the sublimation depth-scales of ice particles from clouds into clear air. Upon examining the sublimation depth-scales in the Met Office Unified Model (MetUM), it was found that the MetUM has evaporation depth-scales 2–3 times larger than radar observations. Similar results can be seen in the European Centre for Medium-Range Weather Forecasts (ECMWF), Regional Atmospheric Climate Model (RACMO) and Météo-France models. In this study, we use radar simulation (converting model variables into radar observables) and one-dimensional explicit microphysics numerical modelling to test and diagnose the cause of the deep sublimation depth-scales in the forecast model. The MetUM data and parametrization scheme are used to predict terminal velocity, which can be compared with the observed Doppler velocity. This can then be used to test hypotheses for why the sublimation depth-scale is too large within the MetUM: turbulence could lead to dry-air entrainment and higher evaporation rates; the particle density may be wrong; the particle capacitance may be too high, leading to incorrect evaporation rates; or the humidity within the sublimating layer may be incorrectly represented. We show that the most likely cause of deep sublimation zones is an incorrect representation of model humidity in the layer. This is tested further by using a one-dimensional explicit microphysics model, which tests the sensitivity of ice sublimation to key atmospheric variables and is capable of including sonde and radar measurements to simulate real cases. Results suggest that the MetUM grid resolution at ice cloud altitudes is not sufficient to maintain the sharp drop in humidity that is observed in the sublimation zone.
Abstract:
Wagner and Graf (2010) derive a population evolution equation for an ensemble of convective plumes, analogous to the Lotka–Volterra equation, from the energy equations for convective plumes provided by Arakawa and Schubert (1974). Although their proposal is interesting, as the present note shows, there are some problems with their derivation.
Abstract:
We consider the finite sample properties of model selection by information criteria in conditionally heteroscedastic models. Recent theoretical results show that certain popular criteria are consistent in that they will select the true model asymptotically with probability 1. To examine the empirical relevance of this property, Monte Carlo simulations are conducted for a set of non-nested data generating processes (DGPs) with the set of candidate models consisting of all types of model used as DGPs. In addition, not only is the best model considered but also those with similar values of the information criterion, called close competitors, thus forming a portfolio of eligible models. To supplement the simulations, the criteria are applied to a set of economic and financial series. In the simulations, the criteria are largely ineffective at identifying the correct model, either as best or a close competitor, the parsimonious GARCH(1, 1) model being preferred for most DGPs. In contrast, asymmetric models are generally selected to represent actual data. This leads to the conjecture that the properties of parameterizations of processes commonly used to model heteroscedastic data are more similar than may be imagined and that more attention needs to be paid to the behaviour of the standardized disturbances of such models, both in simulation exercises and in empirical modelling.
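The selection-with-close-competitors procedure can be sketched with a standard information criterion (BIC is used here as an illustrative choice; the threshold `delta` and the model names are hypothetical): score each candidate, then keep every model within `delta` of the minimum.

```python
import math

def bic(loglik, k, n):
    """Bayesian information criterion: -2 log L + k log n."""
    return -2.0 * loglik + k * math.log(n)

def select_with_close_competitors(models, n, delta=2.0):
    """Rank candidate models by BIC and return the best model together
    with any 'close competitors' within `delta` of the minimum.
    `models` maps name -> (log-likelihood, number of parameters)."""
    scores = {name: bic(ll, k, n) for name, (ll, k) in models.items()}
    best = min(scores.values())
    return sorted(name for name, s in scores.items() if s - best <= delta)
```

The portfolio of eligible models is then the returned set rather than a single winner, which is the notion of close competitors the abstract examines.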
Abstract:
The intention of this paper is to explore traditions and current trends in art with particular reference to the depiction of female experiences such as pregnancy, abortion, birth and motherhood. The inclusion and exclusion of such images in art history over time and across societies reflects prevailing attitudes, whilst affirming various stereotypical and gendered constructions developed and sustained within those societies. These constructions in turn relate to criteria defined by class, access to education and notions of femininity. Work by artists which features aspects of these experiences (particularly childbirth) is considered taboo by many in a Western society which continues to render the essentially female experience as private, invisible and stigmatised and confuses the natural with the sexual. The work of undergraduate art students, inspired by the artwork of women artists who make explicit or are influenced by essentially female experiences, is discussed and attempts are made to connect their work to the issues outlined.