13 results for Variances

in Aston University Research Archive


Relevance: 10.00%

Abstract:

This technical report contains all technical information and results from experiments in which Mixture Density Networks (MDNs) built on an RBF network with fixed kernel means and variances were used to infer wind direction from ERS-II weather satellite data. The regularisation is based on the evidence framework, and three different approximations were used to estimate the regularisation parameter. The results were compared with those obtained by `early stopping'.
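The report's implementation is not given in the abstract; as a minimal numpy sketch of an MDN output layer on top of an RBF network with fixed kernel means and variances (names and shapes are our assumptions, not the report's code):

```python
import numpy as np

def rbf_features(x, centres, width):
    """Gaussian RBF activations with fixed kernel means (centres) and variance width**2."""
    d2 = ((x[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def mdn_params(phi, W):
    """Linear output layer mapping RBF activations to mixture parameters.

    W has 3*M columns for an M-component one-dimensional Gaussian mixture."""
    zc, zm, zv = np.split(phi @ W, 3, axis=1)
    coeffs = np.exp(zc - zc.max(1, keepdims=True))
    coeffs /= coeffs.sum(1, keepdims=True)   # softmax -> mixing coefficients
    return coeffs, zm, np.exp(zv)            # means, strictly positive variances

def neg_log_lik(t, coeffs, means, variances):
    """Negative log-likelihood of targets t under the predicted mixture; training
    minimises this plus a weight penalty whose strength the evidence framework sets."""
    p = coeffs * np.exp(-(t[:, None] - means) ** 2 / (2 * variances)) \
        / np.sqrt(2 * np.pi * variances)
    return -np.log(p.sum(1)).sum()
```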

Relevance: 10.00%

Abstract:

On-line learning is examined for the radial basis function network, an important and practical type of neural network. The evolution of generalization error is calculated within a framework which allows the phenomena of the learning process, such as the specialization of the hidden units, to be analyzed. The distinct stages of training are elucidated, and the role of the learning rate described. The three most important stages of training, the symmetric phase, the symmetry-breaking phase, and the convergence phase, are analyzed in detail; the convergence phase analysis allows derivation of maximal and optimal learning rates. As well as finding the evolution of the mean system parameters, the variances of these parameters are derived and shown to be typically small. Finally, the analytic results are strongly confirmed by simulations.
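The analysis above is statistical-mechanical rather than algorithmic, but the underlying dynamics is plain on-line gradient descent. A minimal sketch (our notation, not the paper's) of one on-line step for a Gaussian RBF network with adaptable centres and output weights:

```python
import numpy as np

def online_rbf_step(x, y, centres, weights, width, eta):
    """One stochastic-gradient step on squared error for a Gaussian RBF network.

    x: input vector, y: scalar target, eta: learning rate."""
    phi = np.exp(-((centres - x) ** 2).sum(1) / (2 * width ** 2))  # hidden activations
    err = weights @ phi - y                                        # signed error on (x, y)
    grad_w = err * phi
    grad_c = err * (weights * phi)[:, None] * (x - centres) / width ** 2
    weights -= eta * grad_w
    centres -= eta * grad_c
    return centres, weights
```

Specialisation of the hidden units can be tracked through the overlaps between centres, mirroring the symmetric and symmetry-breaking phases described above.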

Relevance: 10.00%

Abstract:

During 1999 and 2000 a large number of articles appeared in the financial press arguing that the concentration of the FTSE 100 had increased. Many of these reports suggested that stock market volatility in the UK had risen because the concentration of its stock markets had increased. This study undertakes a comprehensive measurement of stock market concentration using the FTSE 100 index. We find that during 1999, 2000 and 2001 stock market concentration was noticeably higher than at any other time since the index was introduced. When we measure the volatility of the FTSE 100 index we do not find an association between concentration and volatility. When we examine the variances and covariances of the FTSE 100 constituents we find that security volatility appears to be positively related to concentration changes, but concentration and the size of security covariances appear to be negatively related. We simulate the variance of four versions of the FTSE 100 index; in each version the weighting structure reflects either an equally weighted index or one with low, intermediate or high concentration. We find that moving from low to high concentration has very little impact on the volatility of the index. To complete the study we estimate the minimum-variance portfolio for the FTSE 100 and compare the concentration of this index with that of the market-weighted index. We find that concentration in the realised FTSE index weightings is higher than in the minimum-variance index.
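The abstract does not state its concentration measure; as an illustration of the quantities involved, the sketch below computes a Herfindahl-style concentration index, the variance of a weighted index, and fully invested minimum-variance weights (numpy, our notation):

```python
import numpy as np

def herfindahl(w):
    """Herfindahl index, a standard concentration measure for index weights."""
    return float(np.sum(w ** 2))

def index_variance(w, cov):
    """Variance of an index with weights w and constituent covariance matrix cov."""
    return float(w @ cov @ w)

def min_variance_weights(cov):
    """Fully invested minimum-variance weights: w = cov^{-1} 1 / (1' cov^{-1} 1)."""
    ones = np.ones(cov.shape[0])
    x = np.linalg.solve(cov, ones)
    return x / x.sum()
```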

Relevance: 10.00%

Abstract:

Traditionally, geostatistical algorithms are contained within specialist GIS and spatial statistics software. Such packages are often expensive, with relatively complex user interfaces and steep learning curves, and cannot be easily integrated into more complex process chains. In contrast, Service Oriented Architectures (SOAs) promote interoperability and loose coupling within distributed systems, typically using XML (eXtensible Markup Language) and Web services. Web services provide a mechanism for a user to discover and consume a particular process, often as part of a larger process chain, with minimal knowledge of how it works. Wrapping current geostatistical algorithms with a Web service layer would thus increase their accessibility, but raises several complex issues. This paper discusses a solution to providing interoperable, automatic geostatistical processing through the use of Web services, developed in the INTAMAP project (INTeroperability and Automated MAPping). The project builds upon Open Geospatial Consortium standards for describing observations, typically used within sensor webs, and employs Geography Markup Language (GML) to describe the spatial aspect of the problem domain. Thus the interpolation service is extremely flexible, being able to support a range of observation types, and can cope with issues such as change of support and differing error characteristics of sensors (by utilising descriptions of the observation process provided by SensorML). XML is accepted as the de facto standard for describing Web services, due to its expressive capabilities which allow automatic discovery and consumption by ‘naive’ users. Any XML schema employed must therefore be capable of describing every aspect of a service and its processes. However, no schema currently exists that can define the complex uncertainties and modelling choices that are often present within geostatistical analysis. We show a solution to this problem, developing a family of XML schemata to enable the description of a full range of uncertainty types. These types will range from simple statistics, such as the kriging mean and variances, through to a range of probability distributions and non-parametric models, such as realisations from a conditional simulation. By employing these schemata within a Web Processing Service (WPS) we show a prototype moving towards a truly interoperable geostatistical software architecture.
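INTAMAP's own services are not reproduced here; as a reminder of what the "kriging mean and variances" exposed by such a service are, a minimal ordinary-kriging sketch under an assumed known covariance function:

```python
import numpy as np

def ordinary_kriging(x_obs, y_obs, x_new, cov_fn):
    """Ordinary kriging mean and variance at a single new location.

    cov_fn(a, b) returns the covariance between locations a and b."""
    n = len(x_obs)
    # kriging system with a Lagrange multiplier enforcing weights that sum to one
    K = np.ones((n + 1, n + 1))
    K[n, n] = 0.0
    for i in range(n):
        for j in range(n):
            K[i, j] = cov_fn(x_obs[i], x_obs[j])
    k = np.ones(n + 1)
    k[:n] = [cov_fn(xi, x_new) for xi in x_obs]
    sol = np.linalg.solve(K, k)
    w, mu = sol[:n], sol[n]
    mean = w @ y_obs
    variance = cov_fn(x_new, x_new) - w @ k[:n] - mu
    return mean, variance
```

In the INTAMAP setting these two numbers are among the simplest uncertainty types the XML schemata must describe, alongside full distributions and conditional-simulation realisations.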

Relevance: 10.00%

Abstract:

This paper investigates whether equity market volatility in one major market is related to volatility elsewhere. It models the daily conditional volatility of market-wide equity returns as a GARCH(1,1) process, which captures the changing nature of the conditional variance through time. It is found that the correlation between the conditional variances of major equity markets has increased substantially over the last two decades. This supports work on conditional mean returns indicating an increase in equity market integration.
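For reference, the GARCH(1,1) conditional-variance recursion takes the standard form (our notation):

```latex
r_t = \mu + \varepsilon_t, \qquad \varepsilon_t = \sigma_t z_t,\; z_t \sim \mathcal{N}(0,1), \qquad
\sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2,
```

with omega > 0, alpha, beta >= 0 and alpha + beta < 1 for covariance stationarity.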

Relevance: 10.00%

Abstract:

I model the forward premium in the U.K. gilt-edged market over the period 1982–96 using a two-factor general equilibrium model of the term structure of interest rates. The model permits the decomposition of the forward premium into separate components representing interest rate expectations, the risk premia associated with each of the underlying factors, and terms capturing the direct impact of the variances of the factors on the shape of the forward curve.
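The paper's exact expressions are not given in the abstract; schematically, such a two-factor decomposition of the forward rate has the form (our notation, purely illustrative):

```latex
f_t(\tau) \;=\; \underbrace{\mathbb{E}_t\!\left[r_{t+\tau}\right]}_{\text{rate expectations}}
\;+\; \underbrace{\lambda_1(\tau) + \lambda_2(\tau)}_{\text{factor risk premia}}
\;-\; \underbrace{c_1(\tau)\,\sigma_1^2 + c_2(\tau)\,\sigma_2^2}_{\text{variance (convexity) terms}},
```

so the forward premium is the gap between the forward rate and the expected future spot rate, split across the premia and variance terms.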

Relevance: 10.00%

Abstract:

We present a mean field theory of code-division multiple access (CDMA) systems with error-control coding. On the basis of the relation between the free energy and mutual information, we obtain an analytical expression for the maximum spectral efficiency of the coded CDMA system, from which a mean field description is provided in terms of a bank of scalar Gaussian channels whose variances in general vary across code-symbol positions. Regular low-density parity-check (LDPC)-coded CDMA systems are also discussed as an example of coded CDMA systems.
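As a back-of-envelope illustration of the "bank of scalar Gaussian channels" picture (not the paper's mean-field expressions), the aggregate rate of independent scalar Gaussian channels with unit-power inputs and per-position noise variances is:

```python
import numpy as np

def gaussian_bank_rate(noise_vars):
    """Total rate (bits per channel use) of independent scalar Gaussian channels
    y_k = x_k + n_k with unit-power inputs and Var(n_k) = noise_vars[k]."""
    v = np.asarray(noise_vars, dtype=float)
    return float(np.sum(0.5 * np.log2(1.0 + 1.0 / v)))

# e.g. three code-symbol positions with unequal effective noise variances
print(gaussian_bank_rate([0.5, 1.0, 2.0]))
```

In the mean-field description it is precisely the variation of these variances across code-symbol positions that makes the effective channel non-uniform.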

Relevance: 10.00%

Abstract:

There may be circumstances where it is necessary for microbiologists to compare variances rather than means, e.g., in analysing data from experiments to determine whether a particular treatment alters the degree of variability, or in testing the assumption of homogeneity of variance prior to other statistical tests. All of the tests described in this Statnote have their limitations: Bartlett's test may be too sensitive, and Levene's and the Brown-Forsythe tests also have problems. We recommend the variance-ratio test to compare two variances and the careful application of Bartlett's test if there are more than two groups. Since these tests are not particularly robust, it should be remembered that the homogeneity of variance assumption is usually the least important of those considered when carrying out an ANOVA. If there is concern about this assumption, and especially if the other assumptions of the analysis are also unlikely to be met (e.g., lack of normality or non-additivity of treatment effects), then it may be better either to transform the data or to carry out a non-parametric test.
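In Python, the tests discussed above are available in scipy (a sketch with simulated data; scipy's levene with center='median' is the Brown-Forsythe variant):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, 30)   # three illustrative groups
b = rng.normal(0.0, 1.5, 30)
c = rng.normal(0.0, 1.0, 30)

# Variance-ratio (F) test for two groups: larger sample variance on top
va, vb = a.var(ddof=1), b.var(ddof=1)
if va >= vb:
    f, dfn, dfd = va / vb, len(a) - 1, len(b) - 1
else:
    f, dfn, dfd = vb / va, len(b) - 1, len(a) - 1
p_f = 2.0 * stats.f.sf(f, dfn, dfd)   # two-sided p-value

# Bartlett's test (sensitive to departures from normality)
stat_b, p_b = stats.bartlett(a, b, c)

# Levene's test; center='median' gives the Brown-Forsythe variant
stat_bf, p_bf = stats.levene(a, b, c, center='median')
```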

Relevance: 10.00%

Abstract:

This thesis presents research within empirical financial economics with focus on liquidity and on portfolio optimisation via full-scale optimisation (FSO) in the stock market. The discussion of liquidity focuses on measurement issues, including TAQ data processing and the measurement of systematic liquidity factors. Furthermore, a framework for treating the two topics in combination is provided. The liquidity part of the thesis gives a conceptual background to liquidity and discusses several different approaches to liquidity measurement. It contributes to liquidity measurement by providing detailed guidelines on the data processing needed to apply TAQ data to liquidity research. The main focus, however, is the derivation of systematic liquidity factors. The principal component approach to systematic liquidity measurement is refined by the introduction of moving and expanding estimation windows, allowing for time-varying liquidity covariances between stocks. Under several liquidity specifications, this improves the ability to explain stock liquidity and returns compared with static-window PCA and market-average approximations of systematic liquidity. The highest ability to explain stock returns is obtained when using inventory cost as the liquidity measure and moving-window PCA as the systematic liquidity derivation technique. Systematic factors derived in this setting also explain cross-sectional liquidity variation well. Portfolio optimisation in the FSO framework is tested in two empirical studies. These contribute to the assessment of FSO by extending its applicability to stock indexes and individual stocks, by considering a wide selection of utility function specifications, and by showing explicitly how the full-scale optimum can be identified using either grid search or the heuristic search algorithm of differential evolution. The studies show that, relative to mean-variance portfolios, FSO performs well in these settings and that the computational expense can be mitigated dramatically by applying differential evolution.
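A minimal sketch of the moving-window PCA idea (our notation; the thesis's exact estimation choices are not specified in the abstract):

```python
import numpy as np

def moving_window_pca_factor(X, window):
    """Systematic-liquidity factor scores from moving-window PCA.

    X: (T, N) panel of a liquidity measure for N stocks.
    Returns a (T - window,) series of first-principal-component scores,
    letting the liquidity covariances between stocks vary over time."""
    scores = []
    for t in range(window, X.shape[0]):
        W = X[t - window:t]
        mean = W.mean(0)
        cov = np.cov(W - mean, rowvar=False)
        _, vecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
        pc1 = vecs[:, -1]                 # leading eigenvector of the window
        scores.append((X[t] - mean) @ pc1)
    return np.array(scores)
```

An expanding-window variant simply replaces `X[t - window:t]` with `X[:t]`.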

Relevance: 10.00%

Abstract:

We investigate the statistics of a vector Manakov soliton in the presence of additive Gaussian white noise. The adiabatic perturbation theory for a Manakov soliton yields a stochastic Langevin system which we analyse via the corresponding Fokker-Planck equation for the probability density function (PDF) for the soliton parameters. We obtain marginal PDFs for the soliton frequency and amplitude as well as soliton amplitude and polarization angle. We also derive formulae for the variances of all soliton parameters and analyse their dependence on the initial values of polarization angle and phase. © 2006 IOP Publishing Ltd.
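The paper's specific equations are not reproduced in the abstract; generically, a Langevin system for the soliton parameters driven by Gaussian white noise maps onto a Fokker-Planck equation for their joint PDF:

```latex
\dot{\theta}_i = f_i(\boldsymbol{\theta}) + \eta_i(t), \qquad
\langle \eta_i(t)\,\eta_j(t') \rangle = 2 D_{ij}\,\delta(t - t')
\;\Longrightarrow\;
\partial_t P = -\sum_i \partial_{\theta_i}\!\left[f_i\,P\right]
+ \sum_{i,j} \partial_{\theta_i}\partial_{\theta_j}\!\left[D_{ij}\,P\right].
```

Marginal PDFs and parameter variances then follow by integrating out the remaining soliton parameters.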

Relevance: 10.00%

Abstract:

In this paper we re-examine the relationship between non-trading frequency and portfolio return autocorrelation. We show that in portfolios where security specific effects have not been completely diversified, portfolio autocorrelation will not increase monotonically with increasing non-trading, as indicated in Lo and MacKinlay (1990). We show that at high levels of non-trading, portfolio autocorrelation will become a decreasing function of non-trading probability and may take negative values. We find that heterogeneity among the means, variances and betas of the component securities in a portfolio can act to increase the induced autocorrelation, particularly in portfolios containing fewer stocks. Security specific effects remain even when the number of securities in the portfolio is far in excess of that considered necessary to diversify security risk. © 2014 Elsevier B.V.
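A simulation sketch of the basic mechanism (returns accumulate over non-trading spells and are booked when the stock next trades, in the spirit of Lo and MacKinlay (1990); an illustration, not the paper's model):

```python
import numpy as np

def portfolio_autocorr(p, n_stocks=10, T=20000, seed=0):
    """First-order autocorrelation of an equally weighted portfolio when each
    stock independently fails to trade with probability p each period."""
    rng = np.random.default_rng(seed)
    true_r = rng.normal(0.0, 1.0, (T, n_stocks))
    obs = np.zeros_like(true_r)
    pending = np.zeros(n_stocks)
    for t in range(T):
        pending += true_r[t]
        trades = rng.random(n_stocks) >= p    # stocks that trade this period
        obs[t, trades] = pending[trades]      # release the accumulated return
        pending[trades] = 0.0
    port = obs.mean(1)
    return float(np.corrcoef(port[:-1], port[1:])[0, 1])
```

Sweeping p towards one in such a simulation is a quick way to see the non-monotonicity discussed above, with heterogeneous means, variances and betas strengthening the induced autocorrelation in small portfolios.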

Relevance: 10.00%

Abstract:

Medication reconciliation is an important process for reducing medication errors in many countries. Canada, the USA, and the UK have incorporated medication reconciliation as a priority area for national patient safety initiatives and goals. The UK national guidance excludes the pediatric population. The aim of this review was to explore the occurrence of medication discrepancies in the pediatric population. The primary objective was to identify studies reporting the rate and clinical significance of the discrepancies, and the secondary objective was to ascertain whether any specific interventions have been used for medication reconciliation in pediatric settings. The following electronic bibliographic databases were used to identify studies: PubMed, OVID EMBASE (1980 to 2012 week 1), ISI Web of Science, ISI Biosis, Cumulative Index to Nursing and Allied Health Literature, and OVID International Pharmaceutical Abstracts (1970 to January 2012). Primary studies were identified that observed medication discrepancies in children under 18 years of age upon hospital admission, transfer and discharge, or that reported medication reconciliation interventions. Two independent reviewers screened titles and abstracts for relevant articles and extracted data using pre-defined data fields, including risk of bias assessment. Ten studies were identified, with variation in the reported stage and rate of discrepancies. Studies were heterogeneous in definitions, methods, and patient populations. Most studies related to admissions and reported consistently high rates of discrepancies, ranging from 22 to 72.3% of patients (sample sizes ranging from 23 to 272). Seven of the studies were low-quality observational studies and three were 'grey literature' non-peer-reviewed conference abstracts. Studies involving small numbers of patients have shown that medication discrepancies occur at all transitions of care in children. Further research is required to investigate and demonstrate how implementing medication reconciliation can reduce discrepancies and potential patient harm. © 2013 Springer International Publishing Switzerland.

Relevance: 10.00%

Abstract:

This paper describes the application of a model, initially developed for determining the e-business requirements of a manufacturing organization, to assess the impact of management concerns on the functions generated. The model has been tested in 13 case studies in small, medium and large organizations. This research shows that incorporating concerns when generating the requirements for e-business functions improves the results, because the concerns expose issues relevant to e-business decision making. Running the model both with and without concerns, and then presenting the reasons for major variances, can expose the issues and enable them to be studied in detail at the individual function/reason level. © IFIP International Federation for Information Processing 2013.