469 results for Winner's Curse
Abstract:
This paper introduces a new neurofuzzy model construction and parameter estimation algorithm from observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-base matrix subspace to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix that consists of the corresponding fuzzy membership functions over the training data set. Model transparency is explored by the derivation of an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality experimental design criterion. The A-optimality experimental design criterion of the weighting matrices of fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. This new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high dimensional regression problems, for which it is computationally desirable to decompose complex models into a few submodels rather than a single model with a large number of input variables and the associated curse of dimensionality problem. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
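As a minimal illustration of the rule-ranking idea described above (scoring a fuzzy rule's weighted regression matrix by an A-optimality criterion), here is a Python sketch. The Gaussian membership functions, the small regularisation term and the toy data are assumptions for illustration only, not the paper's algorithm:

```python
import numpy as np

def gaussian_membership(x, centre, width):
    """Gaussian fuzzy membership value for a scalar operating-point input x (assumed form)."""
    return np.exp(-0.5 * ((x - centre) / width) ** 2)

def a_optimality_score(X, memberships):
    """A-optimality score of one fuzzy rule's weighted regression matrix.

    X           : (N, m) input regression matrix over the training data
    memberships : (N,) membership of each sample in the rule's fuzzy set

    The rule's matrix subspace is spanned by Phi = W X with W = diag(memberships);
    a smaller trace((Phi^T Phi)^-1) indicates a better-conditioned, more identifiable rule.
    """
    Phi = memberships[:, None] * X                      # equivalent to W @ X with W diagonal
    info = Phi.T @ Phi
    return np.trace(np.linalg.inv(info + 1e-8 * np.eye(info.shape[1])))

# Illustrative use: rank candidate rules (fuzzy sets on an operating variable)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                           # regression matrix
z = rng.uniform(-1, 1, size=200)                        # operating-point variable
candidate_centres = [-0.8, 0.0, 0.8]
scores = [a_optimality_score(X, gaussian_membership(z, c, 0.4)) for c in candidate_centres]
print(sorted(zip(scores, candidate_centres)))           # smallest score = most identifiable rule
```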
Abstract:
A connection between a fuzzy neural network model and the mixture of experts network (MEN) modelling approach is established. Based on this linkage, two new neuro-fuzzy MEN construction algorithms are proposed to overcome the curse of dimensionality that is inherent in the majority of associative memory networks and/or other rule based systems. The first construction algorithm employs a function selection manager module in an MEN system. The second construction algorithm is based on a new parallel learning algorithm in which each model rule is trained independently, and for which the parameter convergence property of the new learning method is established. As with the first approach, an expert selection criterion is utilised in this algorithm. These two construction methods are equivalent in their effectiveness in overcoming the curse of dimensionality by reducing the dimensionality of the regression vector, but the latter has the additional computational advantage of parallel processing. The proposed algorithms are analysed for effectiveness, followed by numerical examples that illustrate their efficacy for some difficult data based modelling problems.
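The core computational idea, training each expert independently on its own low-dimensional regressor and combining the experts through gating weights, can be sketched as below. The gating functions, sub-regressors and data are assumptions for illustration, not the paper's construction algorithms:

```python
import numpy as np

def train_expert(X_sub, y, gate):
    """Fit one expert by weighted least squares on its own low-dimensional regressor X_sub,
    with per-sample gating weights 'gate'. Each expert is fitted independently of the
    others, so the experts can be trained in parallel."""
    w = np.sqrt(gate)
    theta, *_ = np.linalg.lstsq(w[:, None] * X_sub, w * y, rcond=None)
    return theta

def men_predict(X_subs, thetas, gates):
    """Mixture-of-experts output: gate-weighted combination of the local expert
    predictions (gates are assumed to sum to one across experts)."""
    return sum(g * (X_sub @ theta) for X_sub, theta, g in zip(X_subs, thetas, gates.T))

# Illustrative use with two experts, each seeing only one input variable
rng = np.random.default_rng(1)
x1, x2 = rng.normal(size=(2, 300))
y = np.sin(x1) + 0.5 * x2 + 0.05 * rng.normal(size=300)
gates = np.column_stack([np.exp(-x1**2), 1 - np.exp(-x1**2)])       # toy gating functions
X_subs = [np.column_stack([x1, np.ones(300)]), np.column_stack([x2, np.ones(300)])]
thetas = [train_expert(Xs, y, g) for Xs, g in zip(X_subs, gates.T)] # independent fits
y_hat = men_predict(X_subs, thetas, gates)
```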
Abstract:
A common problem in many data based modelling algorithms such as associative memory networks is the curse of dimensionality. In this paper, a new two-stage neurofuzzy system design and construction algorithm (NeuDeC) for nonlinear dynamical processes is introduced to effectively tackle this problem. A new simple preprocessing method is initially derived and applied to reduce the rule base, followed by a fine model detection process based on the reduced rule set using forward orthogonal least squares model structure detection. In both stages, new A-optimality experimental design-based criteria are used. In the preprocessing stage, a lower bound of the A-optimality design criterion is derived and applied as a subset selection metric, but in the later stage, the A-optimality design criterion is incorporated into a new composite cost function that minimises model prediction error as well as penalises the model parameter variance. The utilisation of NeuDeC leads to unbiased model parameters with low parameter variance and the additional benefit of a parsimonious model structure. Numerical examples are included to demonstrate the effectiveness of this new modelling approach for high dimensional inputs.
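A simple way to picture the second-stage idea, forward selection driven by a composite cost that trades prediction error against a parameter-variance (A-optimality style) penalty, is the Python sketch below. The specific cost weighting, regularisation term and data are assumptions; this is not the exact NeuDeC criterion:

```python
import numpy as np

def forward_select(X, y, lam=0.01, max_terms=5):
    """Greedy forward selection with a composite cost: mean squared prediction error
    plus lam * trace((P^T P)^-1), an A-optimality style penalty on the variance of
    the selected parameters (a sketch, not the exact NeuDeC criterion)."""
    N, M = X.shape
    selected = []
    for _ in range(max_terms):
        best = None
        for j in range(M):
            if j in selected:
                continue
            P = X[:, selected + [j]]
            theta, *_ = np.linalg.lstsq(P, y, rcond=None)
            err = np.mean((y - P @ theta) ** 2)
            var_pen = np.trace(np.linalg.inv(P.T @ P + 1e-8 * np.eye(P.shape[1])))
            cost = err + lam * var_pen
            if best is None or cost < best[0]:
                best = (cost, j)
        selected.append(best[1])
    return selected

# Illustrative use: two informative columns among ten candidates
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 10))
y = X[:, [1, 4]] @ np.array([2.0, -1.0]) + 0.1 * rng.normal(size=100)
print(forward_select(X, y, max_terms=2))    # typically recovers columns 1 and 4
```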
Abstract:
An input variable selection procedure is introduced for the identification and construction of multi-input multi-output (MIMO) neurofuzzy operating point dependent models. The algorithm is an extension of a forward modified Gram-Schmidt orthogonal least squares procedure for a linear model structure, adapted to accommodate nonlinear system modeling by incorporating piecewise locally linear model fitting. The proposed input node selection procedure effectively tackles the curse of dimensionality associated with lattice-based modeling algorithms such as radial basis function neurofuzzy networks, enabling the resulting neurofuzzy operating point dependent model to be widely applied in control and estimation. Some numerical examples are given to demonstrate the effectiveness of the proposed construction algorithm.
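The forward orthogonal least squares procedure that this selection algorithm builds on can be sketched generically: at each step the candidate regressor is orthogonalised against the terms already chosen and scored by its error reduction ratio. This is a sketch of the classical procedure, not the paper's modified MIMO version; the data are illustrative:

```python
import numpy as np

def err_ratio(w, y):
    """Error reduction ratio of an orthogonalised regressor w against output y."""
    g = (w @ y) / (w @ w)
    return g * g * (w @ w) / (y @ y)

def forward_ols(X, y, n_select):
    """Forward orthogonal least squares selection (classical Gram-Schmidt form):
    at each step pick the candidate column whose orthogonalised version gives the
    largest error reduction ratio."""
    N, M = X.shape
    selected, Q = [], []
    for _ in range(n_select):
        best = (-1.0, None, None)
        for j in range(M):
            if j in selected:
                continue
            w = X[:, j].copy()
            for q in Q:                               # orthogonalise against chosen terms
                w -= (q @ w) / (q @ q) * q
            if w @ w < 1e-12:
                continue
            e = err_ratio(w, y)
            if e > best[0]:
                best = (e, j, w)
        selected.append(best[1])
        Q.append(best[2])
    return selected

# Illustrative use: two informative inputs among eight candidates
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 8))
y = 3.0 * X[:, 2] - 2.0 * X[:, 5] + 0.1 * rng.normal(size=200)
print(forward_ols(X, y, n_select=2))                  # typically selects columns 2 and 5
```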
Abstract:
This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bezier-Bernstein polynomial functions. The approach is general in that it copes with n-dimensional inputs by utilising an additive decomposition construction to overcome the curse of dimensionality associated with high n. This new construction algorithm also introduces univariate Bezier-Bernstein polynomial functions for the completeness of the generalized procedure. Like B-spline expansion based neurofuzzy systems, Bezier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as nonnegativity of the basis functions, unity of support, and interpretability of the basis functions as fuzzy membership functions, with the additional advantages of structural parsimony and a Delaunay input space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. This new modeling network is based on an additive decomposition approach together with two separate basis function formation approaches for the univariate and bivariate Bezier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least squares methods. Numerical examples are included to demonstrate the effectiveness of this new data based modeling approach.
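The properties cited above, nonnegativity and partition of unity, are visible directly from the univariate Bernstein basis. The short sketch below evaluates a degree-3 basis and verifies that the functions sum to one (a generic illustration of the basis functions, not the paper's bivariate or additive construction):

```python
import numpy as np
from math import comb

def bernstein_basis(n, x):
    """Univariate Bernstein polynomial basis of degree n evaluated at x in [0, 1].
    Each basis function is nonnegative and the set sums to one, so the basis can
    be read as a set of fuzzy membership functions."""
    x = np.asarray(x, dtype=float)
    return np.stack([comb(n, i) * x**i * (1 - x)**(n - i) for i in range(n + 1)], axis=-1)

x = np.linspace(0, 1, 5)
B = bernstein_basis(3, x)
print(B.min() >= 0.0)        # nonnegativity
print(B.sum(axis=-1))        # all ones: partition of unity
```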
Abstract:
Associative memory networks such as Radial Basis Function, Neurofuzzy and Fuzzy Logic networks used for modelling nonlinear processes suffer from the curse of dimensionality (COD), in that as the input dimension increases the parameterization, computation cost, training data requirements, etc. increase exponentially. Here a new algorithm is introduced for the construction of a Delaunay input space partitioned optimal piecewise locally linear model to overcome the COD as well as to generate locally linear models directly amenable to linear control and estimation algorithms. The training of the model is configured as a new mixture of experts network with a new fast decision rule derived using convex set theory. A very fast simulated reannealing (VFSR) algorithm is utilized to search for a global optimal solution of the Delaunay input space partition. A benchmark nonlinear time series is used to demonstrate the new approach.
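A local model on a Delaunay partition needs a decision rule for which simplex an input falls in; one standard convex-set test solves for barycentric coordinates and checks that they are all nonnegative. The sketch below shows that generic test (an assumption for illustration, not the paper's fast decision rule or its VFSR search):

```python
import numpy as np

def in_simplex(point, vertices):
    """Test whether 'point' lies in the simplex spanned by 'vertices' by solving for
    barycentric coordinates; membership holds iff all coordinates are nonnegative
    (a convex-combination test)."""
    V = np.asarray(vertices, dtype=float)        # (d+1, d) simplex vertices
    A = np.vstack([V.T, np.ones(len(V))])        # (d+1, d+1) system for barycentric coords
    b = np.append(np.asarray(point, dtype=float), 1.0)
    lam = np.linalg.solve(A, b)                  # barycentric coordinates, summing to one
    return bool(np.all(lam >= -1e-12)), lam

# Illustrative use on a 2-D triangle
inside, lam = in_simplex([0.2, 0.3], [[0, 0], [1, 0], [0, 1]])
print(inside, lam)                               # True, [0.5, 0.2, 0.3]
```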
Abstract:
The question as to whether active management adds any value above that of the fund's investment policy is one of continual interest to investors. In order to investigate this issue in the UK real estate market we examine a number of related questions. First, how much return variability is explained by investment policy? Second, how similar are the policies across funds? Third, how much of a fund's return is determined by investment policy? Finally, how was this added value achieved? Using data for 19 real estate funds we find that investment policy explains less than half of the variability in returns over time, nothing of the variation across funds, and that more than 100% of the level of return is attributed to investment policy. The results also show that UK real estate funds focus exclusively on trying to pick winners to add value, and that in pursuit of active return fund managers incur high tracking error risk; consequently, successful active management is very difficult to achieve. In addition, the results are dependent on the benchmark used to represent the investment policy of the fund. Nonetheless, active management can indeed add value to a real estate fund's performance. This is the good news. The bad news is that adding value is much more difficult to achieve than is generally accepted.
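Two of the quantities the study discusses, the share of return variability explained by policy and the tracking error risk incurred by active management, can be computed from fund and benchmark return series as in this generic sketch (a simple regression-based illustration under assumed data, not the paper's methodology):

```python
import numpy as np

def policy_attribution(fund_returns, policy_returns):
    """Two simple diagnostics: the time-series R^2 of fund returns against a policy
    benchmark (how much variability the policy explains) and the tracking error of
    the active returns (std of fund minus benchmark)."""
    f = np.asarray(fund_returns, dtype=float)
    p = np.asarray(policy_returns, dtype=float)
    beta, alpha = np.polyfit(p, f, 1)            # slope first, then intercept
    resid = f - (alpha + beta * p)
    r_squared = 1.0 - resid.var() / f.var()
    tracking_error = (f - p).std(ddof=1)
    return r_squared, tracking_error

# Illustrative use with synthetic quarterly returns
rng = np.random.default_rng(4)
policy = rng.normal(0.02, 0.03, size=48)
fund = policy + rng.normal(0.001, 0.01, size=48) # small active deviations from policy
print(policy_attribution(fund, policy))
```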
Abstract:
Drawing on a unique database of office properties constructed for Gerald Eve by IPD, this paper examines the holding periods of individual office properties sold between 1983 and 2003. It quantifies the holding periods of sold properties and examines the relationship between the holding period and investment performance. Across the range of holding periods, excess returns (performance relative to the market) are evenly distributed: there are as many winners as there are losers. The distribution of excess returns over different holding periods is widely spread, with the risk of under-performance greater over short holding periods. Over the longer term, excess performance is confined to a narrow range and individual returns are more likely to perform in line with the market as a whole.
Abstract:
This paper examines the short- and long-term persistence of tax-exempt real estate funds in the UK through the use of winner-loser contingency table methodology. The persistence tests are applied to a database of varying numbers of funds, from a low of 16 to a high of 27, using quarterly returns over the 12 years from 1990 Q1 to 2001 Q4. The overall conclusion is that the real estate funds in the UK show little evidence of persistence in the short term (quarterly and semi-annual data) or for data over a considerable length of time (biennial to six-yearly intervals). In contrast, the results are better for annual data, with evidence of significant performance persistence. Thus, at this stage, it seems that an annual evaluation period provides the best discrimination of the winner and loser phenomenon in the real estate market. This result is different from equity and bond studies, where it seems that the repeat winner phenomenon is stronger over shorter periods of evaluation. These results require careful interpretation, however: first, when only small samples are used, significant adjustments must be made to correct for small sample bias; and second, the conclusions are sensitive to the length of the evaluation period and the specific test used. Nonetheless, it seems that persistence in performance of real estate funds in the UK does exist, at least for the annual data, and it appears to be a guide to beating the pack in the long run. Furthermore, although the evidence of persistence in performance for the overall sample of funds is limited, we have found evidence that two funds were consistent winners over this period, whereas no one fund could be said to be a consistent loser.
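The winner-loser contingency table methodology named above classifies each fund as a winner or loser (above or below the median) in two consecutive evaluation periods and tests whether winners repeat. The sketch below builds the 2x2 table and a cross-product (odds) ratio, a standard persistence statistic; the above-median definition of a winner and the synthetic data are assumptions, not the paper's exact tests:

```python
import numpy as np

def winner_loser_table(returns_p1, returns_p2):
    """Build the 2x2 winner-loser contingency table over two consecutive evaluation
    periods and compute the cross-product (odds) ratio as a persistence statistic."""
    r1, r2 = np.asarray(returns_p1), np.asarray(returns_p2)
    w1 = r1 > np.median(r1)                       # winner = above-median fund in period 1
    w2 = r2 > np.median(r2)                       # winner = above-median fund in period 2
    ww = np.sum(w1 & w2)                          # repeat winners
    wl = np.sum(w1 & ~w2)
    lw = np.sum(~w1 & w2)
    ll = np.sum(~w1 & ~w2)                        # repeat losers
    odds_ratio = (ww * ll) / max(wl * lw, 1)
    return {"WW": int(ww), "WL": int(wl), "LW": int(lw), "LL": int(ll),
            "odds_ratio": float(odds_ratio)}

# Illustrative use with 27 synthetic funds over two periods
rng = np.random.default_rng(5)
p1, p2 = rng.normal(size=(2, 27))
print(winner_loser_table(p1, p2))
```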
Abstract:
Almost all research fields in geosciences use numerical models and observations and combine these using data-assimilation techniques. With ever-increasing resolution and complexity, the numerical models tend to be highly nonlinear, and the observations also become more complicated and their relation to the models more nonlinear. Standard data-assimilation techniques like (ensemble) Kalman filters and variational methods like 4D-Var rely on linearizations and are likely to fail in one way or another. Nonlinear data-assimilation techniques are available, but are only efficient for small-dimensional problems, hampered by the so-called 'curse of dimensionality'. Here we present a fully nonlinear particle filter that can be applied to higher dimensional problems by exploiting the freedom of the proposal density inherent in particle filtering. The method is illustrated for the three-dimensional Lorenz model using three particles and the much more complex 40-dimensional Lorenz model using 20 particles. By also applying the method to the 1000-dimensional Lorenz model, again using only 20 particles, we demonstrate the strong scale-invariance of the method, leading to the optimistic conjecture that the method is applicable to realistic geophysical problems.
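For context, a basic (bootstrap) particle filter cycle, the scheme whose weight collapse in high dimensions motivates the proposal-density approach above, looks like the sketch below. The toy model, observation operator and noise levels are assumptions; the paper's tuned proposal density is not reproduced here:

```python
import numpy as np

def particle_filter_step(particles, model_step, observation, obs_operator, obs_var, rng):
    """One cycle of a basic (bootstrap) particle filter: propagate every particle with
    the model, weight by the Gaussian observation likelihood, and resample. With
    increasing dimension the weights collapse onto a few particles, which is the
    'curse of dimensionality' referred to in the abstract."""
    particles = np.array([model_step(p) for p in particles])           # forecast step
    innov = np.array([observation - obs_operator(p) for p in particles])
    innov = innov.reshape(len(particles), -1)
    log_w = -0.5 * np.sum(innov ** 2, axis=1) / obs_var                # log-likelihood weights
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)         # resample
    return particles[idx]

# Illustrative use on a toy random-walk model with direct (identity) observations
rng = np.random.default_rng(6)
step = lambda x: x + rng.normal(0.0, 0.1, size=x.shape)
particles = rng.normal(size=(20, 3))                                   # 20 particles, 3-D state
obs = np.array([0.5, -0.2, 0.1])
particles = particle_filter_step(particles, step, obs, lambda x: x, 0.04, rng)
```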
Abstract:
New ways of combining observations with numerical models are discussed in which the size of the state space can be very large, and the model can be highly nonlinear. The observations of the system can also be related to the model variables in highly nonlinear ways, making this data-assimilation (or inverse) problem highly nonlinear. First we discuss the connection between data assimilation and inverse problems, including regularization. We explore the choice of proposal density in a Particle Filter and show how the 'curse of dimensionality' might be beaten. In the standard Particle Filter, ensembles of model runs are propagated forward in time until observations are encountered, rendering it a pure Monte-Carlo method. In large-dimensional systems this is very inefficient and very large numbers of model runs are needed to solve the data-assimilation problem realistically. In our approach we steer all model runs towards the observations, resulting in a much more efficient method. By further 'ensuring almost equal weight' we avoid performing model runs that turn out to be useless in the end. Results are shown for the 40- and 1000-dimensional Lorenz 1995 model.
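The "steering" idea can be pictured as a relaxation (nudging) term added to each model run between observation times, as in the sketch below. The identity observation operator, the relaxation strength tau, and the noise level are assumptions for illustration; the importance-weight correction that keeps the filter statistically exact, and the equal-weight construction, are omitted:

```python
import numpy as np

def relaxed_proposal_step(particle, model_step, observation, tau, noise_std, rng):
    """One forward step of a proposal density that steers a model run towards the
    observations via a relaxation (nudging) term of strength tau. Assumes the
    observations live in state space (identity observation operator)."""
    forecast = model_step(particle)
    nudged = forecast + tau * (observation - forecast)      # steer towards the data
    return nudged + rng.normal(0.0, noise_std, size=np.shape(forecast))

# Illustrative use: steering a toy 40-variable state towards an observation
rng = np.random.default_rng(7)
state = rng.normal(size=40)
obs = rng.normal(size=40)
state = relaxed_proposal_step(state, lambda x: 0.95 * x, obs, tau=0.1, noise_std=0.05, rng=rng)
```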
Abstract:
Following earlier work looking at the overall career difficulties and low economic rewards faced by graduates in creative disciplines, the paper takes a closer look at the different career patterns and economic performance of “Bohemian” graduates across different creative disciplines. While it is widely acknowledged in the literature that careers in the creative field tend to be unstructured, often relying on part-time work and low wages, our knowledge of how these characteristics differ across the creative industries and occupational sectors is very limited. The paper explores the different trajectories and career patterns experienced by graduates in different creative disciplinary fields and their ability to enter creative occupations. Data from the Higher Education Statistics Agency (HESA) are presented, articulating a complex picture of the reality of finding a creative occupation for creative graduates. While students of some disciplines struggle to find full-time work in the creative economy, for others full-time occupation is the norm. Geography also plays a crucial role in offering graduates opportunities in creative occupations and higher salaries. The findings are contextualised in the New Labour cultural policy framework and conclusions are drawn on whether the creative industries policy construct has hidden a very problematic reality of winners and losers in the creative economy.
Abstract:
This study investigates whether commercial offices designed by signature architects in the United States achieve rental premiums compared to commercial offices designed by nonsignature architects. Focusing on buildings designed by winners of the Pritzker Prize and the Gold Medal awarded by the American Institute of Architects, we create a sample of commercial office buildings designed by signature architects drawing on CoStar's national database. We use a combination of a hedonic regression model and a logit model to estimate the various rent determinants. While the first stage measures the typical rental price differential above the typical building in a particular sub-market over a specific timeframe, the second stage identifies a potential price differential over a set of buildings closely matched on important characteristics (such as age, size and location). We find that in both stages offices designed by signature architects exhibit a premium. However, these results are preliminary. The premium could indeed be an effect of the name of the architect, but other factors such as micro-market conditions might be the cause. Further tests are needed to confirm the validity of our results.
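The first-stage idea, a hedonic regression of log rent on building characteristics plus a signature-architect dummy whose exponentiated coefficient is read as a percentage premium, can be sketched as below. The variable names, specification and synthetic data are illustrative assumptions, not the study's actual model:

```python
import numpy as np

def hedonic_premium(log_rent, characteristics, signature_dummy):
    """Hedonic-regression sketch: regress log rent on building characteristics plus a
    signature-architect dummy; the exponentiated dummy coefficient is the estimated
    percentage rental premium."""
    X = np.column_stack([np.ones(len(signature_dummy)),
                         characteristics,
                         signature_dummy])
    beta, *_ = np.linalg.lstsq(X, log_rent, rcond=None)
    return np.exp(beta[-1]) - 1.0                            # premium as a fraction

# Illustrative use with synthetic data containing a built-in 8% premium
rng = np.random.default_rng(8)
chars = rng.normal(size=(500, 3))                            # e.g. age, size, storeys (hypothetical)
dummy = rng.integers(0, 2, size=500)
rent = 3.0 + chars @ np.array([0.1, 0.3, -0.05]) + np.log(1.08) * dummy \
       + 0.05 * rng.normal(size=500)
print(hedonic_premium(rent, chars, dummy))                   # close to 0.08
```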