61 results for nonparametric rationality tests


Relevance:

20.00%

Abstract:

One of the most vexing issues for analysts and managers of property companies across Europe has been the existence and persistence of deviations of Net Asset Values of property companies from their market capitalisation. The issue has clear links to similar discounts and premiums in closed-end funds. The closed-end fund puzzle is regarded as an important unsolved problem in financial economics, undermining theories of market efficiency and the Law of One Price. Consequently, it has generated a huge body of research. Although it can be tempting to focus on the particular inefficiencies of real estate markets in attempting to explain deviations from NAV, the closed-end fund discount puzzle indicates that divergences between underlying asset values and market capitalisation are not a ‘pure’ real estate phenomenon. When examining potential explanations, two recurring factors stand out in the closed-end fund literature as often undermining the economic rationale for a discount – the existence of premiums, and cross-sectional and periodic fluctuations in the level of discount/premium. These need to be borne in mind when considering potential explanations for real estate markets. There are two approaches to investigating the discount to net asset value in closed-end funds: the ‘rational’ approach and the ‘noise trader’ or ‘sentiment’ approach. The ‘rational’ approach hypothesizes that the discount to net asset value is the result of company-specific factors such as management quality, tax liability and the type of stocks held by the fund. Despite the intuitive appeal of the ‘rational’ approach to closed-end fund discounts, studies have not successfully explained the variance in closed-end fund discounts or why the discount to net asset value in closed-end funds varies so much over time. The variation over time in the average sector discount is a feature not only of closed-end funds but also of property companies. This paper analyses changes in the deviations from NAV for UK property companies between 2000 and 2003. The paper presents a new way to study the phenomenon, ‘cleaning’ out the gearing effect by introducing a new way of calculating the discount itself. We call it the “ungeared discount”. It is calculated by assuming that a firm issues new equity to repurchase outstanding debt without any variation on the asset side. In this way the discount does not depend on an accounting effect, and the analysis should better explain the effect of other independent variables.
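The abstract does not spell out the formula; the following is a minimal sketch of one plausible reading of the ungeared discount, assuming the hypothetical debt repurchase is funded by new equity issued at market value (all figures are illustrative):

```python
def nav_discount(nav: float, market_cap: float) -> float:
    """Conventional discount to net asset value (positive = discount)."""
    return (nav - market_cap) / nav

def ungeared_discount(nav: float, market_cap: float, debt: float) -> float:
    """'Ungeared' discount: assume the firm issues new equity at market
    value to repurchase all outstanding debt, leaving the asset side
    unchanged. NAV and market cap both rise by the debt repaid, so the
    accounting effect of gearing drops out of the ratio.
    """
    return (nav + debt - (market_cap + debt)) / (nav + debt)

# Example: a property company with NAV 100, market cap 80, debt 50
print(nav_discount(100, 80))           # 0.20 -> 20% geared discount
print(ungeared_discount(100, 80, 50))  # 0.1333... -> ~13% ungeared
```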

Relevance:

20.00%

Abstract:

The applicability of AI methods to Chagas' disease diagnosis is investigated through the use of Kohonen's self-organizing feature maps. Electrodiagnosis indicators calculated from ECG records are used as features in input vectors to train the network. Cross-validation results are used to modify the maps, substantially improving the interpretation of the resulting output. As a result, the map might be used to reduce the need for invasive explorations in chronic Chagas' disease.
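A minimal sketch of the general approach, assuming the ECG-derived indicators arrive as a numeric feature matrix; it uses the third-party MiniSom package as a stand-in for a Kohonen map, and the map size, training length and synthetic data are illustrative, not the paper's:

```python
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
# Synthetic stand-in: 200 ECG records x 6 indicators, scaled to [0, 1]
features = rng.random((200, 6))

som = MiniSom(x=10, y=10, input_len=6, sigma=1.5, learning_rate=0.5,
              random_seed=0)
som.random_weights_init(features)
som.train_random(features, num_iteration=5000)

# Each record maps to its best-matching unit; regions of the map can
# then be inspected (e.g., via cross-validation) and labelled as likely
# chronic-Chagas vs. normal.
winners = [som.winner(x) for x in features]
print(winners[:5])
```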

Relevance:

20.00%

Abstract:

There have been various techniques published for optimizing the net present value of tenders by use of discounted cash flow theory and linear programming. These approaches to tendering appear to have been largely ignored by the industry. This paper utilises six case studies of tendering practice in order to establish the reasons for this apparent disregard. Tendering is demonstrated to be a market-orientated function with many subjective judgements being made regarding a firm's environment. Detailed consideration of 'internal' factors such as cash flow is therefore judged to be unjustified. Systems theory is then drawn upon and applied to the separate processes of estimating and tendering. Estimating is seen as taking place in a relatively sheltered environment and as such operates as a relatively closed system. Tendering, however, takes place in a changing and dynamic environment and as such must operate as a relatively open system. The use of sophisticated methods to optimize the value of tenders is then identified as being dependent upon the assumption of rationality, which is justified in the case of a relatively closed system (i.e. estimating), but not for a relatively open system (i.e. tendering).
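For readers unfamiliar with the discounted cash flow arithmetic the first sentence refers to, a minimal sketch follows; the cash flows and the monthly discount rate are illustrative, not drawn from the case studies:

```python
def npv(rate_per_period: float, cash_flows: list[float]) -> float:
    """Net present value of cash flows received at the end of each period."""
    return sum(cf / (1 + rate_per_period) ** t
               for t, cf in enumerate(cash_flows, start=1))

# A contractor's monthly net cash flows on a tender: early outlays,
# later stage payments.
flows = [-50_000, -20_000, 10_000, 30_000, 40_000, 25_000]
print(round(npv(0.01, flows), 2))  # NPV at 1% per month
```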

Relevance:

20.00%

Abstract:

In this contribution we aim at anchoring Agent-Based Modeling (ABM) simulations in actual models of human psychology. More specifically, we apply unidirectional ABM to social psychological models, using low-level (i.e., intra-individual) agents to examine whether they generate better predictions than standard statistical approaches concerning the intention to perform a behavior and the behavior itself. Moreover, this contribution tests to what extent the predictive validity of models of attitude, such as the Theory of Planned Behavior (TPB) or the Model of Goal-directed Behavior (MGB), depends on the assumption that people’s decisions and actions are purely rational. Simulations were therefore run with agents deviating from rationality to different degrees, using a trembling-hand method. Two data sets, concerning the consumption of soft drinks and physical activity respectively, were used. Three key findings emerged from the simulations. First, compared to the standard statistical approach, the agent-based simulation generally improves the prediction of behavior from intention. Second, the improvement in prediction is inversely proportional to the complexity of the underlying theoretical model. Finally, the introduction of varying degrees of deviation from rationality in agents’ behavior can lead to an improvement in the goodness of fit of the simulations. By demonstrating the potential of ABM as a complementary perspective for evaluating social psychological models, this contribution underlines the necessity of better defining agents in terms of psychological processes before examining higher levels such as the interactions between individuals.
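A minimal sketch of a trembling-hand deviation from rationality for a TPB-style agent; the weights, threshold and tremble probability are illustrative assumptions, not the paper's fitted parameters:

```python
import random

def tpb_intention(attitude, subjective_norm, perceived_control,
                  w=(0.4, 0.3, 0.3)):
    """Intention as a weighted sum of the three TPB antecedents (all in [0, 1])."""
    return w[0] * attitude + w[1] * subjective_norm + w[2] * perceived_control

def act(intention, tremble=0.1, threshold=0.5, rng=random):
    """Rational rule: act iff intention exceeds the threshold. With
    probability `tremble`, the agent's 'hand trembles' and the opposite
    action is taken instead."""
    rational_choice = intention > threshold
    if rng.random() < tremble:
        return not rational_choice
    return rational_choice

random.seed(0)
intention = tpb_intention(attitude=0.8, subjective_norm=0.6,
                          perceived_control=0.7)
behaviors = [act(intention, tremble=0.1) for _ in range(1000)]
print(intention, sum(behaviors) / 1000)  # ~90% of runs perform the behavior
```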

Relevance:

20.00%

Abstract:

Deception-detection is the crux of Turing’s experiment to examine machine thinking, conveyed through a capacity to respond with sustained and satisfactory answers to unrestricted questions put by a human interrogator. However, in the 60 years to the month since the publication of Computing Machinery and Intelligence, little agreement exists on a canonical format for Turing’s textual game of imitation, deception and machine intelligence. From the tangle of philosophical claims, counter-claims and rebuttals, this research recovers Turing’s own distinct five-minute question-answer imitation game, which he envisioned practicalised in two different ways: a) a two-participant, interrogator-witness viva voce; b) a three-participant comparison of a machine with a human, both questioned simultaneously by a human interrogator. Using the 18th Loebner Prize for Artificial Intelligence contest, and Colby et al.’s 1972 transcript-analysis paradigm, this research practicalised Turing’s imitation game with over 400 human participants and 13 machines across three original experiments. Results show that, at the current state of technology, a deception rate of 8.33% was achieved by machines in 60 human-machine simultaneous comparison tests. Results also show that more than 1 in 3 reviewers succumbed to hidden-interlocutor misidentification after reading transcripts from Experiment 2. Deception-detection is essential to uncover the increasing number of malfeasant programmes, such as CyberLover, developed to steal identities and financially defraud users in chatrooms across the Internet. Practicalising Turing’s two tests can assist in understanding natural dialogue and in mitigating the risk from cybercrime.

Relevance:

20.00%

Abstract:

Modelling spatial covariance is an essential part of all geostatistical methods. Traditionally, parametric semivariogram models are fit from available data. More recently, it has been suggested to use nonparametric correlograms obtained from spatially complete data fields. Here, both estimation techniques are compared. Nonparametric correlograms are shown to have a substantial negative bias. Nonetheless, when combined with the sample variance of the spatial field under consideration, they yield an estimate of the semivariogram that is unbiased for small lag distances. This justifies the use of this estimation technique in geostatistical applications. Various formulations of geostatistical combination (Kriging) methods are used here for the construction of hourly precipitation grids for Switzerland based on data from a sparse real-time network of raingauges and from a spatially complete radar composite. Two variants of Ordinary Kriging (OK) are used to interpolate the sparse gauge observations. In both OK variants, the radar data are only used to determine the semivariogram model. One variant relies on a traditional parametric semivariogram estimate, whereas the other variant uses the nonparametric correlogram. The variants are tested for three cases and the impact of the semivariogram model on the Kriging prediction is illustrated. For the three test cases, the method using nonparametric correlograms performs as well as or better than the traditional method, and at the same time offers great practical advantages. Furthermore, two variants of Kriging with external drift (KED) are tested, both of which use the radar data both to estimate nonparametric correlograms and as the external drift variable. The first KED variant has been used previously for geostatistical radar-raingauge merging in Catalonia (Spain). The second variant is newly proposed here and is an extension of the first. Both variants are evaluated for the three test cases as well as an extended evaluation period. It is found that both methods yield merged fields of better quality than the original radar field or fields obtained by OK of gauge data. The newly suggested KED formulation is shown to be beneficial, in particular in mountainous regions where the quality of the Swiss radar composite is comparatively low. An analysis of the Kriging variances shows that none of the methods tested here provides a satisfactory uncertainty estimate. A suitable variable transformation is expected to improve this.
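A minimal sketch of the semivariogram estimate described above: a nonparametric correlogram rho(h) from a spatially complete field is combined with the field's sample variance s2 to give gamma(h) = s2 * (1 - rho(h)); the one-dimensional synthetic field here is an illustrative stand-in for a radar composite:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic spatially correlated 1-D field (moving average of white noise)
field = np.convolve(rng.normal(size=1000), np.ones(20) / 20, mode="valid")

s2 = field.var()          # sample variance of the field
lags = np.arange(1, 30)

def correlogram(x, h):
    """Nonparametric lag-h correlation of a spatially complete series."""
    return np.corrcoef(x[:-h], x[h:])[0, 1]

rho = np.array([correlogram(field, h) for h in lags])
gamma = s2 * (1.0 - rho)  # semivariogram estimate

for h, g in zip(lags[:5], gamma[:5]):
    print(f"lag {h}: gamma = {g:.4f}")
```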

Relevance:

20.00%

Abstract:

Blood clotting response (BCR) resistance tests are available for a number of anticoagulant rodenticides. However, during the development of these tests many of the test parameters have been changed, making meaningful comparisons between results difficult. It was recognised that a standard methodology was urgently required for future BCR resistance tests and, accordingly, this document presents a reappraisal of published tests and proposes a standard protocol for future use (see Appendix). The protocol can be used to provide information on the incidence and degree of resistance in a particular rodent population; to provide a simple comparison of resistance factors between active ingredients, thus giving clear information about cross-resistance for any given strain; and to provide comparisons of susceptibility or resistance between different populations. The methodology is statistically sound, being based on the ED50 response, and requires many fewer animals than the resistance tests in current use. Most importantly, tests can be used to give a clear indication of the likely practical impact of the resistance on field efficacy. The present study was commissioned and funded by the Rodenticide Resistance Action Committee (RRAC) of CropLife International.
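A minimal sketch of how an ED50 might be estimated from dose-response data of the kind a BCR test produces; the doses, group sizes and responses are synthetic, and the protocol's actual design is set out in the paper's Appendix:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(dose, ed50, slope):
    """Probability of a positive blood-clotting response at a given dose."""
    return 1.0 / (1.0 + np.exp(-slope * (np.log(dose) - np.log(ed50))))

doses = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])   # mg/kg, illustrative
n_tested = np.array([10, 10, 10, 10, 10, 10])
n_respond = np.array([1, 2, 4, 7, 9, 10])

popt, _ = curve_fit(logistic, doses, n_respond / n_tested, p0=[3.0, 1.0])
print(f"estimated ED50 = {popt[0]:.2f} mg/kg")

# A resistance factor can then be expressed as the ratio of a suspect
# strain's ED50 to a susceptible strain's ED50 for the same compound.
```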

Relevance:

20.00%

Abstract:

We investigate for 26 OECD economies whether their current account imbalances relative to GDP are driven by stochastic trends. Regarding bounded stationarity as the more natural counterpart of sustainability, results from Phillips–Perron tests for unit root and bounded unit root processes are contrasted. While the former hint at stationarity of current account imbalances for 12 economies, the latter indicate bounded stationarity for only six economies. Through panel-based test statistics, current account imbalances are diagnosed as bounded non-stationary. Thus, (spurious) rejections of the unit root hypothesis might be due to the existence of bounds reflecting hidden policy controls or financial crises.
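A minimal sketch of the conventional first step described above, a Phillips–Perron unit root test on a current-account-to-GDP series, using the third-party arch package; the series is synthetic, and the bounded variant of the test is not available in standard libraries and is not reproduced here:

```python
import numpy as np
from arch.unitroot import PhillipsPerron

rng = np.random.default_rng(2)
# Synthetic bounded series: an AR(1) kept within +/- 6% of GDP,
# mimicking hidden policy controls that bound the imbalance.
ca = np.empty(200)
ca[0] = 0.0
for t in range(1, 200):
    ca[t] = np.clip(0.98 * ca[t - 1] + rng.normal(scale=0.5), -6.0, 6.0)

pp = PhillipsPerron(ca)
print(pp.summary())  # a rejection here may be spurious if the bounds bind
```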

Relevance:

20.00%

Abstract:

A series of imitation games, involving both 3-participant (simultaneous comparison of two hidden entities) and 2-participant (direct interrogation of a hidden entity) formats, was conducted at Bletchley Park on the 100th anniversary of Alan Turing’s birth: 23 June 2012. From the ongoing analysis of over 150 games involving judges (expert and non-expert, male and female, adult and child), machines and hidden humans (foils for the machines), we present six particular conversations between human judges and a hidden entity that produced unexpected results. From this sample we focus on a feature of Turing’s machine intelligence test that the mathematician/code-breaker did not consider in his examination of machine thinking: the subjective nature of attributing intelligence to another mind.

Relevance:

20.00%

Abstract:

Purpose – The purpose of this paper is to test the hypothesis that investment decision making in the UK direct property market does not conform to the assumption of economic rationality underpinning portfolio theory.

Design/methodology/approach – The developing behavioural real estate paradigm is used to challenge the idea that investor “man” is able to perform with economic rationality, specifically with reference to the analysis of the spatial dispersion of the entire UK “investible stock” and “investible locations” against observed spatial patterns of institutional investment. Location quotients are derived, combining different data sets.

Findings – Considerably greater variation in institutional property holdings is found across the UK than would be expected given the economic and stock characteristics of local areas. This appears to provide evidence of irrationality (in the strict traditional economic sense) in the behaviour of institutional investors, with possible herding underpinning levels of investment that cannot be explained otherwise.

Research limitations/implications – Over time a lack of distinction has developed between the cause and effect of comparatively low levels of development and institutional property investment across the regions. A critical examination of decision making and behaviour in practice could break this cycle, and could in turn promote regional economic growth.

Originality/value – The entire “population” of observations is used to demonstrate the relationships between economic theory and investor performance, exploring, for the first time, stock and local area characteristics.
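A minimal sketch of the location-quotient construction mentioned in the Design/methodology section: an area's share of institutional property investment relative to its share of investible stock; the figures are illustrative:

```python
def location_quotient(local_investment, total_investment,
                      local_stock, total_stock):
    """LQ > 1: the area holds more institutional investment than its
    share of investible stock would suggest; LQ < 1: less."""
    return ((local_investment / total_investment)
            / (local_stock / total_stock))

# e.g. an area with 8% of institutional holdings but 4% of investible stock
print(location_quotient(8, 100, 4, 100))  # 2.0 -> heavily over-invested
```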

Relevance:

20.00%

Abstract:

Many applications, such as intermittent data assimilation, lead to a recursive application of Bayesian inference within a Monte Carlo context. Popular data assimilation algorithms include sequential Monte Carlo methods and ensemble Kalman filters (EnKFs). These methods differ in the way Bayesian inference is implemented. Sequential Monte Carlo methods rely on importance sampling combined with a resampling step, while EnKFs utilize a linear transformation of Monte Carlo samples based on the classic Kalman filter. While EnKFs have proven to be quite robust even for small ensemble sizes, they are not consistent since their derivation relies on a linear regression ansatz. In this paper, we propose another transform method, which does not rely on any a priori assumptions on the underlying prior and posterior distributions. The new method is based on solving an optimal transportation problem for discrete random variables. © 2013, Society for Industrial and Applied Mathematics
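A minimal sketch of the proposed discrete optimal-transport transform in one dimension: importance weights are converted into a deterministic linear transformation of the ensemble by solving a small transport linear programme; the ensemble, observation and cost function are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
M = 8                                  # ensemble size
x = rng.normal(size=M)                 # prior ensemble (scalar state)
obs, r = 1.0, 0.5                      # observation and its variance

w = np.exp(-0.5 * (x - obs) ** 2 / r)  # importance (likelihood) weights
w /= w.sum()

# Coupling T >= 0 between the weighted posterior measure (row sums w_i)
# and the uniform prior measure (column sums 1/M), minimizing squared
# displacement; variables are T.ravel() with T[i, j] at index i*M + j.
cost = (x[:, None] - x[None, :]) ** 2
A_eq, b_eq = [], []
for i in range(M):                     # row sums: sum_j T[i, j] = w[i]
    row = np.zeros(M * M); row[i * M:(i + 1) * M] = 1
    A_eq.append(row); b_eq.append(w[i])
for j in range(M):                     # column sums: sum_i T[i, j] = 1/M
    col = np.zeros(M * M); col[j::M] = 1
    A_eq.append(col); b_eq.append(1.0 / M)
res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=(0, None), method="highs")
T = res.x.reshape(M, M)

# Transformed (analysis) ensemble: x_new_j = M * sum_i T[i, j] * x[i]
x_post = M * T.T @ x
print(np.dot(w, x), x_post.mean())     # the posterior mean is preserved
```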

Relevance:

20.00%

Abstract:

We consider tests of forecast encompassing for probability forecasts, for both quadratic and logarithmic scoring rules. We propose test statistics for the null of forecast encompassing, present the limiting distributions of the test statistics, and investigate the impact of estimating the forecasting models' parameters on these distributions. The small-sample performance is investigated, in terms of small numbers of forecasts and model estimation sample sizes. We show the usefulness of the tests for the evaluation of recession probability forecasts from logit models with different leading indicators as explanatory variables, and for evaluating survey-based probability forecasts.
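A minimal sketch of a forecast-encompassing check for probability forecasts under the quadratic (Brier) scoring rule, in the standard combination-regression form rather than the paper's exact statistics; the data are synthetic:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 400
p_true = rng.uniform(0.05, 0.95, n)           # latent event probabilities
y = (rng.uniform(size=n) < p_true).astype(float)
p1 = np.clip(p_true + rng.normal(0, 0.10, n), 0.01, 0.99)  # forecast 1
p2 = np.clip(p_true + rng.normal(0, 0.20, n), 0.01, 0.99)  # forecast 2

# In the combination (1 - lam) * p1 + lam * p2, forecast 1 encompasses
# forecast 2 if lam = 0. Under quadratic loss this is assessed by
# regressing the error of p1 on (p2 - p1), with a HAC-robust t-test.
res = sm.OLS(y - p1, p2 - p1).fit(cov_type="HAC", cov_kwds={"maxlags": 4})
print(res.params, res.pvalues)  # small p-value -> reject encompassing
```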

Relevance:

20.00%

Abstract:

Tests, as learning events, are often more effective than are additional study opportunities, especially when recall is tested after a long retention interval. To what degree, though, do prior test or study events support subsequent study activities? We set out to test an implication of Bjork and Bjork’s (1992) new theory of disuse—that, under some circumstances, prior study may facilitate subsequent study more than does prior testing. Participants learned English–Swahili translations and then underwent a practice phase during which some items were tested (without feedback) and other items were restudied. Although tested items were better recalled after a 1-week delay than were restudied items, this benefit did not persist after participants had the opportunity to study the items again via feedback. In fact, after this additional study opportunity, items that had been restudied earlier were better recalled than were items that had been tested earlier. These results suggest that measuring the memorial consequences of testing requires more than a single test of retention and, theoretically, a consideration of the differing status of initially recallable and nonrecallable items.

Relevance:

20.00%

Abstract:

In this paper we introduce a new testing procedure for evaluating the rationality of fixed-event forecasts, based on a pseudo-maximum likelihood estimator. The procedure is designed to be robust to departures from the normality assumption. A model is introduced to show that such departures are likely when forecasters experience a credibility loss on making large changes to their forecasts. The test is illustrated using monthly fixed-event forecasts produced by four UK institutions. Use of the robust test leads to the conclusion that certain forecasts are rational, while use of the Gaussian-based test implies that those forecasts are irrational. The difference in the results is due to the nature of the underlying data. Copyright © 2001 John Wiley & Sons, Ltd.
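A minimal sketch of a standard weak-form rationality check for fixed-event forecasts (successive forecast revisions for the same target event should be uncorrelated), in the simple Gaussian/OLS form that the paper's robust pseudo-maximum likelihood procedure improves upon; the revision series is synthetic:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
rev = rng.normal(size=120)            # monthly forecast revisions

# Regress each revision on the previous one; rationality implies a zero
# slope. Heavy-tailed revisions are what motivates the paper's robust
# alternative to this Gaussian-based test.
res = sm.OLS(rev[1:], sm.add_constant(rev[:-1])).fit()
print(res.params[1], res.pvalues[1])  # slope and its p-value
```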