138 results for conditional random fields
Abstract:
The evaluation of investment fund performance has been one of the main developments of modern portfolio theory. Most studies employ the technique developed by Jensen (1968), which compares a particular fund's returns to a benchmark portfolio of equal risk. However, the standard measures of fund manager performance are known to suffer from a number of problems in practice. In particular, previous studies implicitly assume that the risk level of the portfolio is stationary over the evaluation period; that is, unconditional measures of performance do not account for the fact that risk and expected returns may vary with the state of the economy. Many of the problems encountered in previous performance studies therefore reflect the inability of traditional measures to handle the dynamic behaviour of returns. As a consequence, Ferson and Schadt (1996) suggest an approach called conditional performance evaluation, which is designed to address this problem. This paper applies such a conditional measure of performance to a sample of 27 UK property funds over the period 1987-1998. The results suggest that once the time-varying nature of the funds' betas is corrected for, by the addition of the market indicators, average fund performance shows an improvement over that reported by traditional methods of analysis.
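The Ferson-Schadt conditional model can be estimated by ordinary least squares with the market return interacted with a lagged information variable, so that beta varies with the state of the economy. A minimal sketch on synthetic data (all variable names and parameter values are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: monthly excess returns for one fund and the market,
# plus a lagged macro instrument (e.g. a dividend-yield proxy).
T = 240
z_lag = rng.normal(size=T)                   # lagged information variable
r_mkt = 0.005 + 0.04 * rng.normal(size=T)    # market excess return
true_beta = 0.9 + 0.3 * z_lag                # time-varying beta (illustrative)
r_fund = 0.001 + true_beta * r_mkt + 0.01 * rng.normal(size=T)

# Conditional model: r_fund = alpha + b0*r_mkt + b1*(z_lag*r_mkt) + e,
# so that beta_t = b0 + b1*z_lag. Estimated by OLS.
X = np.column_stack([np.ones(T), r_mkt, z_lag * r_mkt])
coef, *_ = np.linalg.lstsq(X, r_fund, rcond=None)
alpha, b0, b1 = coef
print(f"conditional alpha={alpha:.4f}, b0={b0:.2f}, b1={b1:.2f}")
```

The interaction coefficient b1 captures the time variation that an unconditional Jensen regression would fold into alpha.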
Abstract:
This paper provides a new proof of a theorem of Chandler-Wilde, Chonchaiya, and Lindner that the spectra of a certain class of infinite, random, tridiagonal matrices contain the unit disc almost surely. It also obtains an analogous result for a more general class of random matrices whose spectra contain a hole around the origin. The presence of the hole forces substantial changes to the analysis.
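A finite truncation of one such random tridiagonal operator can be sampled and its eigenvalues computed numerically. The ensemble below (fixed superdiagonal, i.i.d. +/-1 subdiagonal) is an illustrative choice of this general type, not necessarily the exact class treated in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Finite section of a random tridiagonal operator: superdiagonal fixed at 1,
# subdiagonal entries i.i.d. +/-1. The spectrum of the finite section only
# approximates that of the infinite operator.
n = 200
sub = rng.choice([-1.0, 1.0], size=n - 1)
A = np.diag(np.ones(n - 1), k=1) + np.diag(sub, k=-1)
eigs = np.linalg.eigvals(A)
print("largest |eigenvalue|:", np.abs(eigs).max())
```

Since each off-diagonal part has operator norm 1, all eigenvalues of the finite section lie in the disc of radius 2.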
Abstract:
Rats and mice have traditionally been considered among the most important pests of sugarcane. However, "control" campaigns are rarely specific to the target species and can affect local wildlife, in particular non-pest rodent species. The objective of this study was to distinguish between rodent species that are pests and those that are not, and to identify patterns of food utilization by rodents in the sugarcane crop complex. Within the crop complex, subsistence crops such as maize, sorghum, rice, and bananas, which are grown alongside the sugarcane, are also subject to rodent damage. Six native rodent species were trapped in the Papaloapan River Basin of the State of Veracruz: the cotton rat (Sigmodon hispidus), the rice rat (Oryzomys couesi), the small rice rat (O. chapmani), the white-footed mouse (Peromyscus leucopus), the golden mouse (Reithrodontomys sumichrasti), and the pygmy mouse (Baiomys musculus). In a stomach content analysis, the major food components for the cotton rat, the rice rat, and the small rice rat were sugarcane (4.9 to 30.1%), seed (2.7 to 22.9%), and vegetation (0.9 to 29.8%), while for the golden mouse and the pygmy mouse the stomach contents were almost exclusively seed (98 to 100%). The authors consider the first three species to be pests of the sugarcane crop complex, while the last two are not.
Abstract:
The problem of calculating the probability of error in a DS/SSMA system has been extensively studied for more than two decades. When random sequences are employed, some conditioning must be done before the central limit theorem is applied, leading to a Gaussian distribution. The authors instead characterise the multiple-access interference as a random walk with a random number of steps, for both random and deterministic sequences. Using results from random-walk theory, they model the interference as a K-distributed random variable and use it to calculate the probability of error, in the form of a series, for a DS/SSMA system with a coherent correlation receiver and BPSK modulation under Gaussian noise. The asymptotic properties of the proposed distribution agree with other analyses. This is, to the best of the authors' knowledge, the first attempt to propose a non-Gaussian distribution for the interference. The modelling can be extended to cover multipath fading and general modulation.
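The random-walk view of multiple-access interference lends itself to Monte Carlo checking: each interferer contributes one signed step to the decision statistic, and bit errors are counted at a BPSK correlation decision. A rough sketch with illustrative step amplitudes and SNR (not the K-distributed series of the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# Monte Carlo sketch of a BPSK decision statistic: the multiple-access
# interference (MAI) is a random walk of K +/- steps. All parameters
# (K, step amplitude, SNR) are illustrative.
n_trials = 100_000
K = 5                                # number of interfering users
snr_db = 8.0
sigma = 10 ** (-snr_db / 20)         # Gaussian noise std for unit signal

bits = rng.choice([-1.0, 1.0], size=n_trials)
steps = rng.choice([-1.0, 1.0], size=(n_trials, K)) * 0.1
mai = steps.sum(axis=1)              # random-walk MAI
noise = sigma * rng.normal(size=n_trials)
decision = bits + mai + noise        # correlation receiver output
ber = np.mean(np.sign(decision) != bits)
print(f"estimated BER: {ber:.4f}")
```

Conditioning on the walk's endpoint, each trial is Gaussian; averaging over the walk gives the heavier-tailed mixture that motivates the non-Gaussian model.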
Abstract:
In a recent paper, Mason et al. propose a reliability test of ensemble forecasts for a continuous, scalar verification. As noted in the paper, the test relies on a very specific interpretation of ensembles, namely, that the ensemble members represent quantiles of some underlying distribution. This quantile interpretation is not the only one; another popular interpretation is the Monte Carlo interpretation. Mason et al. suggest estimating the quantiles in this situation; however, this approach is fundamentally flawed. Errors in the quantile estimates are not independent of the exceedance events, and consequently the conditional exceedance probability (CEP) curves are not constant, violating a fundamental assumption of the test. The test would therefore reject reliable forecasts with probability much higher than the test size.
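The difficulty with estimated quantiles can be seen already from order statistics: for a perfectly reliable Monte Carlo ensemble (members and observation i.i.d.), the observation exceeds the k-th sorted member with probability 1 - k/(m+1), not the naive 1 - k/m one would assign if the sorted members were exact quantiles. A quick simulation (parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Reliable Monte Carlo ensembles: m members and the observation drawn
# i.i.d. from the same distribution, over many independent cases.
m, n_cases = 9, 200_000
members = np.sort(rng.normal(size=(n_cases, m)), axis=1)
obs = rng.normal(size=n_cases)

k = 8  # the 8th smallest of 9 members
freq = np.mean(obs > members[:, k - 1])
print(f"exceedance frequency of member {k}: {freq:.3f} "
      f"(order-statistic theory: {1 - k / (m + 1):.3f}, naive: {1 - k / m:.3f})")
```

The empirical frequency matches the order-statistic value, which differs from the naive quantile level even for a reliable ensemble.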
Abstract:
Norms are a set of rules that govern how human agents behave under given conditions. This paper investigates the overlapping of information fields (sets of shared norms) in the Context State Transition Model, and how these overlapping fields may affect the choices and actions of human agents. It also discusses the implementation of new conflict resolution strategies based on the situation specification. The reasoning about conflicting norms in multiple information fields is discussed in detail.
Abstract:
Ensemble learning techniques generate multiple classifiers, so-called base classifiers, whose combined classification results are used to increase the overall classification accuracy. In most ensemble classifiers the base classifiers follow the Top Down Induction of Decision Trees (TDIDT) approach. However, an alternative approach for inducing rule-based classifiers is the Prism family of algorithms. Prism algorithms produce modular classification rules that do not necessarily fit into a decision tree structure. Prism classifiers achieve classification accuracy comparable to, and on noisy and large data sometimes higher than, that of decision tree classifiers. Yet Prism still suffers from overfitting on noisy and large datasets. In practice ensemble techniques tend to reduce overfitting; however, no ensemble learner exists for modular classification rule inducers such as the Prism family of algorithms. This article describes the first ensemble learner based on the Prism family of algorithms, designed to enhance Prism's classification accuracy by reducing overfitting.
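The bagging-style construction behind such an ensemble can be sketched with a deliberately simple one-rule base learner standing in for a modular rule inducer (this illustrates only the ensemble mechanics, not the Prism algorithm itself; all data and parameters are synthetic):

```python
import numpy as np

rng = np.random.default_rng(4)

def fit_one_rule(X, y):
    """Pick the single (feature, threshold, sign) rule with best training accuracy."""
    best = None
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            pred = (X[:, f] > t).astype(int)
            acc_pos = (pred == y).mean()
            acc_neg = ((1 - pred) == y).mean()
            acc = max(acc_pos, acc_neg)
            if best is None or acc > best[0]:
                best = (acc, f, t, 1 if acc_pos >= acc_neg else 0)
    return best[1:]

def rule_predict(rule, X):
    f, t, sign = rule
    pred = (X[:, f] > t).astype(int)
    return pred if sign else 1 - pred

# Toy data: class label depends on feature 0 plus noise.
X = rng.normal(size=(300, 3))
y = (X[:, 0] + 0.5 * rng.normal(size=300) > 0).astype(int)

# Bagging: fit each base rule on a bootstrap replicate, combine by vote.
rules = [fit_one_rule(X[idx], y[idx])
         for idx in (rng.integers(0, len(X), len(X)) for _ in range(25))]
votes = np.mean([rule_predict(r, X) for r in rules], axis=0)
ensemble_acc = ((votes > 0.5).astype(int) == y).mean()
print(f"ensemble training accuracy: {ensemble_acc:.2f}")
```

Majority voting over bootstrap-trained rules is what smooths out the variance that makes a single overfitted rule set fragile on noisy data.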
Abstract:
Generally, classifiers tend to overfit if there is noise in the training data or there are missing values. Ensemble learning methods are often used to improve a classifier's classification accuracy. Most ensemble learning approaches aim to improve the classification accuracy of decision trees. However, alternative classifiers to decision trees exist. The recently developed Random Prism ensemble learner aims to improve an alternative classification rule induction approach, the Prism family of algorithms, which addresses some of the limitations of decision trees. However, like any ensemble learner, Random Prism suffers from a high computational overhead due to replication of the data and the induction of multiple base classifiers. Hence even modest-sized datasets may pose a computational challenge to ensemble learners such as Random Prism. Parallelism is often used to scale up algorithms to deal with large datasets. This paper investigates parallelisation for Random Prism, implements a prototype, and evaluates it empirically on a Hadoop computing cluster.
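Because the base classifiers are induced independently on replicated data, their training is embarrassingly parallel. The sketch below dispatches base learners to a local thread pool; the paper's prototype instead distributes this work over a Hadoop cluster, and the trivial stand-in learner here is purely illustrative:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(5)

# Toy data: the label is determined by feature 0.
X = rng.normal(size=(1000, 4))
y = (X[:, 0] > 0).astype(int)

def train_on_bootstrap(seed):
    """Train one base learner on its own bootstrap replicate of the data."""
    r = np.random.default_rng(seed)
    idx = r.integers(0, len(X), len(X))           # bootstrap replicate
    # Stand-in learner: pick the feature most correlated with the label.
    f = np.argmax([abs(np.corrcoef(X[idx, j], y[idx])[0, 1])
                   for j in range(X.shape[1])])
    return int(f)

# Each base learner trains independently, so the map parallelises trivially.
with ThreadPoolExecutor(max_workers=4) as pool:
    chosen = list(pool.map(train_on_bootstrap, range(8)))
print("feature chosen by each base learner:", chosen)
```

A real speedup requires the workers to run on separate cores or machines (as in the Hadoop prototype); the thread pool only shows the dispatch pattern.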
Abstract:
In this paper I analyze general equilibrium in a random Walrasian economy. Dependence among agents is introduced in the form of dependency neighborhoods. Under uncertainty, an agent may fail to survive due to a meager endowment in a particular state (a direct effect), as well as due to an unfavorable equilibrium price system at which the value of the endowment falls short of the minimum needed for survival (an indirect terms-of-trade effect). To illustrate the main result, I compute the stochastic limit of the equilibrium price and the probability of survival of an agent in a large Cobb-Douglas economy.
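In a Cobb-Douglas exchange economy, equilibrium prices solve a positive linear fixed-point equation and can be found by simple iteration. The sketch below uses randomly drawn expenditure shares and endowments, with a hypothetical subsistence threshold for survival (all numbers are illustrative, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(6)

# Random Cobb-Douglas exchange economy: agent i spends share alpha[i, j]
# of wealth on good j, and owns endowment omega[i, j].
n_agents, n_goods = 50, 3
alpha = rng.dirichlet(np.ones(n_goods), size=n_agents)    # rows sum to 1
omega = rng.uniform(0.1, 1.0, size=(n_agents, n_goods))

# Market clearing: p_j * supply_j = total spending on good j.
# Iterate the implied positive map (a power iteration) to a fixed point.
p = np.ones(n_goods)
for _ in range(200):
    wealth = omega @ p                      # p . omega_i for each agent
    spending = alpha.T @ wealth             # total spending per good
    p = spending / omega.sum(axis=0)        # clears each market
    p /= p.sum()                            # normalise prices

# Hypothetical survival rule: wealth must cover a subsistence expenditure.
survive = (omega @ p) > 0.4
print("equilibrium prices:", np.round(p, 3))
print("fraction surviving:", survive.mean())
```

By Walras's law the spending across goods sums to the value of total supply, so the normalised fixed point clears every market exactly; the survival fraction then reflects both the direct endowment effect and the terms-of-trade effect through p.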
Abstract:
BACKGROUND: Fibroblast growth factor 9 (FGF9) is secreted from bone marrow cells, which have been shown to improve systolic function after myocardial infarction (MI) in a clinical trial. FGF9 promotes cardiac vascularization during embryonic development but is only weakly expressed in the adult heart. METHODS AND RESULTS: We used a tetracycline-responsive binary transgene system based on the α-myosin heavy chain promoter to test whether conditional expression of FGF9 in the adult myocardium supports adaptation after MI. In sham-operated mice, transgenic FGF9 stimulated left ventricular hypertrophy with microvessel expansion and preserved systolic and diastolic function. After coronary artery ligation, transgenic FGF9 enhanced hypertrophy of the noninfarcted left ventricular myocardium with increased microvessel density, reduced interstitial fibrosis, attenuated fetal gene expression, and improved systolic function. Heart failure mortality after MI was markedly reduced by transgenic FGF9, whereas rupture rates were not affected. Adenoviral FGF9 gene transfer after MI similarly promoted left ventricular hypertrophy with improved systolic function and reduced heart failure mortality. Mechanistically, FGF9 stimulated proliferation and network formation of endothelial cells but induced no direct hypertrophic effects in neonatal or adult rat cardiomyocytes in vitro. FGF9-stimulated endothelial cell supernatants, however, induced cardiomyocyte hypertrophy via paracrine release of bone morphogenetic protein 6. In accord with this observation, expression of bone morphogenetic protein 6 and phosphorylation of its downstream targets SMAD1/5 were increased in the myocardium of FGF9 transgenic mice. CONCLUSIONS: Conditional expression of FGF9 promotes myocardial vascularization and hypertrophy with enhanced systolic function and reduced heart failure mortality after MI. These observations suggest a previously unrecognized therapeutic potential for FGF9 after MI.
Abstract:
In order to validate the reported precision of space‐based atmospheric composition measurements, validation studies often focus on measurements in the tropical stratosphere, where natural variability is weak. The scatter in tropical measurements can then be used as an upper limit on single‐profile measurement precision. Here we introduce a method of quantifying the scatter of tropical measurements which aims to minimize the effects of short‐term atmospheric variability while maintaining large enough sample sizes that the results can be taken as representative of the full data set. We apply this technique to measurements of O3, HNO3, CO, H2O, NO, NO2, N2O, CH4, CCl2F2, and CCl3F produced by the Atmospheric Chemistry Experiment–Fourier Transform Spectrometer (ACE‐FTS). Tropical scatter in the ACE‐FTS retrievals is found to be consistent with the reported random errors (RREs) for H2O and CO at altitudes above 20 km, validating the RREs for these measurements. Tropical scatter in measurements of NO, NO2, CCl2F2, and CCl3F is roughly consistent with the RREs as long as the effect of outliers in the data set is reduced through the use of robust statistics. The scatter in measurements of O3, HNO3, CH4, and N2O in the stratosphere, while larger than the RREs, is shown to be consistent with the variability simulated in the Canadian Middle Atmosphere Model. This result implies that, for these species, stratospheric measurement scatter is dominated by natural variability, not random error, which provides added confidence in the scientific value of single‐profile measurements.
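The role of robust statistics in taming outliers can be illustrated by comparing the classical standard deviation with the scaled median absolute deviation (MAD) on a sample containing a few gross outliers; the numbers below are synthetic, not ACE-FTS data:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "tropical scatter" sample: unit-variance noise plus a handful
# of gross outliers, standing in for occasional bad retrievals.
x = rng.normal(0.0, 1.0, size=5000)
x[:25] += 40.0                       # 0.5% gross outliers

std = x.std()
# MAD scaled by 1.4826 is a consistent estimator of sigma for Gaussians.
mad = 1.4826 * np.median(np.abs(x - np.median(x)))
print(f"std = {std:.2f}, robust scatter (scaled MAD) = {mad:.2f}")
```

The classical standard deviation is inflated several-fold by the outliers, while the MAD-based estimate stays close to the true scatter, which is why the robust version gives a fairer comparison against reported random errors.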