206 results for Breakdown Probability
Abstract:
The small-sample performance of Granger causality tests under different model dimensions, degrees of cointegration, directions of causality, and system stability is presented. Two tests based on maximum likelihood estimation of error-correction models (LR and WALD) are compared to a Wald test based on multivariate least squares estimation of a modified VAR (MWALD). In large samples all test statistics perform well in terms of size and power. For smaller samples, the LR and WALD tests perform better than the MWALD test. Overall, the LR test outperforms the other two in terms of size and power in small samples.
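A minimal sketch of the MWALD idea (a lag-augmented VAR in levels, with the Wald restriction applied only to the original lags) in Python; the bivariate single-equation setup, function name, and synthetic data are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np
import statsmodels.api as sm

def mwald_test(y, x, p, d=1):
    """Wald test of 'x does not Granger-cause y' in a levels VAR(p)
    equation estimated with d extra lags (lag augmentation), so that
    only the first p lags of x are tested."""
    k = p + d
    n = len(y)
    Y = y[k:]
    X = np.column_stack(
        [y[k - j:n - j] for j in range(1, k + 1)] +   # own lags of y
        [x[k - j:n - j] for j in range(1, k + 1)]     # lags of x
    )
    X = sm.add_constant(X)
    R = np.zeros((p, X.shape[1]))
    for j in range(p):
        R[j, 1 + k + j] = 1.0    # zero restrictions on x lags 1..p only
    return sm.OLS(Y, X).fit().wald_test(R)

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=300))             # an I(1) driver series
y = 0.5 * np.roll(x, 1) + rng.normal(size=300)  # y depends on lagged x
y[0] = 0.0
print(mwald_test(y, x, p=2))                    # small p-value expected
```

The extra d lags are left unrestricted, which is what lets the test keep its asymptotic chi-square distribution even when the series are integrated or cointegrated.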
Forecasting regional crop production using SOI phases: an example for the Australian peanut industry
Abstract:
Using peanuts as an example, a generic methodology is presented to forward-estimate regional crop production and associated climatic risks based on phases of the Southern Oscillation Index (SOI). Yield fluctuations caused by a highly variable rainfall environment are of concern to peanut processing and marketing bodies. The industry could profitably use forecasts of likely production to adjust its operations strategically. Significant, physically based lag-relationships exist between an index of the ocean/atmosphere El Niño/Southern Oscillation phenomenon and future rainfall in Australia and elsewhere. Combining knowledge of SOI phases in November and December with output from a dynamic simulation model allows the derivation of yield probability distributions based on historical rainfall data. This information is available shortly after planting a crop and at least 3-5 months prior to harvest. The study shows that in years when the November-December SOI phase is positive there is an 80% chance of exceeding average district yields. Conversely, in years when the November-December SOI phase is either negative or rapidly falling there is only a 5% chance of exceeding average district yields, but a 95% chance of below-average yields. This information allows the industry to adjust strategically for the expected volume of production. The study shows that simulation models can enhance SOI signals contained in rainfall distributions by discriminating between useful and damaging rainfall events. The methodology can be applied to other industries and regions.
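A toy sketch of the conditioning step in Python: group simulated yields by November-December SOI phase and report the probability of exceeding the district average. The column names, phase labels, and synthetic data are all illustrative assumptions:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
phases = rng.choice(["positive", "negative", "falling", "other"], size=80)
shift = {"positive": 0.4, "negative": -0.6, "falling": -0.5, "other": 0.0}
df = pd.DataFrame({
    "soi_phase": phases,
    "sim_yield": [2.0 + shift[p] + rng.normal(0, 0.5) for p in phases],
})

avg = df["sim_yield"].mean()                  # district average yield
exceed = (df.assign(above=df["sim_yield"] > avg)
            .groupby("soi_phase")["above"].mean())
print(exceed)   # P(yield > average | SOI phase), cf. the 80% / 5% figures
```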
Abstract:
In this paper we suggest a model of sequential auctions with endogenous participation in which each bidder forms a conjecture about the number of participants at each round. After learning his value, each bidder decides whether or not to participate in the auction, using his conjectures about the number of participants in each possible subgroup when calculating his expected value. In equilibrium, the conjectured probability is compatible with the probability of staying in the auction. In our model, players face participation costs, bidders may buy as many objects as they wish, and bidders may drop out at any round but cannot return to the auction. In particular, we can determine the number of participants and expected prices in equilibrium. We show that, for any bidding strategy, such a probability of staying in the auction exists. For the case of stochastically independent objects, we show that in equilibrium every bidder who decides to continue submits a bid equal to his value at each round. When objects are stochastically identical, we are able to show that expected prices are decreasing.
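A toy sketch of the consistency condition in a drastically simplified one-shot setting (second-price auction, uniform values, rivals entering with a fixed probability): iterate a conjectured participation probability until it matches the probability of actually entering. The numbers and entry rule are illustrative assumptions, not the paper's sequential model:

```python
import numpy as np

def expected_surplus(v, p, n, rng, m=5000):
    """Monte Carlo surplus of a value-v bidder in a second-price auction
    when each of the n-1 rivals enters w.p. p and bids its value."""
    entered = rng.random((m, n - 1)) < p
    price = (rng.random((m, n - 1)) * entered).max(axis=1)
    return np.mean(np.where(v > price, v - price, 0.0))

def equilibrium_participation(n=5, c=0.05, iters=25):
    rng = np.random.default_rng(2)
    grid = np.linspace(0.01, 0.99, 99)
    p = 0.5                                   # initial conjecture
    cutoff = 1.0
    for _ in range(iters):
        surplus = np.array([expected_surplus(v, p, n, rng) for v in grid])
        viable = surplus >= c                 # entry pays for these values
        cutoff = grid[np.argmax(viable)] if viable.any() else 1.0
        p = 0.5 * p + 0.5 * (1.0 - cutoff)    # damped consistency update
    return p, cutoff

p, cutoff = equilibrium_participation()
print(f"entry probability ~ {p:.2f}, value cutoff ~ {cutoff:.2f}")
```

The damped update is the fixed-point search: at convergence, the conjectured entry probability equals the measure of value types for whom entering covers the participation cost.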
Abstract:
This article examines the efficiency of the National Football League (NFL) betting market. The standard ordinary least squares (OLS) regression methodology is replaced by a probit model. This circumvents potential econometric problems, and allows us to implement more sophisticated betting strategies where bets are placed only when there is a relatively high probability of success. In-sample tests indicate that probit-based betting strategies generate statistically significant profits. Whereas the profitability of a number of these betting strategies is confirmed by out-of-sample testing, there is some inconsistency among the remaining out-of-sample predictions. Our results also suggest that widely documented inefficiencies in this market tend to dissipate over time.
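A minimal sketch of such a probit-based selective betting rule, assuming synthetic data and illustrative feature names (statsmodels' Probit stands in for whatever specification the authors use):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 400
line_move = rng.normal(size=n)          # e.g. change in the point spread
home = rng.integers(0, 2, size=n)       # home-team indicator
latent = 0.3 * line_move - 0.2 * home + rng.normal(size=n)
cover = (latent > 0).astype(int)        # 1 if the favourite covers

X = sm.add_constant(np.column_stack([line_move, home]))
fit = sm.Probit(cover, X).fit(disp=0)
p_hat = fit.predict(X)

threshold = 0.6                          # bet only on confident predictions
bets = (p_hat > threshold) | (p_hat < 1 - threshold)
picks = (p_hat > 0.5).astype(int)
hit_rate = (picks[bets] == cover[bets]).mean()
print(f"bets placed: {bets.sum()}, hit rate: {hit_rate:.2f}")
```

The threshold is what distinguishes this from the OLS approach: bets are placed only where the fitted cover probability is far enough from 0.5.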
Abstract:
Analysis of a major multi-site epidemiologic study of heart disease has required estimation of the pairwise correlation of several measurements across sub-populations. Because the measurements from each sub-population were subject to sampling variability, the Pearson product moment estimator of these correlations produces biased estimates. This paper proposes a model that takes into account within and between sub-population variation, provides algorithms for obtaining maximum likelihood estimates of these correlations and discusses several approaches for obtaining interval estimates. (C) 1997 by John Wiley & Sons, Ltd.
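To illustrate the bias, a small method-of-moments sketch (the paper itself uses maximum likelihood): sampling noise in the sub-population means attenuates the naive Pearson correlation, and subtracting the known sampling variances repairs it. The simulation settings are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
k, m = 50, 30                      # sub-populations, subjects per unit
rho = 0.8
true = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=k)
# observed sub-population means carry sampling error with variance 4/m
xbar = true[:, 0] + rng.normal(0, 2 / np.sqrt(m), size=k)
ybar = true[:, 1] + rng.normal(0, 2 / np.sqrt(m), size=k)

naive = np.corrcoef(xbar, ybar)[0, 1]
# subtract the known sampling variance from each observed variance;
# the covariance is unbiased when the errors are independent
vx = xbar.var(ddof=1) - 4 / m
vy = ybar.var(ddof=1) - 4 / m
corrected = np.cov(xbar, ybar, ddof=1)[0, 1] / np.sqrt(vx * vy)
print(f"naive {naive:.2f} vs corrected {corrected:.2f} (true {rho})")
```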
Abstract:
The identification of genes responsible for the rare cases of familial leukemia may afford insight into the mechanism underlying the more common sporadic occurrences. Here we test a single family with 11 relevant meioses transmitting autosomal dominant acute myelogenous leukemia (AML) and myelodysplasia for linkage to three potential candidate loci. In a different family with inherited AML, linkage to chromosome 21q22.1-22.2 was recently reported; we exclude linkage to 21q22.1-22.2, demonstrating that familial AML is a heterogeneous disease. After reviewing familial leukemia and observing anticipation in the form of a declining age of onset with each generation, we had proposed 9p21-22 and 16q22 as additional candidate loci. Whereas linkage to 9p21-22 can be excluded, the finding of a maximum two-point LOD score of 2.82 with the microsatellite marker D16S522 at a recombination fraction theta = 0 provides evidence supporting linkage to 16q22. Haplotype analysis reveals a 23.5-cM (17.9-Mb) commonly inherited region among all affected family members extending from D16S451 to D16S289. In order to extract maximum linkage information with missing individuals, incomplete informativeness with individual markers in this interval, and possible deviance from strict autosomal dominant inheritance, we performed nonparametric linkage analysis (NPL) and found a maximum NPL statistic corresponding to a P-value of .00098, close to the maximum conditional probability of linkage expected for a pedigree with this structure. Mutational analysis in this region specifically excludes expansion of the AT-rich minisatellite repeat FRA16B fragile site and the CAG trinucleotide repeat in the E2F-4 transcription factor. The "repeat expansion detection" method, capable of detecting dynamic mutation associated with anticipation, more generally excludes large CAG repeat expansion as a cause of leukemia in this family.
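For readers unfamiliar with the statistic, a minimal sketch of a two-point LOD score for phase-known, fully informative meioses; the counts are illustrative, and real pedigrees (with missing individuals and partially informative markers) require the likelihood-based analysis the authors describe:

```python
import numpy as np

def lod(recombinants, nonrecombinants, theta):
    """LOD(theta) = log10 L(theta) - log10 L(0.5) for phase-known meioses."""
    r, nr = recombinants, nonrecombinants
    if theta == 0 and r > 0:
        return -np.inf                 # any recombinant excludes theta = 0
    loglik = (r * np.log10(theta) if r else 0.0) + nr * np.log10(1 - theta)
    return loglik - (r + nr) * np.log10(0.5)

# 11 fully informative meioses, no recombinants, evaluated near theta = 0
print(lod(0, 11, 1e-9))    # ~ 11 * log10(2) ~ 3.31
```

The family's reported maximum of 2.82 is below this ceiling because not every meiosis is fully informative at the marker.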
Abstract:
Empowering front-line staff to deal with service failures has been proposed as a method of recovering from service breakdown and ensuring greater customer satisfaction. However, no empirical study has investigated consumer responses to empowerment strategies. This research investigates the effect on customer satisfaction and service quality of two employee characteristics: the degree to which the employee is empowered (full, limited, and none), and the employee's communication style (accommodative: informal and personal; underaccommodative: formal and impersonal). These employee characteristics are studied within the context of service failures. Subjects were shown videotaped service scenarios, and asked to complete satisfaction and service quality ratings. Results revealed that the fully empowered employee produced more customer satisfaction than the other conditions, but only when the service provider used an accommodating style of communication. Fully empowered and nonempowered employees were not judged differently when an underaccommodating style of communication was adopted. (C) 1997 John Wiley & Sons, Inc.
Abstract:
Soil structure is generally defined as the arrangement, orientation, and organization of the primary particles of sand, silt, and clay into compound aggregates, which exhibit properties unequal to those of a mass of nonaggregated material with a similar texture [6]. Soil structure therefore conveys specific properties to the soil, and any alteration of the structural units, i.e., breakdown or structural development, will affect the soil's physical properties. The aggregation and organization of the soil particles tend to form a hierarchical order [4, 5] in which the lower orders tend to have higher densities and greater internal strength than the higher orders. A schematic diagram of the hierarchical nature of soil structural elements in a clay soil is given in Fig. 1 [4]. Clay particles tend to form domains (packets of parallel clay sheets, generally consisting of 5-7 sheets); in turn, several domains form clusters, followed by several orders of clusters and micro- and macroaggregates. The hierarchical nature implies that the destruction of a lower order will result in the destruction of all higher hierarchical orders. An example is the dispersion of sodic clay domains, which destroys all higher orders and results in a dense soil with low hydraulic conductivity. Hence the clay domains are the fundamental building blocks of the soil, and their integrity may determine the soil's physical properties and behavior.
Abstract:
The probit model is a popular device for explaining binary choice decisions in econometrics. It has been used to describe choices such as labor force participation, travel mode, home ownership, and type of education. These and many more examples can be found in papers by Amemiya (1981) and Maddala (1983). Given the contribution of economics towards explaining such choices, and given the nature of data that are collected, prior information on the relationship between a choice probability and several explanatory variables frequently exists. Bayesian inference is a convenient vehicle for including such prior information. Given the increasing popularity of Bayesian inference it is useful to ask whether inferences from a probit model are sensitive to a choice between Bayesian and sampling theory techniques. Of interest is the sensitivity of inference on coefficients, probabilities, and elasticities. We consider these issues in a model designed to explain choice between fixed and variable interest rate mortgages. Two Bayesian priors are employed: a uniform prior on the coefficients, designed to be noninformative for the coefficients, and an inequality restricted prior on the signs of the coefficients. We often know, a priori, whether increasing the value of a particular explanatory variable will have a positive or negative effect on a choice probability. This knowledge can be captured by using a prior probability density function (pdf) that is truncated to be positive or negative. Thus, three sets of results are compared: those from maximum likelihood (ML) estimation, those from Bayesian estimation with an unrestricted uniform prior on the coefficients, and those from Bayesian estimation with a uniform prior truncated to accommodate inequality restrictions on the coefficients.
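A minimal sketch of the truncated-prior idea: a flat prior restricted to one coefficient's sign, sampled with random-walk Metropolis over the probit likelihood. The data, the sign choice, and the tuning constants are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
n = 300
rate_diff = rng.normal(size=n)          # fixed-minus-variable rate spread
X = np.column_stack([np.ones(n), rate_diff])
y = ((-0.8 * rate_diff + rng.normal(size=n)) > 0).astype(int)

def loglik(beta):
    p = norm.cdf(X @ beta).clip(1e-12, 1 - 1e-12)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def log_post(beta):
    if beta[1] >= 0:                     # truncated flat prior: beta1 < 0
        return -np.inf
    return loglik(beta)

beta = np.array([0.0, -0.1])
draws = []
for _ in range(5000):                    # random-walk Metropolis
    prop = beta + 0.15 * rng.normal(size=2)
    if np.log(rng.random()) < log_post(prop) - log_post(beta):
        beta = prop
    draws.append(beta.copy())
print("posterior mean:", np.mean(draws[1000:], axis=0))
```

Comparing this posterior mean with the unconstrained ML fit shows how the sign restriction pulls wrongly signed or weakly identified coefficients back into the admissible region.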
Abstract:
A 4-wheel is a simple graph on 5 vertices with 8 edges, formed by taking a 4-cycle and joining a fifth vertex (the centre of the 4-wheel) to each of the other four vertices. A lambda-fold 4-wheel system of order n is an edge-disjoint decomposition of the complete multigraph lambda K(n) into 4-wheels. Here, with five isolated possible exceptions when lambda = 2, we give necessary and sufficient conditions for a lambda-fold 4-wheel system of order n to be transformed into a lambda-fold 4-cycle system of order n by removing the centre vertex from each 4-wheel together with its four adjacent edges (retaining the 4-cycle wheel rim), and reassembling the edges adjacent to wheel centres into 4-cycles.
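A small Python sketch of the objects involved, assuming an edge-list representation: a helper that lists a 4-wheel's eight edges, a checker for edge-disjoint decompositions of lambda K(n), and the centre-stripping step. The reassembly of spokes into 4-cycles (the paper's actual construction) is not attempted here:

```python
from collections import Counter
from itertools import combinations

def wheel_edges(c, rim):
    """Edges of the 4-wheel with centre c and rim 4-cycle (a, b, d, e):
    four rim edges followed by four spokes."""
    a, b, d, e = rim
    rim_edges = [(a, b), (b, d), (d, e), (e, a)]
    spokes = [(c, v) for v in rim]
    return [tuple(sorted(edge)) for edge in rim_edges + spokes]

def is_decomposition(wheels, n, lam):
    """True iff the wheels cover every edge of K_n exactly lam times."""
    used = Counter(e for c, rim in wheels for e in wheel_edges(c, rim))
    target = Counter({tuple(sorted(p)): lam
                      for p in combinations(range(n), 2)})
    return used == target

def strip_centres(wheels):
    """Remove each wheel's centre: keep the rim 4-cycles and return the
    detached spokes, which must then be reassembled into 4-cycles."""
    rims = [rim for _, rim in wheels]
    spokes = [e for c, rim in wheels for e in wheel_edges(c, rim)[4:]]
    return rims, spokes

# one 4-wheel cannot decompose K_5 (8 edges vs 10):
print(is_decomposition([(0, (1, 2, 3, 4))], 5, 1))   # False
```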
Abstract:
Interval-valued versions of the max-flow min-cut theorem and Karp-Edmonds algorithm are developed and provide robustness estimates for flows in networks in an imprecise or uncertain environment. These results are extended to networks with fuzzy capacities and flows. (C) 2001 Elsevier Science B.V. All rights reserved.
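A minimal sketch of the interval-valued bound in Python using networkx: since the max-flow value is monotone in the capacities, running the Edmonds-Karp flow routine once with all capacities at their lower bounds and once at their upper bounds brackets the achievable flow. The tiny network is an illustrative assumption:

```python
import networkx as nx
from networkx.algorithms.flow import edmonds_karp

arcs = {                     # arc: (lower capacity, upper capacity)
    ("s", "a"): (3, 5), ("s", "b"): (2, 4),
    ("a", "t"): (2, 6), ("b", "t"): (1, 3), ("a", "b"): (1, 2),
}

def max_flow_at(bound):
    """Max flow with every capacity at its lower (0) or upper (1) bound."""
    G = nx.DiGraph()
    for (u, v), caps in arcs.items():
        G.add_edge(u, v, capacity=caps[bound])
    value, _ = nx.maximum_flow(G, "s", "t", flow_func=edmonds_karp)
    return value

# monotonicity of max flow in the capacities makes these valid bounds
print(f"max-flow value lies in [{max_flow_at(0)}, {max_flow_at(1)}]")
```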
Abstract:
The anisotropic norm of a linear discrete time-invariant system measures system output sensitivity to stationary Gaussian input disturbances of bounded mean anisotropy. Mean anisotropy characterizes the degree of predictability (or colouredness) and spatial non-roundness of the noise. The anisotropic norm falls between the H-2 and H-infinity norms and accommodates their loss of performance when the probability structure of input disturbances is not exactly known. This paper develops a method for numerical computation of the anisotropic norm which involves linked Riccati and Lyapunov equations and an associated equation of a special type.
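A sketch of the two bracketing norms for a stable system x(t+1) = Ax(t) + Bw(t), y(t) = Cx(t) + Dw(t): the H-2 norm via a discrete Lyapunov equation and the H-infinity norm via frequency gridding. The example matrices are illustrative assumptions; the paper's linked Riccati-Lyapunov machinery for the anisotropic norm itself is not reproduced here:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.5, 0.1], [0.0, 0.3]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# H-2 norm: sqrt(trace(C P C' + D D')) with P solving P = A P A' + B B'
P = solve_discrete_lyapunov(A, B @ B.T)
h2 = np.sqrt(np.trace(C @ P @ C.T + D @ D.T))

# H-infinity norm: sup over the unit circle of the largest singular
# value of the transfer matrix C (zI - A)^-1 B + D
I = np.eye(A.shape[0])
hinf = max(
    np.linalg.svd(C @ np.linalg.solve(np.exp(1j * w) * I - A, B) + D,
                  compute_uv=False)[0]
    for w in np.linspace(0, np.pi, 400)
)
print(f"H2 = {h2:.3f}, Hinf = {hinf:.3f}  (anisotropic norm in between)")
```

As the mean anisotropy bound runs from zero to infinity, the anisotropic norm sweeps from the (suitably scaled) H-2 value up to the H-infinity value, which is what these two computations bracket.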
Abstract:
We shall examine a model, first studied by Brockwell et al. [Adv Appl Probab 14 (1982) 709], which can be used to describe the long-term behaviour of populations that are subject to catastrophic mortality or emigration events. Populations can suffer dramatic declines when disease, such as an introduced virus, affects the population, or when food shortages occur, due to overgrazing or fluctuations in rainfall. However, perhaps surprisingly, such populations can survive for long periods and, although they may eventually become extinct, they can exhibit an apparently stationary regime. It is useful to be able to model this behaviour. This is particularly true of the ecological examples that motivated the present study, since, in order to properly manage these populations, it is necessary to be able to predict persistence times and to estimate the conditional probability distribution of population size. We shall see that although our model predicts eventual extinction, the time till extinction can be long, and the apparently stationary regime exhibited by these populations over any reasonable time scale can be explained using a quasistationary distribution. (C) 2001 Elsevier Science Ltd. All rights reserved.
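A minimal sketch of a quasi-stationary distribution for a toy birth-catastrophe chain with absorbing state 0: the QSD is the normalized left Perron eigenvector of the transition matrix restricted to the transient states. The rates, the catastrophe rule, and the state-space truncation are illustrative assumptions, not the Brockwell et al. model:

```python
import numpy as np

N = 60
P = np.zeros((N + 1, N + 1))
P[0, 0] = 1.0                          # extinction is absorbing
for i in range(1, N + 1):
    up = 0.55 if i < N else 0.0        # birth step
    cat = 0.10                         # catastrophe: population halves
    stay = 1.0 - up - cat
    P[i, min(i + 1, N)] += up
    P[i, i // 2] += cat                # i // 2 == 0 means extinction
    P[i, i] += stay

Q = P[1:, 1:]                          # restriction to transient states
vals, vecs = np.linalg.eig(Q.T)        # left eigenvectors of Q
k = np.argmax(vals.real)               # Perron root rho < 1
qsd = np.abs(vecs[:, k].real)
qsd /= qsd.sum()
print("extinction rate per step:", 1 - vals[k].real)
print("QSD mode at population size:", 1 + np.argmax(qsd))
```

Conditional on non-extinction, the chain's law converges to this distribution, which is why the population looks stationary over any reasonable time scale even though extinction is certain.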
Abstract:
A mixture model incorporating long-term survivors has been adopted in the field of biostatistics where some individuals may never experience the failure event under study. The surviving fractions may be considered as cured. In most applications, the survival times are assumed to be independent. However, when the survival data are obtained from a multi-centre clinical trial, it is conceivable that the environmental conditions and facilities shared within a clinic affect the proportion cured as well as the failure risk for the uncured individuals. This necessitates a long-term survivor mixture model with random effects. In this paper, the long-term survivor mixture model is extended for the analysis of multivariate failure time data using the generalized linear mixed model (GLMM) approach. The proposed model is applied to analyse a numerical data set from a multi-centre clinical trial of carcinoma as an illustration. Some simulation experiments are performed to assess the applicability of the model based on the average biases of the estimates obtained. Copyright (C) 2001 John Wiley & Sons, Ltd.
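A minimal sketch of the underlying cure-fraction mixture with an exponential latency distribution, fitted by maximum likelihood; the paper's GLMM additionally places clinic-level random effects on both the cure fraction and the failure risk. The data settings are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
n, pi_true, lam_true = 500, 0.3, 0.8
cured = rng.random(n) < pi_true
t_event = np.where(cured, np.inf, rng.exponential(1 / lam_true, n))
t_cens = rng.exponential(2.0, n)                 # independent censoring
t = np.minimum(t_event, t_cens)
delta = (t_event <= t_cens).astype(float)        # 1 = failure observed

def negloglik(theta):
    pi = 1 / (1 + np.exp(-theta[0]))             # logit-link cure fraction
    lam = np.exp(theta[1])                       # log-link hazard rate
    surv = pi + (1 - pi) * np.exp(-lam * t)      # population survival
    dens = (1 - pi) * lam * np.exp(-lam * t)     # uncured failure density
    return -np.sum(delta * np.log(dens) + (1 - delta) * np.log(surv))

fit = minimize(negloglik, x0=[0.0, 0.0])
pi_hat = 1 / (1 + np.exp(-fit.x[0]))
print(f"cure fraction {pi_hat:.2f} (true {pi_true}), "
      f"rate {np.exp(fit.x[1]):.2f} (true {lam_true})")
```

Censored observations contribute the mixed survival term pi + (1 - pi)S(t), which is how the cured fraction is identified from long, event-free follow-up.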