837 results for Out-of-sample


Relevance: 100.00%

Abstract:

Background: It has been argued that the alcohol industry uses corporate social responsibility activities to influence policy and undermine public health, and that every opportunity should be taken to scrutinise such activities. This study analyses a controversial Diageo-funded ‘responsible drinking’ campaign (“Stop Out of Control Drinking”, or SOOCD) in Ireland. The study aims to identify how the campaign and its advisory board members frame and define (i) alcohol-related harms and their causes, and (ii) possible solutions. Methods: Documentary analysis of SOOCD campaign material. This includes newspaper articles (n = 9), media interviews (n = 11), Facebook posts (n = 92), and tweets (n = 340) produced by the campaign and by board members. All material was coded inductively, and a thematic analysis undertaken, with codes aggregated into sub-themes. Results: The SOOCD campaign utilises vague or self-defined concepts of ‘out of control’ and ‘moderate’ drinking, tending to present alcohol problems as behavioural rather than health issues. Neither concept is quantified with respect to actual drinking levels. The campaign emphasises alcohol-related antisocial behaviour among young people, particularly young women. In discussing solutions to alcohol-related problems, it focuses on public opinion rather than on scientific evidence, and on educational approaches and information provision, misrepresenting these as effective. “Moderate drinking” is presented as a behavioural issue (“negative drinking behaviours”) rather than as a health issue. Conclusions: The ‘Stop Out of Control Drinking’ campaign frames alcohol problems and solutions in ways unfavourable to public health, and closely reflects other Diageo Corporate Social Responsibility (CSR) activity, as well as alcohol and tobacco industry strategies more generally. This framing, and in particular the framing of alcohol harms as a behavioural issue, with the implication that consumption should be guided only by self-defined limits, may not have been recognised by all board members. It suggests a need for awareness-raising efforts among the public, the third sector and policymakers about alcohol industry strategies.

Relevance: 100.00%

Abstract:

We seek to examine the efficacy and safety of prereperfusion emergency medical services (EMS)–administered intravenous metoprolol in anterior ST-segment elevation myocardial infarction patients undergoing eventual primary angioplasty. This is a prespecified subgroup analysis of the Effect of Metoprolol in Cardioprotection During an Acute Myocardial Infarction trial population, who all eventually received oral metoprolol within 12 to 24 hours. We studied patients receiving intravenous metoprolol by EMS and compared them with others treated by EMS but not receiving intravenous metoprolol. Outcomes included infarct size and left ventricular ejection fraction on cardiac magnetic resonance imaging at 1 week, and safety by measuring the incidence of the predefined combined endpoint (composite of death, malignant ventricular arrhythmias, advanced atrioventricular block, cardiogenic shock, or reinfarction) within the first 24 hours. From the total population of the trial (N=270), 147 patients (54%) were recruited during out-of-hospital assistance and transferred to the primary angioplasty center (74 intravenous metoprolol and 73 controls). Infarct size was smaller in patients receiving intravenous metoprolol compared with controls (23.4 [SD 15.0] versus 34.0 [SD 23.7] g; adjusted difference –11.4; 95% confidence interval [CI] –18.6 to –4.3). Left ventricular ejection fraction was higher in the intravenous metoprolol group (48.1% [SD 8.4%] versus 43.1% [SD 10.2%]; adjusted difference 5.0; 95% CI 1.6 to 8.4). Metoprolol administration did not increase the incidence of the prespecified safety combined endpoint: 6.8% versus 17.8% in controls (risk difference –11.1; 95% CI –21.5 to –0.6). Out-of-hospital administration of intravenous metoprolol by EMS within 4.5 hours of symptom onset in our subjects reduced infarct size and improved left ventricular ejection fraction with no excess of adverse events during the first 24 hours.

Relevance: 100.00%

Abstract:

The term structure of interest rates is often summarized using a handful of yield factors that capture shifts in the shape of the yield curve. In this paper, we develop a comprehensive model for volatility dynamics in the level, slope, and curvature of the yield curve that simultaneously includes level and GARCH effects along with regime shifts. We show that the level of the short rate is useful in modeling the volatility of the three yield factors and that there are significant GARCH effects present even after including a level effect. Further, we find that allowing for regime shifts in the factor volatilities dramatically improves the model’s fit and strengthens the level effect. We also show that a regime-switching model with level and GARCH effects provides the best out-of-sample forecasting performance of yield volatility. We argue that the auxiliary models often used to estimate term structure models with simulation-based estimation techniques should be consistent with the main features of the yield curve that are identified by our model.
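
As a rough illustration of the kind of specification described above (not the authors' estimated model), the sketch below computes a hypothetical level-GARCH(1,1) conditional-variance recursion for a single yield factor, in which the lagged short rate enters as a multiplicative level effect; the functional form, parameter names and values are all illustrative assumptions.

```python
import numpy as np

def level_garch_variance(eps, short_rate, omega, alpha, beta, gamma):
    """Conditional variance h_t = (omega + alpha*eps_{t-1}^2 + beta*h_{t-1}) * r_{t-1}^gamma.
    Illustrative level-GARCH(1,1) specification, not an estimated model."""
    T = len(eps)
    h = np.empty(T)
    h[0] = np.var(eps)                                   # initialise at the sample variance
    for t in range(1, T):
        garch_part = omega + alpha * eps[t - 1] ** 2 + beta * h[t - 1]
        h[t] = garch_part * short_rate[t - 1] ** gamma   # level effect of the short rate
    return h

# toy usage: simulated factor shocks and a short-rate series
rng = np.random.default_rng(0)
eps = rng.normal(scale=0.1, size=500)
r = np.abs(rng.normal(loc=0.05, scale=0.01, size=500))
h = level_garch_variance(eps, r, omega=1e-5, alpha=0.05, beta=0.90, gamma=0.5)
```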

Relevance: 100.00%

Abstract:

Background Road safety targets are widely used and provide a basis for evaluating progress in road safety outcomes against a quantified goal. In Australia, a reduction in fatalities from road traffic crashes (RTCs) is a public policy objective: a national target of no more than 5.6 fatalities per 100,000 population by 2010 was set in 2001. The purpose of this paper is to examine the progress Australia and its states and territories have made in reducing RTC fatalities, and to estimate when the 2010 target may be reached by the jurisdictions. Methods Following a descriptive analysis, univariate time-series models are used to estimate past trends in fatality rates over recent decades. Data for differing time periods are analysed and different trend specifications estimated. Preferred models were selected on the basis of statistical criteria and the period covered by the data. The results of the preferred regressions are used to produce out-of-sample forecasts of when the national target may be attained by each jurisdiction. Though there are limitations with the time-series approach used, inadequate data precluded the estimation of a full causal/structural model. Results Statistically significant reductions in fatality rates since 1971 were found for all jurisdictions, with the national rate decreasing on average by 3% per year since 1992. However, the gains have varied across time and space, with percentage changes in fatality rates ranging from an 8% increase in New South Wales over 1972-1981 to a 46% decrease in Queensland over 1982-1991. Based on an estimate of past trends, it is possible that the target set for 2010 may not be reached nationally until 2016. Unsurprisingly, the analysis indicated a range of outcomes for the respective state/territory jurisdictions, though these results should be interpreted with caution because of the differing assumptions and data periods involved. Conclusions Results indicate that while Australia has been successful over recent decades in reducing RTC mortality, an important gap between aspirations and achievements remains. Moreover, unless there are fairly radical ("trend-breaking") changes in the factors that affect the incidence of RTC fatalities, deaths from RTCs are likely to remain above the national target in some areas of Australia for years to come.
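
As an illustration of the forecasting exercise only (not the paper's preferred time-series models or its data), the sketch below fits a log-linear trend to a made-up annual fatality-rate series and extrapolates out of sample to find the first year the trend crosses the 5.6 per 100,000 target.

```python
import numpy as np

# hypothetical annual fatalities per 100,000 population (illustrative data only)
years = np.arange(1992, 2010)
rate = 11.0 * np.exp(-0.03 * (years - 1992)) \
       * np.exp(np.random.default_rng(1).normal(0, 0.02, len(years)))

# fit a log-linear trend: log(rate_t) = a + b * year_t
b, a = np.polyfit(years, np.log(rate), 1)

# extrapolate out of sample and report the first year the trend reaches the target
target = 5.6
future = np.arange(2010, 2031)
forecast = np.exp(a + b * future)
hit = future[forecast <= target]
print("target first reached in:", hit[0] if hit.size else "beyond 2030")
```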

Relevance: 100.00%

Abstract:

Between-subject and within-subject variability is ubiquitous in biology and physiology, and understanding and dealing with it is one of the biggest challenges in medicine. At the same time, it is difficult to investigate this variability by experiments alone. A recent modelling and simulation approach, known as population of models (POM), allows this exploration to take place by building a mathematical model consisting of multiple parameter sets calibrated against experimental data. However, finding such sets within the high-dimensional parameter space of complex electrophysiological models is computationally challenging. By placing the POM approach within a statistical framework, we develop a novel and efficient algorithm based on sequential Monte Carlo (SMC). We compare the SMC approach with Latin hypercube sampling (LHS), a method commonly adopted in the literature for obtaining the POM, in terms of efficiency and output variability in the presence of a drug block, through an in-depth investigation using the Beeler-Reuter cardiac electrophysiological model. We show improved efficiency via SMC and that it produces responses similar to LHS when making out-of-sample predictions in the presence of a simulated drug block.
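
A minimal sketch of the population-of-models construction using plain Latin hypercube sampling, the baseline the paper compares against rather than its SMC algorithm: candidate parameter sets are drawn by LHS and kept only when a summary output of a stand-in model falls inside an assumed calibration range. The toy model and ranges below are placeholders, not the Beeler-Reuter model.

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng):
    """Basic Latin hypercube sample within per-parameter [low, high] bounds."""
    d = len(bounds)
    u = (rng.random((n_samples, d)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(d):                       # shuffle strata independently per dimension
        rng.shuffle(u[:, j])
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

def toy_model(theta):
    """Stand-in for an electrophysiology simulator: returns one summary output."""
    g1, g2 = theta
    return 100.0 * g1 / (1.0 + g2)           # e.g. a surrogate 'action potential duration'

rng = np.random.default_rng(0)
candidates = latin_hypercube(5000, bounds=[(0.5, 2.0), (0.1, 1.0)], rng=rng)
outputs = np.array([toy_model(th) for th in candidates])

# keep parameter sets whose output lies inside an assumed experimental range
population = candidates[(outputs > 60.0) & (outputs < 120.0)]
print(len(population), "parameter sets accepted into the population of models")
```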

Relevance: 100.00%

Abstract:

This thesis addresses the modeling of financial time series, especially stock market returns and daily price ranges. Modeling data of this kind can be approached with so-called multiplicative error models (MEM). These models nest several well-known time series models such as GARCH, ACD and CARR models. They are able to capture many well-established features of financial time series, including volatility clustering and leptokurtosis. In contrast to these phenomena, different kinds of asymmetries have received relatively little attention in the existing literature. In this thesis asymmetries arise from various sources. They are observed in both conditional and unconditional distributions, for variables with non-negative values and for variables that take values on the real line. In the multivariate context, asymmetries can be observed in the marginal distributions as well as in the relationships between the variables modeled. New methods for all these cases are proposed. Chapter 2 considers GARCH models and the modeling of returns of two stock market indices. The chapter introduces the so-called generalized hyperbolic (GH) GARCH model to account for asymmetries in both the conditional and the unconditional distribution. In particular, two special cases of the GARCH-GH model which describe the data most accurately are proposed. They are found to improve the fit of the model when compared to symmetric GARCH models. The advantages of accounting for asymmetries are also observed through Value-at-Risk applications. Both theoretical and empirical contributions are provided in Chapter 3 of the thesis. In this chapter the so-called mixture conditional autoregressive range (MCARR) model is introduced, examined and applied to daily price ranges of the Hang Seng Index. The conditions for the strict and weak stationarity of the model, as well as an expression for the autocorrelation function, are obtained by writing the MCARR model as a first-order autoregressive process with random coefficients. The chapter also introduces the inverse gamma (IG) distribution to CARR models. The advantages of the CARR-IG and MCARR-IG specifications over conventional CARR models are found in the empirical application both in- and out-of-sample. Chapter 4 discusses the simultaneous modeling of absolute returns and daily price ranges. In this part of the thesis a vector multiplicative error model (VMEM) with an asymmetric Gumbel copula is found to provide substantial benefits over the existing VMEM models based on elliptical copulas. The proposed specification is able to capture the highly asymmetric dependence of the modeled variables, thereby improving the performance of the model considerably. The economic significance of the results is established by examining the information content of the volatility forecasts derived.
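
The multiplicative error structure underlying CARR-type models can be sketched in a few lines. The code below simulates a generic CARR(1,1) with unit-mean exponential innovations; it is not the thesis's GH-GARCH, MCARR or IG specifications, and the parameter values are illustrative.

```python
import numpy as np

def simulate_carr(T, omega, alpha, beta, rng):
    """CARR(1,1): observed range x_t = lam_t * eps_t with unit-mean eps_t >= 0,
    conditional mean lam_t = omega + alpha * x_{t-1} + beta * lam_{t-1}."""
    x = np.empty(T)
    lam = np.empty(T)
    lam[0] = omega / (1.0 - alpha - beta)        # start at the unconditional mean
    x[0] = lam[0] * rng.exponential(1.0)
    for t in range(1, T):
        lam[t] = omega + alpha * x[t - 1] + beta * lam[t - 1]
        x[t] = lam[t] * rng.exponential(1.0)     # exponential draws have mean one
    return x, lam

ranges, cond_mean = simulate_carr(1000, omega=0.05, alpha=0.15, beta=0.80,
                                  rng=np.random.default_rng(42))
```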

Relevance: 100.00%

Abstract:

This thesis studies binary time series models and their applications in empirical macroeconomics and finance. In addition to previously suggested models, new dynamic extensions are proposed to the static probit model commonly used in the previous literature. In particular, we are interested in probit models with an autoregressive model structure. In Chapter 2, the main objective is to compare the predictive performance of the static and dynamic probit models in forecasting the U.S. and German business cycle recession periods. Financial variables, such as interest rates and stock market returns, are used as predictive variables. The empirical results suggest that the recession periods are predictable and that dynamic probit models, especially models with the autoregressive structure, outperform the static model. Chapter 3 proposes a Lagrange Multiplier (LM) test for the usefulness of the autoregressive structure of the probit model. The finite sample properties of the LM test are considered with simulation experiments. Results indicate that the two alternative LM test statistics have reasonable size and power in large samples. In small samples, a parametric bootstrap method is suggested to obtain approximately correct size. In Chapter 4, the predictive power of dynamic probit models in predicting the direction of stock market returns is examined. The novel idea is to use the recession forecast (see Chapter 2) as a predictor of the stock return sign. The evidence suggests that the signs of the U.S. excess stock returns over the risk-free return are predictable both in and out of sample. The new "error correction" probit model yields the best forecasts and also outperforms other predictive models, such as ARMAX models, in terms of statistical and economic goodness-of-fit measures. Chapter 5 generalizes the analysis of the univariate models considered in Chapters 2-4 to the case of a bivariate model. A new bivariate autoregressive probit model is applied to predict the current state of the U.S. business cycle and growth rate cycle periods. Evidence of predictability of both cycle indicators is obtained, and the bivariate model is found to outperform the univariate models in terms of predictive power.
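
A stripped-down version of an autoregressive probit of the kind described can be written as follows. The state equation s_t = omega + alpha*s_{t-1} + beta*x_t is an assumed illustrative form, estimated here by maximising the likelihood with a generic optimiser rather than the thesis's exact procedures; the data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(params, y, x):
    """Autoregressive probit: P(y_t = 1) = Phi(s_t), s_t = omega + alpha*s_{t-1} + beta*x_t."""
    omega, alpha, beta = params
    s, ll = 0.0, 0.0
    for t in range(len(y)):
        s = omega + alpha * s + beta * x[t]
        p = np.clip(norm.cdf(s), 1e-10, 1 - 1e-10)
        ll += y[t] * np.log(p) + (1 - y[t]) * np.log(1 - p)
    return -ll

# toy data: a binary indicator driven by a noisy predictor (e.g. a term spread)
rng = np.random.default_rng(0)
x = rng.normal(size=300)
y = (x + rng.normal(scale=0.8, size=300) > 0).astype(int)

fit = minimize(neg_loglik, x0=np.array([0.0, 0.2, 0.5]), args=(y, x), method="Nelder-Mead")
print("estimated (omega, alpha, beta):", fit.x)
```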

Relevance: 100.00%

Abstract:

[ES] Implied models are one of the alternatives to the Black-Scholes framework for option valuation that has seen the greatest development in recent years. Within this approach there are several variants: implied trees, models with a deterministic volatility function, and models with an implied volatility function. All of them are built from an estimate of the risk-neutral probability distribution of the future price of the underlying asset that is consistent with the market prices of traded options. As a result, implied models deliver good in-sample option valuation results. However, their performance as a prediction tool for out-of-sample options is not satisfactory. This article analyses the extent to which this approach improves option valuation, from both a theoretical and a practical point of view.
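
The common starting point of these implied approaches, an estimate of the risk-neutral density consistent with quoted option prices, can be illustrated with the Breeden-Litzenberger relation (the density is exp(rT) times the second derivative of the call price with respect to the strike), approximated below by finite differences on a hypothetical grid of call prices.

```python
import numpy as np
from scipy.stats import norm

def risk_neutral_density(strikes, call_prices, r, T):
    """Breeden-Litzenberger: q(K) ~ exp(r*T) * d^2 C / dK^2, via central differences
    on an evenly spaced strike grid of call prices."""
    dK = strikes[1] - strikes[0]
    d2C = (call_prices[2:] - 2.0 * call_prices[1:-1] + call_prices[:-2]) / dK**2
    return strikes[1:-1], np.exp(r * T) * d2C

# hypothetical call prices from a smooth Black-Scholes curve (illustrative data only)
strikes = np.linspace(60.0, 140.0, 81)
S0, r, T, sigma = 100.0, 0.02, 0.5, 0.25
d1 = (np.log(S0 / strikes) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
calls = S0 * norm.cdf(d1) - strikes * np.exp(-r * T) * norm.cdf(d2)

grid, q = risk_neutral_density(strikes, calls, r, T)
print("density mass recovered:", (q * (grid[1] - grid[0])).sum())   # should be close to 1
```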

Relevance: 100.00%

Abstract:

In the first part of the thesis we explore three fundamental questions that arise naturally when we conceive a machine learning scenario where the training and test distributions can differ. Contrary to conventional wisdom, we show that mismatched training and test distributions can in fact yield better out-of-sample performance. This optimal performance can be obtained by training with the dual distribution. This optimal training distribution depends on the test distribution set by the problem, but not on the target function that we want to learn. We show how to obtain this distribution in both discrete and continuous input spaces, as well as how to approximate it in a practical scenario. Benefits of using this distribution are exemplified in both synthetic and real data sets.

In order to apply the dual distribution in the supervised learning scenario where the training data set is fixed, it is necessary to use weights to make the sample appear as if it came from the dual distribution. We explore the negative effect that weighting a sample can have. The theoretical decomposition of the effect of weighting on the out-of-sample error is easy to understand but not actionable in practice, as the quantities involved cannot be computed. Hence, we propose the Targeted Weighting algorithm, which determines, in a practical setting, whether the out-of-sample performance will improve for a given set of weights. This is necessary because the setting assumes there are no labeled points distributed according to the test distribution, only unlabeled samples.
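
The weighting step can be illustrated with a standard covariate-shift device (this is not the thesis's Targeted Weighting algorithm): estimate the density ratio p_test(x)/p_train(x) by training a probabilistic classifier to separate unlabeled test inputs from training inputs, and use the resulting weights when fitting the downstream model. The data below are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

rng = np.random.default_rng(0)

# hypothetical mismatch: training inputs centred at 0, test inputs centred at 1
X_train = rng.normal(0.0, 1.0, size=(500, 1))
y_train = np.sin(X_train[:, 0]) + rng.normal(0.0, 0.1, 500)
X_test_unlabeled = rng.normal(1.0, 1.0, size=(500, 1))

# classifier-based density-ratio estimate: w(x) proportional to P(test | x) / P(train | x)
Z = np.vstack([X_train, X_test_unlabeled])
domain = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_test_unlabeled))])
clf = LogisticRegression().fit(Z, domain)
p_test_given_x = clf.predict_proba(X_train)[:, 1]
weights = p_test_given_x / (1.0 - p_test_given_x)

# fit the downstream model with and without the importance weights
plain = Ridge().fit(X_train, y_train)
weighted = Ridge().fit(X_train, y_train, sample_weight=weights)
```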

Finally, we propose a new class of matching algorithms that can be used to match the training set to a desired distribution, such as the dual distribution (or the test distribution). These algorithms can be applied to very large datasets, and we show how they lead to improved performance on a large real dataset such as the Netflix dataset. Their low computational complexity is the main reason for their advantage over previous algorithms proposed in the covariate shift literature.

In the second part of the thesis we apply machine learning to the problem of behavior recognition. We develop a specific behavior classifier to study fly aggression, and we develop a system that allows behavior in videos of animals to be analyzed with minimal supervision. The system, which we call CUBA (Caltech Unsupervised Behavior Analysis), detects movemes, actions, and stories from time series describing the position of animals in videos. The method summarizes the data and provides biologists with a mathematical tool to test new hypotheses. Other benefits of CUBA include finding classifiers for specific behaviors without the need for annotation, as well as providing means to discriminate groups of animals, for example according to their genetic line.

Relevance: 100.00%

Abstract:

This article compares the out-of-sample predictive ability of a New Keynesian DSGE (Dynamic Stochastic General Equilibrium) model, specified and estimated for Brazil, with that of a Vector Autoregressive (VAR) model and of a Bayesian Vector Autoregressive (BVAR) model. The article innovates relative to other similar studies for Brazil (Castro et al. (2011) and Caetano and Moura (2013)) by choosing a specification for the DSGE model that, by allowing the use of a richer information set, made it possible to compute the predictive ability of the DSGE from forecasts that are genuinely out-of-sample. Moreover, unlike other articles based on Brazilian data, it assesses the extent to which the responses of the variables to monetary policy and exchange rate shocks obtained from the DSGE model resemble those of a BVAR estimated with consistently developed Bayesian procedures. The estimated DSGE model is similar to the one used by Justiniano and Preston (2010) and Alpanda (2010). The BVAR model was estimated using a methodology similar to that developed by Sims and Zha (1998), Waggoner and Zha (2003), and Ramírez, Waggoner and Zha (2007). The results show that the DSGE model is able to generate, for some variables, forecasts that are competitive with those of the rival VAR and BVAR models. In addition, the responses of the variables to monetary and exchange rate shocks are quite similar in the DSGE and BVAR models.
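
The kind of out-of-sample comparison described can be sketched as a rolling, expanding-window exercise. The code below compares a least-squares VAR(1) with a univariate AR(1) benchmark on simulated data and reports root-mean-squared forecast errors; it illustrates only the evaluation design, not the paper's DSGE or BVAR estimation.

```python
import numpy as np

def var1_fit(Y):
    """Least-squares VAR(1): Y_t = c + A Y_{t-1} + e_t, Y stored with samples as rows."""
    X = np.hstack([np.ones((len(Y) - 1, 1)), Y[:-1]])
    B, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)
    return B                                    # shape (k+1, k): intercepts then A'

def var1_forecast(B, y_last):
    return np.hstack([1.0, y_last]) @ B

# simulated bivariate data standing in for, e.g., inflation and the output gap
rng = np.random.default_rng(0)
A = np.array([[0.6, 0.1], [0.2, 0.5]])
Y = np.zeros((200, 2))
for t in range(1, 200):
    Y[t] = A @ Y[t - 1] + rng.normal(scale=0.1, size=2)

# expanding-window one-step-ahead forecasts over the last 50 observations
errors_var, errors_ar = [], []
for t in range(150, 199):
    B = var1_fit(Y[: t + 1])
    errors_var.append(Y[t + 1] - var1_forecast(B, Y[t]))
    # naive univariate AR(1) benchmark fitted equation by equation
    ar_fc = [np.polyfit(Y[:t, j], Y[1 : t + 1, j], 1) @ [Y[t, j], 1.0] for j in range(2)]
    errors_ar.append(Y[t + 1] - np.array(ar_fc))

print("VAR(1) RMSE:", np.sqrt(np.mean(np.square(errors_var), axis=0)))
print("AR(1)  RMSE:", np.sqrt(np.mean(np.square(errors_ar), axis=0)))
```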

Relevance: 100.00%

Abstract:

Semisupervised dimensionality reduction has been attracting much attention, as it not only uses labeled and unlabeled data simultaneously but also works well in the out-of-sample setting. This paper proposes an effective approach to semisupervised dimensionality reduction through label propagation and label regression. Different from previous efforts, the new approach propagates the label information from labeled to unlabeled data with a well-designed mechanism of random walks, in which outliers are effectively detected and the virtual labels obtained for the unlabeled data can be well encoded in a weighted regression model. These virtual labels are then regressed with a linear model to calculate the projection matrix for dimensionality reduction. By this means, when the manifold or clustering assumption of the data is satisfied, the labels of the labeled data can be correctly propagated to the unlabeled data; thus, the proposed approach uses the labeled and unlabeled data more effectively than previous work. Experiments are carried out on several databases, and the advantage of the new approach is clearly demonstrated.
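
A compact sketch of the propagate-then-regress idea: random-walk label propagation on a k-nearest-neighbour graph followed by a linear regression of the propagated (virtual) labels on the features. The graph construction, clamping and regularisation choices here are assumptions, not the paper's exact mechanism.

```python
import numpy as np

def knn_affinity(X, k=10, sigma=1.0):
    """Gaussian affinities restricted to k nearest neighbours (symmetrised)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma**2))
    np.fill_diagonal(W, 0.0)
    keep = np.argsort(-W, axis=1)[:, :k]
    mask = np.zeros_like(W, dtype=bool)
    mask[np.arange(len(X))[:, None], keep] = True
    return np.where(mask | mask.T, W, 0.0)

def propagate_labels(W, Y_init, labeled_mask, alpha=0.9, n_iter=200):
    """Random-walk propagation F <- alpha*P F + (1-alpha)*Y_init, labeled rows clamped."""
    P = W / W.sum(axis=1, keepdims=True)
    F = Y_init.copy()
    for _ in range(n_iter):
        F = alpha * (P @ F) + (1 - alpha) * Y_init
        F[labeled_mask] = Y_init[labeled_mask]        # clamp the known labels
    return F

# toy data: two Gaussian classes, only a handful of labeled points
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(3, 1, (100, 5))])
labels = np.repeat([0, 1], 100)
labeled_mask = np.zeros(200, dtype=bool)
labeled_mask[rng.choice(200, 10, replace=False)] = True

Y_init = np.zeros((200, 2))
Y_init[labeled_mask, labels[labeled_mask]] = 1.0      # one-hot labels for labeled points only

F = propagate_labels(knn_affinity(X), Y_init, labeled_mask)

# regress the virtual labels on the features to obtain a linear projection for new samples
P_proj, *_ = np.linalg.lstsq(X, F, rcond=None)        # columns span the reduced space
X_embedded = X @ P_proj
```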

Relevance: 100.00%

Abstract:

Orthogonal neighborhood-preserving projection (ONPP) is a recently developed orthogonal linear algorithm for overcoming the out-of-sample problem of the well-known manifold learning algorithm locally linear embedding (LLE). It has been shown that ONPP is a strong analyzer of high-dimensional data. However, when applied to classification problems in a supervised setting, ONPP focuses only on the intraclass geometrical information while ignoring the interaction of samples from different classes. To enhance the performance of ONPP in classification, a new algorithm termed discriminative ONPP (DONPP) is proposed in this paper. DONPP 1) takes into account both intraclass and interclass geometries; 2) considers the neighborhood information of interclass relationships; and 3) follows the orthogonality property of ONPP. Furthermore, DONPP is extended to the semisupervised case, yielding semisupervised DONPP (SDONPP), which uses unlabeled samples to improve the classification accuracy of the original DONPP. Empirical studies demonstrate the effectiveness of both DONPP and SDONPP.
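
For reference, the basic unsupervised ONPP projection can be sketched as follows: LLE reconstruction weights are computed on the training samples, and an orthogonal projection is taken from the bottom eigenvectors of X'MX with M = (I - W)'(I - W), samples stored as rows. The neighbourhood size and regularisation below are illustrative, and the discriminative and semisupervised extensions of the paper are not reproduced.

```python
import numpy as np

def lle_weights(X, k=8, reg=1e-3):
    """Reconstruction weights: each row of X approximated by its k nearest neighbours."""
    n = len(X)
    W = np.zeros((n, n))
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    for i in range(n):
        nbrs = np.argsort(d2[i])[1 : k + 1]
        Z = X[nbrs] - X[i]                        # centre the neighbours on x_i
        G = Z @ Z.T + reg * np.trace(Z @ Z.T) * np.eye(k)
        w = np.linalg.solve(G, np.ones(k))
        W[i, nbrs] = w / w.sum()                  # weights sum to one
    return W

def onpp(X, W, dim=2):
    """Orthogonal projection minimising the LLE reconstruction error in the embedded space."""
    n = len(X)
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    eigvals, eigvecs = np.linalg.eigh(X.T @ M @ X)
    return eigvecs[:, :dim]                       # bottom eigenvectors form the orthogonal basis

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 10))
V = onpp(X, lle_weights(X))
X_low = X @ V                                     # in-sample embedding
x_new_low = rng.normal(size=10) @ V               # out-of-sample point mapped by the same V
```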

Relevance: 100.00%

Abstract:

To address the difficult problem of sensor fault diagnosis in nonlinear systems, a new method based on locally linear embedding (LLE) is proposed, which solves the feature-mapping problem for nonlinear data. First, the intrinsic-dimension estimate based on fractal dimension estimation is improved, with the linear region determined automatically through linear fitting. Then, fault states are linked to the spatial distribution of the data: faults are detected by determining whether data points fall inside a hypersphere in the embedded space. In this process, the construction of the hypersphere is combined with the kernel-based out-of-sample extension of the LLE algorithm, which greatly reduces the computational load and improves the real-time performance of the algorithm, thereby providing a new and effective method for fault diagnosis of complex nonlinear sensors.
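
A rough illustration of the detection scheme, using scikit-learn's LLE with its built-in out-of-sample transform in place of the kernel-based extension described, and a simple distance-to-centre hypersphere threshold learned from normal data; all data and thresholds are synthetic.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(0)

# synthetic "normal" sensor data lying on a curved 1-D manifold embedded in 3-D
t = rng.uniform(0, 3 * np.pi, 500)
X_normal = np.column_stack([np.sin(t), np.cos(t), 0.1 * t]) + rng.normal(0, 0.02, (500, 3))

# fit LLE on normal data; transform() provides the out-of-sample mapping for new points
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2).fit(X_normal)
Y_normal = lle.transform(X_normal)

# hypersphere detector: centre and radius estimated from the embedded normal data
centre = Y_normal.mean(axis=0)
radius = np.quantile(np.linalg.norm(Y_normal - centre, axis=1), 0.99)

def is_faulty(x_new):
    """Flag a new sensor reading whose embedding falls outside the hypersphere."""
    y = lle.transform(x_new.reshape(1, -1))[0]
    return np.linalg.norm(y - centre) > radius

# membership test for new readings (a training-like point and an arbitrary new one)
print(is_faulty(X_normal[0]))
print(is_faulty(np.array([1.2, -0.9, 0.7])))
```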