952 results for Distributed Lag Non-linear Models
Abstract:
We consider the issue of assessing the influence of observations in the class of Birnbaum-Saunders nonlinear regression models, which is useful in lifetime data analysis. Our results generalize those of Galea et al. [8], which are confined to Birnbaum-Saunders linear regression models. Several influence methods, such as local influence, total local influence of an individual, and generalized leverage, are discussed. Additionally, the normal curvatures for studying local influence are derived under several perturbation schemes. We also present an application to a real fatigue data set.
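As a rough illustration of the local-influence machinery mentioned above, the sketch below computes the normal curvature C_d = 2|d' Δ' (-L̈)^{-1} Δ d| under a case-weight perturbation, but for an ordinary normal linear model with sigma^2 treated as fixed at its estimate, not for the Birnbaum-Saunders nonlinear model of the paper; the data and setup are simulated assumptions for illustration only.

```python
# Illustrative local-influence computation for a normal linear model under a
# case-weight perturbation (Cook-type curvature), NOT the Birnbaum-Saunders
# derivation in the paper; sigma^2 is held fixed at its estimate for simplicity.
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([1.0, 2.0, -1.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)
y[10] += 4.0                                   # plant one influential case

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ beta_hat                           # residuals
sigma2 = e @ e / (n - p)
H = X @ np.linalg.solve(X.T @ X, X.T)          # hat matrix

# Curvature matrix B = Delta' (-Lddot)^{-1} Delta for case weights:
# B[i, j] = e_i * h_ij * e_j / sigma2
B = np.outer(e, e) * H / sigma2

eigval, eigvec = np.linalg.eigh(B)
C_max = 2 * eigval[-1]                         # maximum normal curvature
d_max = eigvec[:, -1]                          # direction of maximum curvature
print("C_max =", round(C_max, 2))
print("most influential case:", int(np.argmax(np.abs(d_max))))
```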
Abstract:
In this paper we discuss bias-corrected estimators for the regression and dispersion parameters in an extended class of dispersion models (Jorgensen, 1997b). This class extends the regular dispersion models by letting the dispersion parameter vary across observations, and contains the dispersion models as a particular case. General formulae for the O(n^-1) bias are obtained explicitly in dispersion models with dispersion covariates, which generalize previous results obtained by Botter and Cordeiro (1998), Cordeiro and McCullagh (1991), Cordeiro and Vasconcellos (1999), and Paula (1992). The practical usefulness of the formulae is that closed-form expressions for the O(n^-1) biases of the maximum likelihood estimators of the regression and dispersion parameters can be derived whenever the information matrix has closed form. Various expressions for the O(n^-1) biases are given for special models. The formulae are also convenient for numerical purposes because they require only a supplementary weighted linear regression. We further compare these bias-corrected estimators with two alternative estimators, also bias-free to order O(n^-1), that are based on bootstrap methods. These estimators are compared by simulation. (C) 2011 Elsevier B.V. All rights reserved.
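The paper contrasts analytical O(n^-1) corrections with bootstrap-based estimators that are bias-free to the same order. A minimal sketch of the bootstrap route (theta_tilde = 2*theta_hat minus the mean of parametric bootstrap replicates) is given below for a toy variance estimator, which stands in, as an assumption, for the paper's dispersion-model setting.

```python
# Minimal parametric-bootstrap bias correction: theta_tilde = 2*theta_hat - mean(theta_hat*).
# The MLE of a normal variance is used as a toy stand-in, not the paper's general formulae.
import numpy as np

rng = np.random.default_rng(1)
n, sigma2_true = 20, 4.0
x = rng.normal(0.0, np.sqrt(sigma2_true), size=n)

theta_hat = np.mean((x - x.mean()) ** 2)       # MLE of the variance, biased by O(n^-1)

B = 2000
boot = np.empty(B)
for b in range(B):
    xb = rng.normal(x.mean(), np.sqrt(theta_hat), size=n)   # parametric resample
    boot[b] = np.mean((xb - xb.mean()) ** 2)

theta_bc = 2 * theta_hat - boot.mean()         # bootstrap bias-corrected estimate
print(f"MLE: {theta_hat:.3f}  bias-corrected: {theta_bc:.3f}  (true {sigma2_true})")
```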
Abstract:
In this paper we obtain asymptotic expansions up to order n^-1/2 for the nonnull distribution functions of the likelihood ratio, Wald, score and gradient test statistics in exponential family nonlinear models (Cordeiro and Paula, 1989), under a sequence of Pitman alternatives. The asymptotic distributions of all four statistics are obtained for testing a subset of regression parameters and for testing the dispersion parameter, thus generalising the results given in Cordeiro et al. (1994) and Ferrari et al. (1997). We also present Monte Carlo simulations to compare the finite-sample performance of these tests. (C) 2010 Elsevier B.V. All rights reserved.
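For readers unfamiliar with the four statistics being compared, the sketch below computes the likelihood ratio, Wald, score and gradient statistics for H0: lambda = lambda0 in an i.i.d. exponential sample, a deliberately simple exponential-family example rather than the nonlinear regression setting of the paper; the data are simulated.

```python
# Likelihood ratio, Wald, score and gradient statistics for H0: lambda = lambda0
# in an i.i.d. exponential model (a simple exponential-family example).
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
lam0, n = 1.0, 80
x = rng.exponential(scale=1 / 1.3, size=n)     # true rate 1.3, so H0 is false

lam_hat = 1 / x.mean()                          # MLE of the rate

def loglik(lam):                                # exponential log-likelihood
    return n * np.log(lam) - lam * x.sum()

score0 = n / lam0 - x.sum()                     # score U(lambda0)
info = lambda lam: n / lam ** 2                 # Fisher information

LR    = 2 * (loglik(lam_hat) - loglik(lam0))
Wald  = (lam_hat - lam0) ** 2 * info(lam_hat)
Score = score0 ** 2 / info(lam0)
Grad  = score0 * (lam_hat - lam0)               # gradient statistic

for name, stat in [("LR", LR), ("Wald", Wald), ("Score", Score), ("Gradient", Grad)]:
    print(f"{name:8s} {stat:6.3f}  p = {chi2.sf(stat, df=1):.4f}")
```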
Abstract:
Although cellulose acetates, CAs, are extensively employed, there is scant information about the systematic dependence of their properties on their degree of substitution, DS; this is the subject of the present work. Nine CA samples with DS from 0.83 to 3.0 were synthesized and their films were prepared. The following solvatochromic probes were employed to determine the empirical polarity, E_T(33), acidity (alpha), basicity (beta), and dipolarity/polarizability (pi*) of the cast films: 2,6-dichloro-4-(2,4,6-triphenyl-pyridinium-1-yl) phenolate, WB; 4-nitroaniline; 4-nitroanisole; 4-nitro-N,N-dimethylaniline; 2,6-diphenyl-4-(2,4,6-triphenyl-pyridinium-1-yl)phenolate, RB. Additionally, two systems, ethanol plus ethyl acetate (EtOH-EtAc) and cellulose plus cellulose triacetate, CTA, were employed as models for CAs of different DS. Regarding the model systems, the following was observed: (i) for EtOH-EtAc, the dependence of all solvatochromic parameters on the "equivalent DS" of the binary mixture was non-linear because of preferential solvation; (ii) the dependence of E_T(33) on the equivalent DS of the cellulose-CTA films is linear, but the slope is smaller than that of the corresponding plot for CAs. This is attributed to more efficient hydrogen bonding in the model system, a conclusion corroborated by IR measurements. The dependence of the solvatochromic parameters of CAs on their DS is described by simple equations, a consequence of the substitution of OH by the ester group. The thermal properties of bulk CA samples were investigated by DSC and TGA; their dependence on DS is also described by simple equations. The relevance of these data to the processing and applications of CAs is briefly discussed.
Abstract:
A literature survey and a theoretical study were performed to characterize residential chimney conditions for flue gas flow measurements. The focus is on Pitot-static probes, to give a sufficient basis for the development and calibration of a velocity pressure averaging probe suitable for continuous dynamic (i.e. non-steady-state) measurement of the low flow velocities present in residential chimneys. The flow conditions do not meet the requirements set in ISO 10780 and ISO 3966 for Pitot-static probe measurements, so those methods and their stated uncertainties are not valid. The flow velocities in residential chimneys from a heating boiler under normal operating conditions are shown to be so low that, in some conditions, they void the inviscid-flow assumption that justifies the use of the quadratic Bernoulli equation. A non-linear, Reynolds-number-dependent calibration coefficient that corrects for viscous effects is needed to avoid significant measurement errors. The wide range of flow velocities during normal boiler operation also means that the flow type changes from laminar, across the laminar-turbulent transition region, to fully turbulent, resulting in significant changes of the velocity profile during dynamic measurements. In addition, the short duct lengths (and changes of flow direction and duct shape) used in practice mean that the measurements are made in the hydrodynamic entrance region, where the flow velocity profiles are most likely neither symmetrical nor fully developed. A measurement method insensitive to velocity profile changes is thus needed if the flow velocity profile cannot otherwise be determined or predicted with reasonable accuracy over the whole measurement range. Because of particulate matter and condensing fluids in the flue gas, it is beneficial if the probe can be constructed so that it can easily be taken out for cleaning, and equipped with a locking mechanism to always ensure the same alignment in the duct without affecting the calibration. The literature implies that there may be a significant time lag in the measurement of low flow rates due to viscous effects in the internal impact-pressure passages of Pitot probes, and its significance in the discussed application should be studied experimentally. The differential pressures measured with Pitot-static probes in residential chimney flows are so low that the calibration and stated uncertainties of commercially available pressure transducers are not adequate. The pressure transducers should be calibrated specifically for the application, preferably in combination with the probe, and the significance of all error sources should be investigated carefully. Care should also be taken with the temperature measurement, e.g. by averaging several sensors, as significant temperature gradients may be present in flue gas ducts.
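To make the viscous-correction point concrete, here is a minimal sketch of recovering velocity from a Pitot-static differential pressure with a Reynolds-number-dependent calibration coefficient, solved by fixed-point iteration because Re depends on the unknown velocity. The Barker-type correction C(Re) = 1 + a/Re, the value of a, the reference length and all flow conditions are illustrative assumptions, not a published probe calibration.

```python
# Velocity from a Pitot-static differential pressure with a Reynolds-number-dependent
# correction, v = sqrt(2*dp / (rho * C(Re))), solved by fixed-point iteration.
import numpy as np

def pressure_coefficient(Re):
    # hypothetical Barker-type viscous correction, C = 1 + a/Re (a is illustrative)
    return 1.0 + 8.0 / max(Re, 1e-9)

def pitot_velocity(dp, rho, mu, D, tol=1e-8, max_iter=100):
    """dp [Pa], rho [kg/m^3], mu [Pa*s], D reference length of the probe tip [m]."""
    v = np.sqrt(2.0 * dp / rho)                # inviscid Bernoulli first guess
    Re = 0.0
    for _ in range(max_iter):
        Re = rho * v * D / mu
        v_new = np.sqrt(2.0 * dp / (rho * pressure_coefficient(Re)))
        if abs(v_new - v) < tol:
            return v_new, Re
        v = v_new
    return v, Re

# Flue-gas-like conditions (illustrative numbers only)
v, Re = pitot_velocity(dp=0.4, rho=0.9, mu=2.2e-5, D=0.004)
print(f"velocity ~ {v:.3f} m/s at probe Re ~ {Re:.0f}")
```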
Abstract:
This thesis develops and evaluates statistical methods for different types of genetic analyses, including quantitative trait loci (QTL) analysis, genome-wide association studies (GWAS), and genomic evaluation. The main contribution of the thesis is to provide novel insights into modeling genetic variance, especially via random effects models. In variance-component QTL analysis, a full likelihood model accounting for uncertainty in the identity-by-descent (IBD) matrix was developed. It was found to correctly adjust the bias in genetic variance component estimation and to gain power in QTL mapping in terms of precision. Double hierarchical generalized linear models, and a non-iterative simplified version, were implemented and applied to fit data for an entire genome. These whole-genome models were shown to perform well in both QTL mapping and genomic prediction. A re-analysis of a publicly available GWAS data set identified significant loci in Arabidopsis that control phenotypic variance instead of the mean, which validated the idea of variance-controlling genes. The work in the thesis is accompanied by R packages available online, including a general statistical tool for fitting random effects models (hglm), an efficient generalized ridge regression for high-dimensional data (bigRR), a double-layer mixed model for genomic data analysis (iQTL), a stochastic IBD matrix calculator (MCIBD), a computational interface for QTL mapping (qtl.outbred), and a GWAS analysis tool for mapping variance-controlling loci (vGWAS).
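As a hint of what generalized ridge regression for high-dimensional marker data looks like (the idea behind tools such as bigRR, though not their API), the sketch below fits beta_hat = (X'X + diag(lambda))^{-1} X'y on simulated genotypes and then re-fits with marker-specific penalties; the penalty update rule and all data are illustrative assumptions only.

```python
# Generalized ridge regression with a separate penalty per predictor, on toy genotypes.
import numpy as np

def generalized_ridge(X, y, lambdas):
    # beta_hat = (X'X + diag(lambda))^{-1} X'y
    return np.linalg.solve(X.T @ X + np.diag(lambdas), X.T @ y)

rng = np.random.default_rng(3)
n, p = 100, 500                                      # p >> n, as with SNP markers
X = rng.choice([0.0, 1.0, 2.0], size=(n, p))         # toy genotype coding
beta = np.zeros(p)
beta[:5] = rng.normal(0, 1, 5)                       # a few causal markers
y = X @ beta + rng.normal(0, 1, n)

lambdas = np.full(p, 50.0)                           # common penalty first
b_ridge = generalized_ridge(X, y, lambdas)

# Heteroscedastic-penalty re-fit in the spirit of a double-layer model:
# smaller penalty for markers with larger estimated effects (illustrative only).
lambdas2 = 50.0 / np.maximum(b_ridge ** 2 / np.mean(b_ridge ** 2), 1e-3)
b_update = generalized_ridge(X, y, lambdas2)
print("largest |effects| at markers:", np.argsort(-np.abs(b_update))[:5])
```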
Abstract:
This dissertation synthesizes previous research and develops a model for the study of strategic development, strategic congruence and management control. The model is used to analyze a longitudinal case study of the Swedish engineering company Atlas Copco. Employing contingency theory, the study confirms that the long-term survival of a company requires adaptation to contingencies. Three levels of strategy are examined: corporate, business and functional. Previous research suggests that consistency between these levels (strategic congruence) is necessary for a company to be competitive. The dissertation challenges this proposition by adopting a life-cycle perspective and analyzing strategic congruence in the different phases of a life cycle. It also studies management control from a life-cycle perspective. In this context, two types of management control are examined: formal and informal. From a longitudinal perspective, the study further discusses how these types interact during organizational life cycles. The dissertation shows that strategic development is more complex than previous studies have indicated. It is a long, complex and non-linear process whose results cannot always be predicted. Previous models of strategy and management control are based on simple relationships and rarely take into account the fact that companies often go through different phases of strategic development. The case study shows that strategic incongruence may occur at times during organizational life cycles. Furthermore, the use of management control varies over time. In the maturity phase formal control is in focus, while informal control plays a larger role in both the introduction and decline phases. Research on strategy and management control has intensified in recent years. Still, there is a gap regarding the coordination of complex corporate structures. The present study contributes further knowledge on how companies manage long-term strategic development. Few studies deal with more than two levels of strategy. Moreover, the present study addresses the need to understand strategic congruence from a life-cycle perspective. This is particularly relevant in practice, when managers in large companies face difficult issues on which they expect business research to assist them in the decision-making process.
Abstract:
We present a new version of the hglm package for fitting hierarchical generalized linear models (HGLM) with spatially correlated random effects. A CAR family for conditional autoregressive random effects was implemented. Eigen decomposition of the matrix describing the spatial structure (e.g. the neighborhood matrix) was used to transform the CAR random effects into an independent, but heteroscedastic, Gaussian random effect. A linear predictor is fitted for the random effect variance to estimate the parameters in the CAR model. This gives a computationally efficient algorithm for moderately sized problems (e.g. n < 5000).
Abstract:
We present a new version (> 2.0) of the hglm package for fitting hierarchical generalized linear models (HGLMs) with spatially correlated random effects. CAR() and SAR() families for conditional and simultaneous autoregressive random effects were implemented. Eigen decomposition of the matrix describing the spatial structure (e.g., the neighborhood matrix) was used to transform the CAR/SAR random effects into an independent, but heteroscedastic, Gaussian random effect. A linear predictor is fitted for the random effect variance to estimate the parameters in the CAR and SAR models. This gives a computationally efficient algorithm for moderately sized problems.
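The eigen-decomposition step described in the two abstracts above can be written out directly: with a symmetric neighborhood matrix D = V diag(lambda) V', rotating the random effects by V' makes the CAR covariance tau*(I - rho*D)^{-1} diagonal, with variances tau/(1 - rho*lambda_i), and tau/(1 - rho*lambda_i)^2 in the SAR case. The sketch below applies this parametrisation to a toy lattice; it is an illustration of the transform, not the hglm internals.

```python
# Rotate CAR/SAR random effects into an independent but heteroscedastic basis
# via the eigen decomposition of a symmetric neighborhood matrix D.
import numpy as np

def car_sar_variances(D, rho, tau, family="CAR"):
    lam, V = np.linalg.eigh(D)                 # D must be symmetric
    if family == "CAR":
        var = tau / (1.0 - rho * lam)          # Var(u*_i) for u* = V' u
    else:                                      # SAR
        var = tau / (1.0 - rho * lam) ** 2
    return V, var

# 4-neighbour adjacency on a 5x5 lattice as a toy neighborhood matrix
m = 5
D = np.zeros((m * m, m * m))
for i in range(m):
    for j in range(m):
        k = i * m + j
        for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            if 0 <= i + di < m and 0 <= j + dj < m:
                D[k, (i + di) * m + (j + dj)] = 1.0

V, var = car_sar_variances(D, rho=0.15, tau=1.0, family="CAR")
print("rotated random-effect variances range:",
      round(var.min(), 3), "to", round(var.max(), 3))
```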
Abstract:
Whether human capital increases or decreases wage uncertainty is an open question from an empirical standpoint. Yet, most policy prescriptions regarding human capital formation are based on models that impose riskiness on this type of investment. We slightly deviate from the rest of the literature by allowing for non-linear income taxes in a two period model. This enables us to derive prescriptions that are robust to the risk characteristics of human capital: savings should be discouraged, human capital investments encouraged and both types of investment driven to an efficient level from an aggregate perspective. These prescriptions are also robust to what choices are observed, even though the policy instruments used to implement them are not.
Abstract:
Recent studies indicate that several strategies implemented by hedge funds generate returns with non-linear characteristics. Following the suggestions in the paper by Agarwal and Naik (2004), this work shows that a number of hedge funds within the Brazilian investment-fund industry exhibit returns that resemble those of a strategy of call and put options on the Bovespa market index. Starting from a factor model, we introduce an index referenced on option returns so that this factor can explain the non-linear character of the funds' returns better than the traditional risk factors.
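A minimal sketch of the kind of option-based factor regression described above: simulated fund returns are regressed on the market factor alone and then on the market factor plus an out-of-the-money call payoff factor, in the spirit of Agarwal and Naik (2004); all series, strikes and coefficients are simulated assumptions, not estimates for actual Brazilian funds.

```python
# Factor regression with an option-based factor: fund returns on the market factor
# plus the return of an out-of-the-money call written on the index (simulated data).
import numpy as np

rng = np.random.default_rng(4)
T = 240                                              # months
mkt = rng.normal(0.01, 0.06, T)                      # index-like returns
call_factor = np.maximum(mkt - 0.03, 0.0) - 0.005    # OTM call payoff minus premium

# A fund whose strategy is partly option-like in the market
fund = 0.002 + 0.4 * mkt + 0.8 * call_factor + rng.normal(0, 0.01, T)

X_lin = np.column_stack([np.ones(T), mkt])                  # linear factor model
X_opt = np.column_stack([np.ones(T), mkt, call_factor])     # with option factor

for name, X in [("linear", X_lin), ("with option factor", X_opt)]:
    b, res, *_ = np.linalg.lstsq(X, fund, rcond=None)
    r2 = 1 - res[0] / np.sum((fund - fund.mean()) ** 2)
    print(f"{name:20s} R^2 = {r2:.3f}")
```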
Abstract:
In this study, we verify the existence of predictability in the Brazilian equity market. Unlike other studies in the same vein, which evaluate the original series of each stock, we evaluate synthetic series created on the basis of linear models of stocks. Following Burgess (1999), we use the "stepwise regression" model for the formation of models for each stock. We then use the variance ratio profile together with a Monte Carlo simulation to select models with potential predictability. Unlike Burgess (1999), we carry out White's (2000) Reality Check in order to verify the existence of positive returns in the out-of-sample period. We use the strategies proposed by Sullivan, Timmermann & White (1999) and Hsu & Kuan (2005), amounting to 26,410 simulated strategies. Finally, using the bootstrap methodology with 1,000 simulations, we find strong evidence of predictability in the models, even after accounting for transaction costs.
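A stripped-down version of the variance-ratio selection step, assuming simulated returns rather than the synthetic stock series used in the study: the profile VR(q) = Var(q-period return) / (q * Var(1-period return)) is compared with a Monte Carlo band generated under the random-walk null.

```python
# Variance-ratio profile against a Monte Carlo band under the random-walk null.
import numpy as np

def variance_ratio_profile(r, qs):
    out = []
    for q in qs:
        rq = np.convolve(r, np.ones(q), mode="valid")   # overlapping q-period returns
        out.append(rq.var(ddof=1) / (q * r.var(ddof=1)))
    return np.array(out)

rng = np.random.default_rng(5)
T, qs = 1000, range(2, 21)
r = rng.normal(0, 0.02, T)
r[1:] += 0.25 * r[:-1]                      # mild serial dependence -> VR drifts above 1

vr = variance_ratio_profile(r, qs)

# Monte Carlo band under i.i.d. (random-walk) returns
sims = np.array([variance_ratio_profile(rng.normal(0, 0.02, T), qs) for _ in range(500)])
lo, hi = np.percentile(sims, [2.5, 97.5], axis=0)

outside = np.sum((vr < lo) | (vr > hi))
print(f"{outside} of {len(list(qs))} variance ratios fall outside the 95% MC band")
```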
Abstract:
The objective of this article is to study (understand and forecast) spot metal price levels and changes at monthly, quarterly, and annual horizons. The data to be used consist of metal-commodity prices at a monthly frequency from 1957 to 2012, from the International Financial Statistics of the IMF, for individual metal series. We will also employ the (relatively large) list of co-variates used in Welch and Goyal (2008) and in Hong and Yogo (2009), which are available for download. Regarding short- and long-run comovement, we will apply the techniques and tests proposed in the common-feature literature to build parsimonious VARs, which possibly entail quasi-structural relationships between different commodity prices and/or between a given commodity price and its potential demand determinants. These parsimonious VARs will later be used as forecasting models to be combined to yield optimal forecasts of metal-commodity prices. Regarding out-of-sample forecasts, we will use a variety of models (linear and non-linear, single-equation and multivariate) and a variety of co-variates to forecast the returns and prices of metal commodities. With the forecasts of a large number of models (N large) and a large number of time periods (T large), we will apply the techniques put forth by the common-feature literature on forecast combinations. The main contribution of this paper is to understand the short-run dynamics of metal prices. We show theoretically that there must be a positive correlation between metal-price variation and industrial-production variation if metal supply is held fixed in the short run and demand is chosen optimally, taking into account optimal production for the industrial sector. This is simply a consequence of the derived-demand model for cost-minimizing firms. Our empirical evidence fully supports this theoretical result, with overwhelming evidence that cycles in metal prices are synchronized with those in industrial production. This evidence is stronger for the global economy but also holds, to a lesser degree, for the U.S. economy. Regarding forecasting, we show that models incorporating (short-run) common-cycle restrictions perform better than unrestricted models, with an important role for industrial production as a predictor of metal-price variation. Still, in most cases, forecast combination techniques outperform individual models.
Abstract:
The objective of this article is to study (understand and forecast) spot metal price levels and changes at monthly, quarterly, and annual frequencies. The data consist of metal-commodity prices at monthly and quarterly frequencies from 1957 to 2012, extracted from the IFS, and annual data for 1900-2010 provided by the U.S. Geological Survey (USGS). We also employ the (relatively large) list of co-variates used in Welch and Goyal (2008) and in Hong and Yogo (2009). We investigate short- and long-run comovement by applying the techniques and tests proposed in the common-feature literature. One of the main contributions of this paper is to understand the short-run dynamics of metal prices. We show theoretically that there must be a positive correlation between metal-price variation and industrial-production variation if metal supply is held fixed in the short run and demand is chosen optimally, taking into account optimal production for the industrial sector. This is simply a consequence of the derived-demand model for cost-minimizing firms. Our empirical evidence fully supports this theoretical result, with overwhelming evidence that cycles in metal prices are synchronized with those in industrial production. This evidence is stronger for the global economy but also holds, to a lesser degree, for the U.S. economy. Regarding out-of-sample forecasts, our main contribution is to show the benefits of forecast-combination techniques, which outperform individual-model forecasts, including the random-walk model. We use a variety of models (linear and non-linear, single-equation and multivariate) and a variety of co-variates and functional forms to forecast the returns and prices of metal commodities. Using a large number of models (N large) and a large number of time periods (T large), we apply the techniques put forth by the common-feature literature on forecast combinations. Empirically, we show that models incorporating (short-run) common-cycle restrictions perform better than unrestricted models, with an important role for industrial production as a predictor of metal-price variation.
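To illustrate the forecast-combination result on a toy scale, the sketch below compares an AR(1) model, a model driven by an industrial-production proxy, their equal-weight combination, and a random walk, by out-of-sample RMSE on simulated series; the models and data are stand-ins, as assumptions, for the paper's VAR/common-cycle framework.

```python
# Equal-weight forecast combination versus individual models and a random walk,
# evaluated by out-of-sample RMSE on a simulated "metal price" series.
import numpy as np

rng = np.random.default_rng(6)
T = 300
ip = np.cumsum(rng.normal(0.002, 0.01, T))             # industrial-production proxy
price = 0.7 * ip + np.cumsum(rng.normal(0, 0.02, T))   # price co-moves with ip

def ar1_forecast(y, t):            # model 1: AR(1) on price changes
    dy = np.diff(y[:t])
    phi = np.dot(dy[1:], dy[:-1]) / np.dot(dy[:-1], dy[:-1])
    return y[t - 1] + phi * dy[-1]

def ip_forecast(y, x, t):          # model 2: price change on lagged ip change
    dy, dx = np.diff(y[:t]), np.diff(x[:t])
    beta = np.dot(dy[1:], dx[:-1]) / np.dot(dx[:-1], dx[:-1])
    return y[t - 1] + beta * dx[-1]

start = 200
errs = {"AR(1)": [], "IP model": [], "combination": [], "random walk": []}
for t in range(start, T):
    f1, f2 = ar1_forecast(price, t), ip_forecast(price, ip, t)
    errs["AR(1)"].append(price[t] - f1)
    errs["IP model"].append(price[t] - f2)
    errs["combination"].append(price[t] - 0.5 * (f1 + f2))
    errs["random walk"].append(price[t] - price[t - 1])

for k, e in errs.items():
    print(f"{k:12s} RMSE = {np.sqrt(np.mean(np.square(e))):.4f}")
```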
Abstract:
This thesis consists of three essays on empirical tests of Phillips curves, IS curves, and the interaction between fiscal and monetary policy. The first essay ("Phillips Curves: An Encompassing Test") tests Phillips curves using an autoregressive distributed lag (ADL) specification that encompasses the Accelerationist Phillips Curve (APC), the New Keynesian Phillips Curve (NKPC), the Hybrid Phillips Curve (HPC), and the Sticky-Information Phillips Curve (SIPC). We use data for the United States (1985Q1-2007Q4) and Brazil (1996Q1-2012Q2), with the output gap and, alternatively, real marginal cost as the measure of inflationary pressure. The empirical evidence rejects the restrictions implied by the NKPC, the HPC, and the SIPC, but does not reject those of the APC. The second essay ("IS Curves: An Encompassing Test") tests IS curves using an ADL specification that encompasses the traditional Keynesian IS curve (KISC), the New Keynesian IS curve (NKISC), and the Hybrid IS curve (HISC). We use data for the United States (1985Q1-2007Q4) and Brazil (1996Q1-2012Q2). The empirical evidence rejects the restrictions implied by the NKISC and the HISC, but does not reject those of the KISC. The third essay ("The Effects of Fiscal Policy and Its Interactions with Monetary Policy") analyzes the effects of fiscal-policy shocks on the dynamics of the economy and the interaction between fiscal and monetary policy using SVAR models. We test the Fiscal Theory of the Price Level for Brazil by analyzing the response of public-sector liabilities to shocks in the primary surplus. Under the hybrid identification, we find that it is not possible to distinguish empirically between the Ricardian (Monetary Dominance) and non-Ricardian (Fiscal Dominance) regimes. However, using sign-restriction identification, there is evidence that the government followed a Ricardian (Monetary Dominance) regime from January 2000 to June 2008.
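A toy version of the encompassing strategy used in the first essay, assuming simulated data rather than the US or Brazilian samples: estimate an ADL regression of inflation on its own lags and the output gap, then Wald-test the accelerationist restriction that the lag coefficients sum to one. The lag order and data-generating process are illustrative assumptions, not the essay's exact specification.

```python
# ADL(2,0) Phillips-curve regression with a Wald test of the restriction b1 + b2 = 1.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(7)
T = 200
gap = rng.normal(0, 1, T)
pi = np.zeros(T)
for t in range(2, T):                       # data generated under the restriction
    pi[t] = 0.6 * pi[t - 1] + 0.4 * pi[t - 2] + 0.3 * gap[t] + rng.normal(0, 0.5)

# pi_t = c + b1*pi_{t-1} + b2*pi_{t-2} + g*gap_t + e_t
Y = pi[2:]
X = np.column_stack([np.ones(T - 2), pi[1:-1], pi[:-2], gap[2:]])
b = np.linalg.solve(X.T @ X, X.T @ Y)
resid = Y - X @ b
s2 = resid @ resid / (len(Y) - X.shape[1])
cov = s2 * np.linalg.inv(X.T @ X)

# H0: b1 + b2 = 1  (accelerationist restriction)
R, q = np.array([[0.0, 1.0, 1.0, 0.0]]), np.array([1.0])
diff = R @ b - q
W = float(diff @ np.linalg.solve(R @ cov @ R.T, diff))
print(f"Wald statistic = {W:.3f}, p-value = {chi2.sf(W, df=1):.3f}")
```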