997 results for Conditionally specified models
Abstract:
Interaction effects are usually modeled by means of moderated regression analysis. Structural equation models with non-linear constraints make it possible to estimate interaction effects while correcting for measurement error. Of the various specifications, Jöreskog and Yang's (1996, 1998), likely the most parsimonious, has been chosen and further simplified. Up to now, only direct effects have been specified, thus wasting much of the capability of the structural equation approach. This paper presents and discusses an extension of Jöreskog and Yang's specification that can handle direct, indirect and interaction effects simultaneously. The model is illustrated by a study of the effects of an interactive style of use of budgets on both company innovation and performance.
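As a concrete reference point, here is a minimal sketch of the moderated regression that the abstract takes as its starting point (ordinary least squares with a product term, not Jöreskog and Yang's SEM specification, which additionally corrects for measurement error). All variable names and coefficients are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)          # e.g. interactive use of budgets (illustrative)
z = rng.normal(size=n)          # e.g. a moderator variable (illustrative)
y = 1.0 + 0.5 * x + 0.3 * z + 0.4 * x * z + rng.normal(size=n)

# Moderated regression: the x*z column carries the interaction effect.
X = np.column_stack([np.ones(n), x, z, x * z])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["b0", "b1", "b2", "b3_interaction"], beta.round(2))))
```

Unlike the SEM approach discussed in the abstract, plain OLS on observed scores does not correct for measurement error in x and z.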
Abstract:
Canopy interception of incident precipitation is a critical component of the forest water balance during each of the four seasons. Models have been developed to predict precipitation interception from standard meteorological variables because of acknowledged difficulty in extrapolating direct measurements of interception loss from forest to forest. No known study has compared and validated canopy interception models for a leafless deciduous forest stand in the eastern United States. Interception measurements from an experimental plot in a leafless deciduous forest in northeastern Maryland (39°42'N, 75°5'W) for 11 rainstorms in winter and early spring 2004/05 were compared to predictions from three models. The Mulder model maintains a moist canopy between storms. The Gash model requires few input variables and is formulated for a sparse canopy. The WiMo model optimizes the canopy storage capacity for the maximum wind speed during each storm. All models showed marked underestimates and overestimates for individual storms when the measured ratio of interception to gross precipitation was far more or less, respectively, than the specified fraction of canopy cover. The models predicted the percentage of total gross precipitation (PG) intercepted to within the probable standard error (8.1%) of the measured value: the Mulder model overestimated the measured value by 0.1% of PG; the WiMo model underestimated by 0.6% of PG; and the Gash model underestimated by 1.1% of PG. The WiMo model’s advantage over the Gash model indicates that the canopy storage capacity increases logarithmically with the maximum wind speed. This study has demonstrated that dormant-season precipitation interception in a leafless deciduous forest may be satisfactorily predicted by existing canopy interception models.
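The reported WiMo result, that canopy storage capacity grows logarithmically with maximum wind speed, can be sketched numerically. The functional form S = S_min + k*ln(1 + u_max), the parameter values, and the crude Gash-like closure below are assumptions for illustration, not the papers' actual formulations:

```python
import numpy as np

def wimo_storage(s_min, k, u_max):
    """Canopy storage capacity grown logarithmically with storm maximum wind
    speed, the WiMo idea reported in the abstract. s_min and k are
    illustrative parameters, not values from the study."""
    return s_min + k * np.log1p(u_max)

def interception(p_gross, c, storage, evap_ratio=0.05):
    """Crude sparse-canopy interception for one storm: the covered fraction c
    of the canopy fills up to its storage, plus a small evaporative loss on
    rainfall after saturation (a Gash-like closure, simplified)."""
    return c * (min(p_gross, storage) + evap_ratio * max(p_gross - storage, 0.0))

for u in (2.0, 5.0, 10.0):
    s = wimo_storage(s_min=0.3, k=0.25, u_max=u)   # mm
    print(f"u_max={u:4.1f} m/s  S={s:.2f} mm  I={interception(12.0, 0.6, s):.2f} mm")
```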
Abstract:
Both historical and idealized climate model experiments are performed with a variety of Earth system models of intermediate complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures, overall 20th century trends in surface air temperature and carbon uptake are reasonably well simulated when compared to observed trends. Land carbon fluxes show much more variation between models than ocean carbon fluxes, and recent land fluxes appear to be slightly underestimated. It is possible that recent modelled climate trends or climate–carbon feedbacks are overestimated, resulting in too much land carbon loss, or that carbon uptake due to CO2 and/or nitrogen fertilization is underestimated. Several one-thousand-year-long, idealized 2 × and 4 × CO2 experiments are used to quantify standard model characteristics, including transient and equilibrium climate sensitivities, and climate–carbon feedbacks. The values from EMICs generally fall within the range given by general circulation models. Seven additional historical simulations, each including a single specified forcing, are used to assess the contributions of different climate forcings to the overall climate and carbon cycle response. The surface air temperature response is the linear sum of the responses to the individual forcings, while the carbon cycle response shows a non-linear interaction between land-use change and CO2 forcings for some models. Finally, the preindustrial portions of the last millennium simulations are used to assess historical model carbon-climate feedbacks. Given the specified forcing, there is a tendency for the EMICs to underestimate the drop in surface air temperature and CO2 between the Medieval Climate Anomaly and the Little Ice Age estimated from palaeoclimate reconstructions. This in turn could be a result of unforced variability within the climate system, uncertainty in the reconstructions of temperature and CO2, errors in the reconstructions of forcing used to drive the models, or the incomplete representation of certain processes within the models. Given the forcing datasets used in this study, the models calculate significant land-use emissions over the pre-industrial period. This implies that land-use emissions might need to be taken into account when making estimates of climate–carbon feedbacks from palaeoclimate reconstructions.
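The additivity finding for surface air temperature can be illustrated with a toy check of the kind the single-forcing runs enable. The synthetic response series below are stand-ins, not EMIC output:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1850, 2006)

# Stand-ins for single-forcing temperature responses (deg C); in the study
# these come from separate historical EMIC runs, one specified forcing each.
resp = {
    "CO2":      0.008 * (years - 1850),
    "solar":    0.1 * np.sin(2 * np.pi * (years - 1850) / 11.0),
    "volcanic": -0.3 * (rng.random(years.size) < 0.05),
}

all_forcings_run = sum(resp.values()) + rng.normal(0, 0.02, years.size)

# Linear-additivity check reported in the abstract: the all-forcing response
# should match the sum of the single-forcing responses.
residual = all_forcings_run - sum(resp.values())
print("RMS departure from additivity: %.3f K" % np.sqrt((residual ** 2).mean()))
```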
Abstract:
Although difference-stationary (DS) and trend-stationary (TS) processes have been subject to considerable analysis, there are no direct comparisons for each being the data-generation process (DGP). We examine incorrect choice between these models for forecasting, for both known and estimated parameters. Three sets of Monte Carlo simulations illustrate the analysis: they evaluate the biases in conventional standard errors when each model is mis-specified, compute the relative mean-square forecast errors of the two models under both DGPs, and investigate autocorrelated errors, so that each model can better approximate the converse DGP. The outcomes are surprisingly different from established results.
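A minimal Monte Carlo of the kind described, assuming the simplest version of each DGP (random walk with drift for DS, linear trend plus white noise for TS) and comparing h-step mean-square forecast errors of the two models under both DGPs:

```python
import numpy as np

rng = np.random.default_rng(42)
T, h, reps = 100, 10, 2000

def forecast_ds(y):         # difference-stationary model: drift = mean difference
    return y[-1] + h * np.diff(y).mean()

def forecast_ts(y):         # trend-stationary model: OLS on a linear trend
    b, a = np.polyfit(np.arange(y.size), y, 1)
    return a + b * (y.size - 1 + h)

mse = np.zeros((2, 2))      # rows: DGP in {DS, TS}; cols: model in {DS, TS}
for _ in range(reps):
    y_ds = np.cumsum(0.1 + rng.normal(size=T + h))          # random walk + drift
    y_ts = 0.1 * np.arange(T + h) + rng.normal(size=T + h)  # trend + noise
    for i, y in enumerate((y_ds, y_ts)):
        truth = y[T - 1 + h]
        mse[i, 0] += (forecast_ds(y[:T]) - truth) ** 2
        mse[i, 1] += (forecast_ts(y[:T]) - truth) ** 2

print("h-step MSFE (rows: DGP in {DS, TS}; cols: model in {DS, TS})")
print(mse / reps)
```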
Abstract:
This article examines the ability of several models to generate optimal hedge ratios. Statistical models employed include univariate and multivariate generalized autoregressive conditionally heteroscedastic (GARCH) models, and exponentially weighted and simple moving averages. The variances of the hedged portfolios derived using these hedge ratios are compared with those based on market expectations implied by the prices of traded options. One-month and three-month hedging horizons are considered for four currency pairs. Overall, it has been found that an exponentially weighted moving-average model leads to lower portfolio variances than any of the GARCH-based, implied or time-invariant approaches.
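For reference, a minimal sketch of an exponentially weighted moving-average hedge ratio of the kind the comparison favours. The smoothing constant 0.94 is the common RiskMetrics choice, assumed here rather than taken from the article, and the returns are synthetic:

```python
import numpy as np

def ewma_hedge_ratios(spot_ret, fut_ret, lam=0.94):
    """Time-varying minimum-variance hedge ratio h_t = cov_t / var_t with
    exponentially weighted moving-average second moments."""
    warm = np.cov(spot_ret[:20], fut_ret[:20])     # warm-start on early data
    c, v = warm[0, 1], warm[1, 1]
    ratios = []
    for s, f in zip(spot_ret, fut_ret):
        c = lam * c + (1 - lam) * s * f
        v = lam * v + (1 - lam) * f * f
        ratios.append(c / v)
    return np.array(ratios)

rng = np.random.default_rng(7)
f = rng.normal(0, 0.01, 500)                       # futures returns (toy data)
s = 0.9 * f + rng.normal(0, 0.003, 500)            # correlated spot returns
h = ewma_hedge_ratios(s, f)
hedged = s - h * f                                  # hedged portfolio returns
print(f"unhedged var {s.var():.2e}  hedged var {hedged.var():.2e}")
```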
Abstract:
Runoff generation processes and pathways vary widely between catchments. Credible simulations of solute and pollutant transport in surface waters are dependent on models which facilitate appropriate, catchment-specific representations of perceptual models of the runoff generation process. Here, we present a flexible, semi-distributed landscape-scale rainfall-runoff modelling toolkit suitable for simulating a broad range of user-specified perceptual models of runoff generation and stream flow occurring in different climatic regions and landscape types. PERSiST (the Precipitation, Evapotranspiration and Runoff Simulator for Solute Transport) is designed for simulating present-day hydrology; projecting possible future effects of climate or land use change on runoff and catchment water storage; and generating hydrologic inputs for the Integrated Catchments (INCA) family of models. PERSiST has limited data requirements and is calibrated using observed time series of precipitation, air temperature and runoff at one or more points in a river network. Here, we apply PERSiST to the river Thames in the UK and describe a Monte Carlo tool for model calibration, sensitivity and uncertainty analysis.
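PERSiST itself is not reproduced here, but a single-bucket sketch conveys the flavour of semi-distributed bucket-type rainfall-runoff models. The structure and parameter values below are illustrative assumptions, not the PERSiST formulation:

```python
import numpy as np

def bucket_model(precip, pet, capacity=150.0, k=0.05):
    """One-bucket rainfall-runoff sketch (NOT the PERSiST code): a soil store
    fills with rain, loses evapotranspiration scaled by wetness, spills on
    saturation, and drains to runoff as a linear reservoir. Units: mm/step."""
    store, runoff = 0.0, []
    for p, e in zip(precip, pet):
        store += p
        store -= e * min(store / capacity, 1.0)     # moisture-limited ET
        spill = max(store - capacity, 0.0)          # saturation excess
        store -= spill
        q = k * store                               # linear drainage
        store -= q
        runoff.append(q + spill)
    return np.array(runoff)

rng = np.random.default_rng(3)
rain = rng.gamma(0.3, 8.0, 365)                     # toy daily rainfall
pet = 2.0 + 1.5 * np.sin(2 * np.pi * np.arange(365) / 365)
print("annual runoff coefficient:", bucket_model(rain, pet).sum() / rain.sum())
```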
Abstract:
Mixed models may be defined with or without reference to sampling, and can be used to predict realized random effects, as when estimating the latent values of study subjects measured with response error. When the model is specified without reference to sampling, a simple mixed model includes two random variables, one stemming from an exchangeable distribution of latent values of study subjects and the other, from the study subjects' response error distributions. Positive probabilities are assigned to both potentially realizable responses and artificial responses that are not potentially realizable, resulting in artificial latent values. In contrast, finite population mixed models represent the two-stage process of sampling subjects and measuring their responses, where positive probabilities are only assigned to potentially realizable responses. A comparison of the estimators over the same potentially realizable responses indicates that the optimal linear mixed model estimator (the usual best linear unbiased predictor, BLUP) is often (but not always) more accurate than the comparable finite population mixed model estimator (the FPMM BLUP). We examine a simple example and provide the basis for a broader discussion of the role of conditioning, sampling, and model assumptions in developing inference.
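The simple mixed model estimator referred to here (the usual BLUP) has a familiar shrinkage form. A minimal sketch with known variance components, on synthetic data:

```python
import numpy as np

def blup(y_bar, n, mu, var_between, var_error):
    """Best linear unbiased predictor of a subject's latent value from its
    observed mean y_bar of n measurements: shrink toward the overall mean mu
    by the reliability ratio."""
    shrink = var_between / (var_between + var_error / n)
    return mu + shrink * (y_bar - mu)

rng = np.random.default_rng(5)
m, n = 200, 4
latent = rng.normal(10.0, 2.0, m)                  # subjects' latent values
y = latent[:, None] + rng.normal(0, 3.0, (m, n))   # responses with error
pred = blup(y.mean(axis=1), n, y.mean(), 4.0, 9.0) # true variance components
print("MSE raw means :", ((y.mean(axis=1) - latent) ** 2).mean().round(3))
print("MSE BLUP      :", ((pred - latent) ** 2).mean().round(3))
```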
Abstract:
We analyze data obtained from a study designed to evaluate training effects on the performance of certain motor activities of Parkinson's disease patients. Maximum likelihood methods were used to fit beta-binomial/Poisson regression models tailored to evaluate the effects of training on the numbers of attempted and successful specified manual movements in 1 min periods, controlling for disease stage and use of the preferred hand. We extend models previously considered by other authors in univariate settings to account for the repeated measures nature of the data. The results suggest that the expected numbers of attempts and successes increase with training, except for patients with advanced stages of the disease using the non-preferred hand.
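A univariate version of the beta-binomial/Poisson likelihood (without the repeated-measures extension the paper develops) can be fitted by maximum likelihood as follows. The data and starting values are synthetic:

```python
import numpy as np
from scipy.stats import poisson, betabinom
from scipy.optimize import minimize

# Toy data: attempted and successful movements per 1-minute period.
rng = np.random.default_rng(11)
lam_true, a_true, b_true = 12.0, 6.0, 2.0
attempts = rng.poisson(lam_true, 150)
p = rng.beta(a_true, b_true, 150)                  # per-period success rates
successes = rng.binomial(attempts, p)              # beta-binomial successes

def negloglik(theta):
    # Attempts ~ Poisson(lam); successes | attempts ~ BetaBinomial(n, a, b).
    lam, a, b = np.exp(theta)                      # keep parameters positive
    return -(poisson.logpmf(attempts, lam).sum()
             + betabinom.logpmf(successes, attempts, a, b).sum())

fit = minimize(negloglik, x0=np.log([10.0, 1.0, 1.0]), method="Nelder-Mead")
print("MLE (lambda, alpha, beta):", np.exp(fit.x).round(2))
```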
Abstract:
This thesis consists of four manuscripts in the area of nonlinear time series econometrics, on topics of testing, modeling and forecasting nonlinear common features. The aim of this thesis is to develop new econometric contributions for hypothesis testing and forecasting in these areas. Both stationary and nonstationary time series are considered. A definition of common features is proposed in an appropriate way for each class. Based on this definition, a vector nonlinear time series model with common features is set up for testing for common features. Once well specified, the proposed models can also be used for forecasting. The first paper addresses a testing procedure for nonstationary time series. A class of nonlinear cointegration, smooth-transition (ST) cointegration, is examined. ST cointegration nests the previously developed linear and threshold cointegration. An F-type test for examining ST cointegration is derived for the case where the transition variables are stationary rather than nonstationary. The latter make the test standard, while the former make it nonstandard. This has important implications for empirical work: it is crucial to distinguish between the cases with stationary and nonstationary transition variables so that the correct test is used. The second and fourth papers develop testing approaches for stationary time series. In particular, the vector ST autoregressive (VSTAR) model is extended to allow for common nonlinear features (CNFs). These two papers propose a modeling procedure and derive tests for the presence of CNFs. Building on the testing contributions above for model specification, the third paper considers forecasting with vector nonlinear time series models and extends the procedures available for univariate nonlinear models. The VSTAR model with CNFs and the ST cointegration model from the previous papers are treated in detail and thereafter illustrated with two corresponding macroeconomic data sets.
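The smooth-transition building block underlying both ST cointegration and VSTAR models is the logistic transition function. A toy univariate STAR simulation, with illustrative parameter values:

```python
import numpy as np

def logistic_transition(s, gamma, c):
    """Logistic transition G(s; gamma, c) used in ST(AR) models: G runs from
    0 to 1 as the transition variable s passes the location c; gamma sets
    the smoothness (gamma -> infinity recovers a threshold model)."""
    return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

# Simulate a univariate STAR process as a stand-in for the VSTAR models
# discussed in the thesis: two AR(1) regimes blended by G(y_{t-1}).
rng = np.random.default_rng(8)
y = np.zeros(500)
for t in range(1, 500):
    g = logistic_transition(y[t - 1], gamma=5.0, c=0.0)
    y[t] = 0.8 * y[t - 1] * (1 - g) - 0.4 * y[t - 1] * g + rng.normal(0, 0.5)
print("sample mean %.3f, sample var %.3f" % (y.mean(), y.var()))
```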
Abstract:
Location Models are used for planning the location of multiple service centers in order to serve a geographically distributed population. A cornerstone of such models is the measure of distance between the service center and a set of demand points, viz. the location of the population (customers, pupils, patients and so on). Theoretical as well as empirical evidence supports the current practice of using the Euclidean distance in metropolitan areas. In this paper, we argue and provide empirical evidence that such a measure is misleading once the Location Models are applied to rural areas with heterogeneous transport networks. This paper stems from the problem of finding an optimal allocation of a pre-specified number of hospitals in a large Swedish region with a low population density. We conclude that the Euclidean and the network distances based on a homogeneous network (equal travel costs in the whole network) give approximately the same optima. However, network distances calculated from a heterogeneous network (different travel costs in different parts of the network) give widely different optima as the number of hospitals increases. In terms of accessibility, we find that the recent closure of hospitals and the non-optimal location of the remaining ones have increased the average travel distance by 75% for the population. Finally, aggregating the population misplaces the hospitals by 10 km on average.
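The comparison of distance measures can be mimicked with a small p-median experiment. The greedy heuristic and the "heterogeneous network" penalty below are illustrative assumptions, not the paper's method or data:

```python
import numpy as np

def greedy_p_median(dist, p):
    """Greedy heuristic for the p-median problem: repeatedly open the
    facility that most reduces total distance to the demand points."""
    chosen, best = [], np.full(dist.shape[1], np.inf)
    for _ in range(p):
        total, j = min((np.minimum(best, dist[j]).sum(), j)
                       for j in range(dist.shape[0]) if j not in chosen)
        chosen.append(j)
        best = np.minimum(best, dist[j])
    return chosen, best.sum()

rng = np.random.default_rng(9)
demand = rng.uniform(0, 100, (300, 2))             # population points
sites = rng.uniform(0, 100, (30, 2))               # candidate hospital sites
eucl = np.linalg.norm(sites[:, None] - demand[None], axis=2)

# Crude stand-in for a heterogeneous network: travel to points in the eastern
# half is twice as costly. Real applications use road-network distances.
network = eucl * np.where(demand[:, 0] > 50, 2.0, 1.0)

for p in (2, 5, 10):
    _, d_e = greedy_p_median(eucl, p)
    _, d_n = greedy_p_median(network, p)
    print(f"p={p:2d}  Euclidean total {d_e:9.0f}  network total {d_n:9.0f}")
```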
Abstract:
Models with interacting dark energy can alleviate the cosmic coincidence problem by allowing dark matter and dark energy to evolve in a similar fashion. At a fundamental level, these models are specified by choosing a functional form for the scalar potential and for the interaction term. However, in order to compare to observational data it is usually more convenient to use parametrizations of the dark energy equation of state and the evolution of the dark matter energy density. Once the relevant parameters are fitted, it is important to obtain the shape of the fundamental functions. In this paper I show how to reconstruct the scalar potential and the scalar interaction with dark matter from general parametrizations. I give a few examples and show that it is possible for the effective equation of state for the scalar field to cross the phantom barrier when interactions are allowed. I analyze the uncertainties in the reconstructed potential arising from foreseen errors in the estimation of fit parameters and point out that a Yukawa-like linear interaction results from a simple parametrization of the coupling.
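In the non-interacting limit, reconstructing the potential from a parametrized equation of state follows from V = (1 - w) * rho_phi / 2. A numerical sketch assuming a CPL-type parametrization w(a) = w0 + wa * (1 - a); the paper's reconstruction additionally handles the dark-matter coupling, which is omitted here:

```python
import numpy as np

# Reconstruct the potential part of the scalar-field energy from w(a),
# using rho_phi(a)/rho_phi(1) = exp(3 * integral_a^1 (1 + w(a'))/a' da').
# The mapping from a to the field value phi is omitted in this sketch.
w0, wa = -0.9, 0.2                                  # illustrative fit values
a = np.linspace(0.25, 1.0, 400)
w = w0 + wa * (1.0 - a)

integrand = (1.0 + w) / a
cum = np.concatenate(([0.0],
      np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(a))))
rho = np.exp(3.0 * (cum[-1] - cum))                 # in units of rho_phi today

V = 0.5 * (1.0 - w) * rho                           # potential part
K = 0.5 * (1.0 + w) * rho                           # kinetic part; K < 0 would
                                                    # signal phantom behaviour
print("V(a=0.25)/rho_0 = %.2f, V(a=1)/rho_0 = %.2f" % (V[0], V[-1]))
```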
Abstract:
Th17 cells have emerged as a proinflammatory cell type with strong links to autoimmunity and immunopathology. The aims of this thesis are twofold: first, to generate a novel mouse model that allows in vivo and/or ex vivo observation and manipulation of Th17 cells; second, to generate a mouse model capable of conditionally overexpressing the hallmark Th17 cytokine, IL-17A. Given the expertise and experience in our lab with respect to conditional gene targeting, Cre-LoxP-mediated approaches were chosen and utilized to achieve this goal in both mouse models. The resulting strains and the knowledge generated from their usage are discussed in this work. Furthermore, the recently generated IL-6Rα conditional allele allows for ablation of IL-6 signaling in a cell type-specific manner. We wanted to analyze the role of IL-6 signaling with respect to EAE pathogenesis and the development of pathogenic Th17 cells, and the results generated are published in this work.
Abstract:
Investigators interested in whether a disease aggregates in families often collect case-control family data, which consist of disease status and covariate information for families selected via case or control probands. Here, we focus on the use of case-control family data to investigate the relative contributions to the disease of additive genetic effects (A), shared family environment (C), and unique environment (E). To this end, we describe an ACE model for binary family data and then introduce an approach to fitting the model to case-control family data. The structural equation model, which has been described previously, combines a general-family extension of the classic ACE twin model with a (possibly covariate-specific) liability-threshold model for binary outcomes. Our likelihood-based approach to fitting involves conditioning on the proband’s disease status, as well as setting prevalence equal to a pre-specified value that can be estimated from the data themselves if necessary. Simulation experiments suggest that our approach to fitting yields approximately unbiased estimates of the A, C, and E variance components, provided that certain commonly-made assumptions hold. These assumptions include: the usual assumptions for the classic ACE and liability-threshold models; assumptions about shared family environment for relative pairs; and assumptions about the case-control family sampling, including single ascertainment. When our approach is used to fit the ACE model to Austrian case-control family data on depression, the resulting estimate of heritability is very similar to those from previous analyses of twin data.
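The liability-threshold ACE structure can be made concrete with a small simulation. The variance shares and prevalence below are illustrative, not the Austrian estimates:

```python
import numpy as np
from scipy.stats import norm

# Liability-threshold ACE sketch: each twin's liability is a sum of additive
# genetic (A), shared environment (C), and unique environment (E) components;
# disease occurs when liability exceeds the threshold fixed by prevalence.
a2, c2, e2, prevalence = 0.5, 0.2, 0.3, 0.10        # illustrative values
thresh = norm.isf(prevalence)

rng = np.random.default_rng(13)
n = 200_000
A = rng.normal(size=n) * np.sqrt(a2)                # fully shared by MZ twins
C = rng.normal(size=n) * np.sqrt(c2)                # shared by both twins
liab1 = A + C + rng.normal(size=n) * np.sqrt(e2)
liab2 = A + C + rng.normal(size=n) * np.sqrt(e2)    # MZ co-twin

aff1, aff2 = liab1 > thresh, liab2 > thresh
print("prevalence     : %.3f" % aff1.mean())
print("MZ concordance : %.3f" % aff2[aff1].mean())
```

For DZ pairs the A component would be shared only in half, which is what separates A from C in twin designs; case-control family sampling then adds the ascertainment conditioning discussed in the abstract.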
Abstract:
Various inference procedures for linear regression models with censored failure times have been studied extensively. Recent developments on efficient algorithms to implement these procedures enhance the practical usage of such models in survival analysis. In this article, we present robust inferences for certain covariate effects on the failure time in the presence of "nuisance" confounders under a semiparametric, partial linear regression setting. Specifically, the estimation procedures for the regression coefficients of interest are derived from a working linear model and are valid even when the function of the confounders in the model is not correctly specified. The new proposals are illustrated with two examples and their validity for cases with practical sample sizes is demonstrated via a simulation study.
Abstract:
Suppose that we are interested in establishing simple, but reliable rules for predicting future t-year survivors via censored regression models. In this article, we present inference procedures for evaluating such binary classification rules based on various prediction precision measures quantified by the overall misclassification rate, sensitivity and specificity, and positive and negative predictive values. Specifically, under various working models we derive consistent estimators for the above measures via substitution and cross validation estimation procedures. Furthermore, we provide large sample approximations to the distributions of these nonsmooth estimators without assuming that the working model is correctly specified. Confidence intervals, for example, for the difference of the precision measures between two competing rules can then be constructed. All the proposals are illustrated with two real examples and their finite sample properties are evaluated via a simulation study.
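The precision measures in question are straightforward once outcomes are fully observed. The sketch below assumes uncensored t-year outcomes and a toy rule; handling censoring without assuming a correctly specified working model is precisely the article's contribution:

```python
import numpy as np

def precision_measures(truth, pred):
    """Overall misclassification rate, sensitivity, specificity, PPV and NPV
    for a binary rule; truth = True marks subjects who fail within t years."""
    tp = np.sum(pred & truth); tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth); fn = np.sum(~pred & truth)
    return {
        "misclassification": (fp + fn) / truth.size,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

rng = np.random.default_rng(21)
risk = rng.uniform(size=1000)                       # toy risk scores
truth = rng.uniform(size=1000) < risk               # event within t years
pred = risk > 0.5                                   # a simple rule to evaluate
print(precision_measures(truth, pred))
```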