67 results for misspecification


Relevance: 10.00%

Abstract:

In ecological studies it is important to understand the processes that determine the distribution of organisms. Studying the distribution of highly mobile animals is a challenge for researchers worldwide. Habitat use models are powerful tools for understanding the relationships between animals and their environment, and with the development of Geographic Information Systems (GIS) they are widely used in the analysis of ecological data. However, habitat use models frequently suffer from misspecification. In particular, the independence assumption that underlies most statistical models can be violated when observations are collected in space. Spatial autocorrelation (SAC) is a well-known problem in ecological studies and must be accounted and corrected for. In this thesis, generalized linear models with spatial eigenvectors were used to investigate cetacean habitat use in relation to physiographic, oceanographic, and anthropogenic variables in Cabo Frio, RJ, Brazil, specifically: Bryde's whale, Balaenoptera edeni (Chapter 1); the bottlenose dolphin, Tursiops truncatus (Chapter 2); and mysticetes and odontocetes in general (Chapter 3). Bryde's whale was influenced by minimum and maximum Sea Surface Temperature, with the temperature range most used by the whale matching the range in which the Brazilian sardine, Sardinella brasiliensis, occurs during spawning (22 to 28 °C). For the bottlenose dolphin, the best model indicated that the animals were found at low Sea Surface Temperatures with high variability and high chlorophyll concentrations. Mysticetes and odontocetes used areas inside Conservation Units (UCs) and areas outside UCs in similar proportions.
Mysticetes occurred more frequently farther from the coast, at low sea surface temperatures with high temperature variability. Odontocetes preferentially used two areas: the shallowest and the deepest parts of the study area. They also used habitats with cold waters and high chlorophyll concentrations. Both mysticetes and odontocetes were found more frequently within 5 km of tourism and diving vessels. Identifying critical habitats for cetaceans is a crucial first step toward their conservation.
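The spatial-filtering idea behind these models can be sketched as follows: eigenvectors of a doubly centred connectivity matrix (Moran eigenvector maps) are added as covariates to absorb spatial autocorrelation. This is an illustrative sketch with simulated coordinates and a hypothetical SST predictor, not the thesis's actual analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
coords = rng.uniform(0, 10, size=(n, 2))

# Binary connectivity matrix: neighbours within a distance threshold
d = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
W = ((d > 0) & (d < 3)).astype(float)

# Double-centre W (as in Moran eigenvector maps) and extract eigenvectors
H = np.eye(n) - np.ones((n, n)) / n
vals, vecs = np.linalg.eigh(H @ W @ H)
order = np.argsort(vals)[::-1]
spatial_filters = vecs[:, order[:5]]   # eigenvectors with the largest eigenvalues

# Augment an ordinary least-squares habitat model with the spatial filters
sst = rng.normal(25, 2, n)                # hypothetical predictor (SST)
y = 0.5 * sst + rng.normal(0, 1, n)       # hypothetical response
X = np.column_stack([np.ones(n), sst, spatial_filters])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

In practice the filters would enter a GLM with an appropriate error family rather than ordinary least squares; the least-squares fit here only keeps the sketch self-contained.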

Abstract:

Recreational fisheries in the waters off the northeast U.S. target a variety of pelagic and demersal fish species, and catch and effort data sampled from recreational fisheries are a critical component of the information used in resource evaluation and management. Standardized indices of stock abundance developed from recreational fishery catch rates are routinely used in stock assessments. The statistical properties of both simulated and empirical recreational fishery catch-rate data such as those collected by the National Marine Fisheries Service (NMFS) Marine Recreational Fishery Statistics Survey (MRFSS) are examined, and the potential effects of different assumptions about the error structure of the catch-rate frequency distributions in computing indices of stock abundance are evaluated. Recreational fishery catch distributions sampled by the MRFSS are highly contagious and overdispersed in relation to the normal distribution and are generally best characterized by the Poisson or negative binomial distributions. The modeling of both the simulated and empirical MRFSS catch rates indicates that one may draw erroneous conclusions about stock trends by assuming the wrong error distribution in procedures used to develop standardized indices of stock abundance. The results demonstrate the importance of considering not only the overall model fit and significance of classification effects, but also the possible effects of model misspecification, when determining the most appropriate model construction.
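The overdispersion at issue can be illustrated with simulated counts; the parameter values below are hypothetical, not MRFSS estimates:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate overdispersed catch-per-trip counts (negative binomial),
# mimicking the contagious distributions described for MRFSS data
mu, k = 2.0, 0.5                    # mean and NB dispersion parameter
p = k / (k + mu)
catch = rng.negative_binomial(k, p, 5000)

# Overdispersion check: the variance far exceeds the mean, so a Poisson
# error assumption (variance = mean) would be misspecified here
print(catch.mean(), catch.var())

# Negative binomial variance function: mu + mu^2 / k
expected_var = mu + mu**2 / k
```

A Poisson model forced onto such data understates sampling variability, which is one route to the erroneous trend conclusions the abstract describes.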

Abstract:

Sonic Hedgehog (Shh) signaling is an important determinant of vertebrate retinal ganglion cell (RGC) development. In mice, there are two major RGC populations: (1) the Islet2-expressing contralateral projecting (c)RGCs, which both produce and respond to Shh; and (2) the Zic2-expressing ipsilateral projecting RGCs (iRGCs), which lack Shh expression. In contrast to cRGCs, iRGCs, which are generated in the ventrotemporal crescent (VTC) of the retina, specifically express Boc, a cell adhesion molecule that acts as a high-affinity receptor for Shh. In Boc −/− mutant mice, the ipsilateral projection is significantly decreased. Here, we demonstrate that this phenotype results, at least in part, from the misspecification of a proportion of iRGCs. In Boc−/− VTC, the number of Zic2-positive RGCs is reduced, whereas more Islet2/Shh-positive RGCs are observed, a phenotype also detected in Zic2 and Foxd1 null embryos.

Abstract:

A potential avenue to improve healthcare efficiency is to tailor individualized treatment strategies effectively by incorporating patient-level predictor information such as environmental exposure, biological, and genetic marker measurements. Many useful statistical methods for deriving individualized treatment rules (ITRs) have become available in recent years. Prior to adopting any ITR in clinical practice, it is crucial to evaluate its value in improving patient outcomes. Existing methods for quantifying such value mainly consider either a single marker or semi-parametric methods that are subject to bias under model misspecification. In this article, we consider a general setting with multiple markers and propose a two-step robust method to derive ITRs and evaluate their values. We also propose procedures for comparing different ITRs, which can be used to quantify the incremental value of new markers in improving treatment selection. While working models are used in step I to approximate optimal ITRs, we add a layer of calibration to guard against model misspecification and further assess the value of the ITR non-parametrically, which ensures the validity of the inference. To account for the sampling variability of the estimated rules and their corresponding values, we propose a resampling procedure to provide valid confidence intervals for the value functions, as well as for the incremental value of new markers for treatment selection. Our proposals are examined through extensive simulation studies and illustrated with data from a clinical trial that studied the effects of two drug combinations on HIV-1-infected patients.
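A minimal sketch of the two-step idea (working outcome models to derive a rule, then a nonparametric inverse-probability-weighted value estimate) on a simulated randomized trial; the data-generating model and rule form are hypothetical, and this is not the authors' exact estimator:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000
x = rng.normal(size=n)                  # a single marker
a = rng.integers(0, 2, n)               # randomized treatment, P(A=1) = 0.5
y = 1.0 + x * (2 * a - 1) + rng.normal(0, 1, n)   # A=1 helps iff x > 0

# Step I: working linear models for the outcome within each arm
def fit_linear(xa, ya):
    X = np.column_stack([np.ones(len(xa)), xa])
    return np.linalg.lstsq(X, ya, rcond=None)[0]

b0 = fit_linear(x[a == 0], y[a == 0])
b1 = fit_linear(x[a == 1], y[a == 1])

# Candidate ITR: treat when the predicted outcome under A=1 is larger
rule = ((b1[0] + b1[1] * x) > (b0[0] + b0[1] * x)).astype(int)

# Step II: nonparametric (IPW) estimate of the rule's value, which remains
# valid even if the step-I working models are misspecified
value_rule = np.mean(y * (a == rule) / 0.5)
value_all1 = np.mean(y * (a == 1) / 0.5)
print(value_rule, value_all1)
```

The IPW value estimate depends only on the known randomization probability, which is the sense in which the evaluation step does not inherit bias from the working models.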

Abstract:

Tutoring is commonly employed to prevent early reading failure, and evidence suggests that it can have a positive effect. This article presents findings from a large-scale (n = 734) randomized controlled trial evaluating the effect of Time to Read—a volunteer tutoring program aimed at children aged 8 to 9 years—on reading comprehension, self-esteem, locus of control, enjoyment of learning, and future aspirations. The study found that the program had only a relatively small effect on children's aspirations (effect size +0.17, 95% confidence interval [0.015, 0.328]) and no effect on the other outcomes. It is suggested that this lack of evidence may be due to misspecification of the program's logic model and identified outcomes, as well as to program-related factors, particularly the program's low dosage.

Abstract:

This paper employs the one-sector Real Business Cycle model as a testing ground for four different procedures to estimate Dynamic Stochastic General Equilibrium (DSGE) models. The procedures are: 1) Maximum Likelihood, with and without measurement errors and incorporating Bayesian priors; 2) Generalized Method of Moments; 3) Simulated Method of Moments; and 4) Indirect Inference. Monte Carlo analysis indicates that all procedures deliver reasonably good estimates under the null hypothesis. However, there are substantial differences in statistical and computational efficiency in the small samples currently available to estimate DSGE models. GMM and SMM appear to be more robust to misspecification than the alternative procedures. The implications of the stochastic singularity of DSGE models for each estimation method are fully discussed.
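Simulated Method of Moments can be illustrated on a toy AR(1) model rather than a full DSGE: pick the parameter so that moments of a simulated series (with a fixed seed) match their data counterparts. This sketch is not one of the paper's experiments:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)

def simulate_ar1(rho, T, seed):
    rng_s = np.random.default_rng(seed)
    e = rng_s.normal(size=T)
    y = np.empty(T)
    y[0] = e[0]
    for t in range(1, T):
        y[t] = rho * y[t - 1] + e[t]
    return y

def moments(y):
    # Variance and first-order autocovariance
    yc = y - y.mean()
    return np.array([y.var(), np.mean(yc[1:] * yc[:-1])])

# "Observed" data generated with rho = 0.6
m_data = moments(simulate_ar1(0.6, 2000, seed=10))

# SMM: choose rho so simulated moments (fixed simulation seed) match the data's
def objective(rho):
    return np.sum((moments(simulate_ar1(rho, 20000, seed=42)) - m_data) ** 2)

res = minimize_scalar(objective, bounds=(0.0, 0.95), method="bounded")
print(res.x)
```

Fixing the simulation seed inside the objective keeps it a deterministic function of the parameter, which is essential for the optimizer to behave.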

Abstract:

The efficient price is latent; it is contaminated by microstructure frictions, or noise. We explore the measurement and forecasting of fundamental volatility using high-frequency data. In the first paper, within the standard framework of an additive model for the noise and the efficient price, we show that using trade volume, buy and sell volumes, the trade direction indicator, and the bid-ask spread to absorb the noise improves the precision of volatility estimators. If the noise is only partially absorbed, the residual noise is closer to white noise than the original noise, which reduces the misspecification of the noise characteristics. In the second paper, we start from an empirical fact that we model as a linear form of the microstructure noise variance in the fundamental volatility. Using the representation of the general class of stochastic volatility models, we explore the forecasting performance of different volatility measures under the assumptions of our model. In the third paper, we derive new realized measures using prices and buy and sell volumes. As an alternative to the standard additive model for prices contaminated with microstructure noise, we make assumptions on the distribution of the frictionless price, which is assumed to be bounded by the bid and ask prices.
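The effect of additive microstructure noise on realized variance can be sketched in a small simulation; the noise size and sampling grid below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 23400                                 # one "day" of second-by-second prices
sigma = 0.01                              # daily fundamental volatility
p_true = np.cumsum(rng.normal(0, sigma / np.sqrt(n), n))  # efficient log-price
p_obs = p_true + rng.normal(0, 5e-4, n)   # additive microstructure noise

def realized_variance(p, step):
    r = np.diff(p[::step])
    return np.sum(r ** 2)

rv_fine = realized_variance(p_obs, 1)     # every tick: dominated by noise
rv_sparse = realized_variance(p_obs, 300) # ~5-minute sampling: near sigma^2
print(rv_fine, rv_sparse, sigma ** 2)
```

At the highest frequency the noise contribution (roughly twice the noise variance per return) swamps the fundamental variance, which is why noise-absorbing covariates or sparser sampling are needed.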

Abstract:

The assimilation of measurements from the stratosphere and mesosphere is becoming increasingly common as the lids of weather prediction and climate models rise into the mesosphere and thermosphere. However, the dynamics of the middle atmosphere pose specific challenges to the assimilation of measurements from this region. Forecast-error variances can be very large in the mesosphere, and this can render assimilation schemes very sensitive to the details of the specification of forecast error correlations. An example is shown where observations in the stratosphere are able to produce increments in the mesosphere. Such sensitivity of the assimilation scheme to misspecification of covariances can also amplify any existing biases in measurements or forecasts. Since both models and measurements of the middle atmosphere are known to have biases, the separation of these sources of bias remains an issue. Finally, well-known deficiencies of assimilation schemes, such as the production of imbalanced states or the assumption of zero bias, are proposed as explanations for the inaccurate transport resulting from assimilated winds. The inability of assimilated winds to accurately transport constituents in the middle atmosphere remains a fundamental issue limiting the use of assimilated products for applications involving longer time-scales.

Abstract:

Tests for business cycle asymmetries are developed for Markov-switching autoregressive models. The tests of deepness, steepness, and sharpness are Wald statistics, which have standard asymptotics. For the standard two-regime model of expansions and contractions, deepness is shown to imply sharpness (and vice versa), whereas the process is always nonsteep. Two- and three-state models of U.S. GNP growth are used to illustrate the approach, along with models of U.S. investment and consumption growth. The robustness of the tests to model misspecification, and the effects of regime-dependent heteroscedasticity, are investigated.

Abstract:

In this paper, we propose a new two-parameter lifetime distribution with increasing failure rate, the complementary exponential geometric distribution, which is complementary to the exponential geometric model proposed by Adamidis and Loukas (1998). The new distribution arises in a latent complementary risks scenario, in which the lifetime associated with a particular risk is not observable; rather, we observe only the maximum lifetime value among all risks. The properties of the proposed distribution are discussed, including a formal proof of its probability density function and explicit algebraic formulas for its reliability and failure rate functions, moments (including the mean and variance), variation coefficient, and modal value. Parameter estimation is based on the usual maximum likelihood approach. We report the results of a misspecification simulation study performed to assess the extent of misspecification errors when testing the exponential geometric distribution against our complementary one, for different sample sizes and censoring percentages. The methodology is illustrated on four real datasets, and we compare the two modeling approaches.
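The latent complementary-risks construction can be checked by simulation. This sketch assumes the max-of-a-geometric-number-of-exponentials representation and compares the implied CDF with Monte Carlo draws; the parametrization is an assumption for illustration, not necessarily the paper's exact one:

```python
import numpy as np

rng = np.random.default_rng(5)
lam, theta = 1.5, 0.3

# Latent complementary risks: observe only the MAXIMUM of a geometric
# number N of exponential lifetimes (one per latent risk)
N = rng.geometric(theta, 50000)                  # support {1, 2, ...}
x = np.array([rng.exponential(1 / lam, k).max() for k in N])

# Implied CDF: F(t) = E[(1 - e^{-lam t})^N] with N geometric
def ceg_cdf(t, lam, theta):
    z = 1.0 - np.exp(-lam * t)
    return theta * z / (1.0 - (1.0 - theta) * z)

t = 1.0
print((x <= t).mean(), ceg_cdf(t, lam, theta))
```

The closed form follows from the probability generating function of the geometric distribution evaluated at the exponential CDF.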

Abstract:

When assessing the psychometric properties of measures and estimating relations among latent variables, many studies in the social sciences (including marketing) fail to comprehensively appraise the directionality of indicants. Such failures can lead to model misspecification and inaccurate parameter estimates (Jarvis et al. 2003). To further assess the correct directionality of the indicants of a 'media consumption' construct, this paper employs confirmatory tetrad analysis (CTA). Previous studies advocate that this construct is best viewed as formative; however, our CTA suggests it could be modelled with a reflective orientation. We conclude by recommending that future studies assessing item directionality implement pre- and post-hoc tests.

Abstract:

We investigate the effectiveness of several well-known parametric and non-parametric event study test statistics with security price data from the major Asia-Pacific security markets. Extensive Monte Carlo simulation experiments with actual daily security returns data reveal that the parametric test statistics are prone to misspecification with Asia-Pacific returns data. Two non-parametric tests, a rank test [Corrado and Zivney (Corrado, C.J., Zivney, T.L., 1992, The specification and power of the sign test in event study hypothesis tests using daily stock returns, Journal of Financial and Quantitative Analysis 27(3), 465-478)] and a sign test [Cowan (Cowan, A.R., 1992, Non-parametric event study tests, Review of Quantitative Finance and Accounting 1(4), 343–358)] were the best performers overall with market model excess returns computed using an equal weight index.
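A generalized sign test in the spirit of Cowan (1992) can be sketched on simulated fat-tailed abnormal returns with no true event effect; the setup is hypothetical, not the paper's Asia-Pacific data:

```python
import numpy as np

rng = np.random.default_rng(6)
n_firms, n_est = 100, 250

# Fat-tailed (t-distributed) abnormal returns: a setting where parametric
# event study tests tend to be misspecified
est_ar = rng.standard_t(3, size=(n_firms, n_est))   # estimation period
event_ar = rng.standard_t(3, size=n_firms)          # event day, no true effect

# Generalized sign test: compare the fraction of positive event-day abnormal
# returns with the estimation-period baseline fraction
p_hat = (est_ar > 0).mean()
w = (event_ar > 0).sum()
z_sign = (w - n_firms * p_hat) / np.sqrt(n_firms * p_hat * (1 - p_hat))
print(z_sign)
```

Because the test uses only the signs of abnormal returns, its null distribution is insensitive to the heavy tails that distort parametric t-type statistics.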

Abstract:

Simultaneous volatility models are developed and shown to be separate from multivariate GARCH estimators. An example is provided that allows for simultaneous and unidirectional volatility and volume of trade effects. These effects are tested using intraday data from the Australian cash index and index futures markets. Overnight volatility spillover effects from the United States S&P500 index futures markets are tested using alternative estimates of this US market volatility. The simultaneous volatility model proves to be robust to alternative specifications of returns equations and to misspecification of the direction of volatility causality.

Abstract:

Purpose – When assessing the psychometric properties of measures and estimating relations among latent variables, many studies in the social sciences (including management and marketing) fail to comprehensively appraise the directionality of indicants. Such failures can lead to model misspecification and inaccurate parameter estimates. The purpose of this paper is to apply a post hoc test called confirmatory vanishing tetrad analysis (CTA hereafter) to a single construct called mass media consumption information exposure, which antecedent studies conceptually posited to be a formative (causative) representation.
Design/methodology/approach – This paper analyses a consumer sample of 585 US respondents and applies the CTA test to a single construct through its inclusion in various matrices within a SAS macro that takes nonnormal data characteristics into account. The matrices are derived from Mplus 5 through the estimation of a single-factor congeneric model. The CTA test calculates a test statistic that approximately follows an asymptotic χ2 distribution with degrees of freedom equal to the number of nonredundant tetrads tested.
Findings – The preliminary data analyses reveal that the data are nonnormal, which is not uncommon in social research. The CTA results reveal that the reflective (emergent) item orientation cannot be fully ruled out as the correct model representation. This contrasts with prior theoretical work, which would strongly support this construct being a formative representation.
Originality/value – Insofar as the authors are aware, no prior paper demonstrates with a worked example how CTA might fail to provide sound results. The paper makes a valuable contribution by discussing modelling philosophy and a procedure for directionality testing. The authors advocate the implementation of pre- and post-hoc tests as a key component of standard research practice.
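The vanishing-tetrad idea itself is easy to demonstrate: under a one-factor (reflective) model, population tetrads, i.e. differences of products of covariances, are zero. A sketch with simulated data (the loadings are arbitrary choices, not estimates from the paper):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100000
lam = np.array([0.8, 0.7, 0.6, 0.5])   # arbitrary factor loadings

# Data generated by a single common factor (reflective measurement model)
f = rng.normal(size=n)
x = np.outer(f, lam) + rng.normal(0, 0.5, size=(n, 4))
S = np.cov(x, rowvar=False)

# One model-implied vanishing tetrad: sigma_12*sigma_34 - sigma_13*sigma_24.
# For a one-factor model this is zero in the population, so the sample
# value should be close to zero
tetrad = S[0, 1] * S[2, 3] - S[0, 2] * S[1, 3]
print(tetrad)
```

CTA tests whether the set of nonredundant tetrads implied by the hypothesized measurement model is jointly zero; a formative model implies no such constraints, which is what gives the test its discriminating power.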

Abstract:

Balancing tests are diagnostics designed for use with propensity score methods, a widely used non-experimental approach in the evaluation literature. Such tests provide useful information on whether plausible counterfactuals have been created. Multiple balancing tests currently exist in the literature, but it is unclear which is the most useful. This article highlights the poor size properties of commonly employed balancing tests and attempts to shed some light on the link between the results of balancing tests and the bias of the evaluation estimator. The simulation results suggest that in scenarios where the conditional independence assumption holds, a permutation version of the balancing test described in Dehejia and Wahba (Rev Econ Stat 84:151–161, 2002) can be useful in applied studies. The proposed test has good size properties. In addition, the test appears to have good power for detecting a misspecification in the link function and some power for detecting an omission of relevant non-linear terms involving variables that are included at a lower order.
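A permutation-style balancing test can be sketched as follows: stratify on the propensity score, measure within-stratum treated-control differences in a covariate, and compare the observed statistic with its distribution under random relabelling of treatment within strata. This is a simplified illustration, not the exact test proposed in the article:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 1000
x = rng.normal(size=n)
treat = rng.binomial(1, 1 / (1 + np.exp(-x)))   # treatment depends on x
ps = 1 / (1 + np.exp(-x))   # true propensity score (in practice, a fitted logit)

# Stratify on propensity score quintiles
strata = np.digitize(ps, np.quantile(ps, [0.2, 0.4, 0.6, 0.8]))

def balance_stat(t):
    # Sum of absolute within-stratum treated-control mean differences in x
    s = 0.0
    for g in range(5):
        m = strata == g
        if t[m].any() and (1 - t[m]).any():
            s += abs(x[m][t[m] == 1].mean() - x[m][t[m] == 0].mean())
    return s

obs = balance_stat(treat)

# Permutation reference distribution: reshuffle treatment within strata
def permute_within(t):
    tp = t.copy()
    for g in range(5):
        m = strata == g
        tp[m] = rng.permutation(t[m])
    return tp

perm = np.array([balance_stat(permute_within(treat)) for _ in range(300)])
p_value = (perm >= obs).mean()
print(p_value)
```

The permutation reference distribution sidesteps the poor size properties of asymptotic balancing tests, since it is generated under the relabelling null rather than a distributional approximation.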