804 results for Null Hypothesis
Abstract:
This study aimed to investigate whether levels of empathy differ between university students in preparatory courses for professions classified as helping and non-helping. In recent years, evidence has accumulated that therapists who bring about constructive change in their clients show a high level of empathy. On the other hand, research has shown that people in need of help with personal problems have found relief through professionals other than therapists. Starting from this evidence, two questions were raised: (1) is the mean score of students in the helping-profession groups higher than the mean score of students in the non-helping-profession groups? (2) considering each profession individually, do the students' mean scores in each course differ significantly? To test the null hypotheses, the subjects took the Recognition Assessment-Empathy, and the resulting means were compared by means of a t test. As this is a preliminary study using students as subjects, the results should be viewed with the natural reservations, and it is recommended that the study be replicated with a sample of experienced professionals.
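The comparison described above is a two-sample t test on group means. A minimal sketch in Python, assuming the empathy scores are available as two numeric arrays (the data below are invented for illustration):

```python
# Two-sample t test comparing mean empathy scores of two groups
# (hypothetical data; the study used Recognition Assessment-Empathy scores)
import numpy as np
from scipy import stats

helping = np.array([34, 40, 38, 29, 41, 36, 33, 39])      # helping-profession students
non_helping = np.array([28, 31, 35, 27, 30, 33, 26, 32])  # non-helping students

t_stat, p_value = stats.ttest_ind(helping, non_helping)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
# Reject the null hypothesis of equal means if p falls below the chosen alpha
```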
Abstract:
This study investigates positive and/or negative stereotypes with respect to the variables sex, status, and color. The instrument consisted of eight stimulus drawings and four response drawings. The sample comprised 48 university subjects and 34 semi-literate subjects. The basic hypothesis tested was that there are positive and negative stereotypes related to sex, status, and color in situations with a positive connotation (work and school) and with a negative connotation (bar-going and vagrancy). The basic null hypothesis was rejected, in both groups, with respect to the variables sex and status, but not with respect to the variable color. Suggestions for future research were made.
Abstract:
The present study surveys some relevant aspects of deprivation in institutionalized children. It attempts an assessment of the verbal and motor delay that institutionalization entails, as well as of the corresponding recovery that adoption allows. The scarcity of comparable attempts in our setting to bring real-world data to light in this field motivated the present work. To this end, a literature review was first conducted to ground the work theoretically, both in developmental psychology and in the legislation governing adoption. An exploratory study was carried out, formulating and testing operational hypotheses derived from the general hypothesis that there is a significant difference in motor and verbal developmental delay, and in motor and verbal recovery, as a function of the child's age at admission and at adoption, as well as of the length of institutionalization, for children who were institutionalized and later adopted. The hypotheses were tested statistically using Pearson's correlation coefficient, adopting a significance level of 0.05 for rejection of the null hypothesis.
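The hypothesis test described above, Pearson's correlation with a 0.05 significance level, can be sketched as follows (variable names and data are hypothetical, for illustration only):

```python
# Pearson correlation between length of institutionalization and
# developmental recovery (hypothetical data for illustration)
import numpy as np
from scipy import stats

months_institutionalized = np.array([6, 12, 18, 24, 30, 36, 48, 60])
verbal_recovery_score = np.array([9.1, 8.4, 7.9, 7.1, 6.8, 6.0, 5.2, 4.9])

r, p_value = stats.pearsonr(months_institutionalized, verbal_recovery_score)
alpha = 0.05
print(f"r = {r:.3f}, p = {p_value:.4f}")
print("Reject H0" if p_value < alpha else "Fail to reject H0")
```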
Abstract:
The impacts of climate variation have been widely researched in world macroeconomics and in sectors such as agriculture, energy, and insurance. For the retail sector, however, a search of the main Brazilian journals returned no specific study. In more developed economies, weather-linked insurance products are widely traded, and this work also assesses the possibility of developing such a market in Brazil. The present study evaluated the impact of weather variation on retail sales over a period of approximately 18 months (564 days) in 253 Brazilian cities. The weather data (precipitation, temperature, wind speed, relative humidity, sunshine, and atmospheric pressure) were obtained from INMET (Instituto Nacional de Meteorologia) and matched against the transaction records of up to 206 thousand active customers in an unbalanced sample from a credit-card financial institution. Both databases have daily frequency. The econometric model used fixed-effects panel data to analyze the longitudinal data, estimated with the statistics/econometrics packages EViews (proprietary software from IHS) and R (free software). The null hypothesis of no short-run weather effect on customers' purchase decisions was rejected by the analyses. Assuming that retail consumer behavior does not change with the choice of payment method, rainfall has a negative impact on retail sales in local currency. The explanation lies in a reduction in the total number of transactions rather than in the average transaction value. Excluding the cities of São Paulo and Rio de Janeiro did not change the significance or relevance of the results. Rain, on the other hand, has a substitution effect between online and offline sales. When economic sectors were analyzed to check for differences in behavior between consumption and shopping, no change in the results was observed. When demographic variables were included, we found that women and older customers have a larger purchase history. Assessing the impact of rain on a given day over the following 6 to 29 days, we observed a significant effect on the number of transactions, but the effect on sales volume was not significant.
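The fixed-effects panel specification described above can be sketched in Python; the study itself used EViews and R, so this is only an illustrative analogue. The file name and column names are assumptions:

```python
# Fixed-effects panel regression of daily retail sales on weather, with
# city and day fixed effects absorbed as dummies (hypothetical columns)
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("sales_weather.csv")  # columns: city, day, sales, rain_mm, temp_c

model = smf.ols(
    "sales ~ rain_mm + temp_c + C(city) + C(day)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["city"]})  # cluster by city

print(model.params[["rain_mm", "temp_c"]])  # a negative rain coefficient is expected
```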
Abstract:
This dissertation presents three empirical essays on the decision patterns of judges in Brazil, based on novel, large-scale databases containing details of tens of thousands of judicial cases at the trial and appellate levels. The databases were collected by the author himself using web-scraping robots for mass data collection, applied to the case-tracking pages of state courts in Brazil (Paraná, Minas Gerais, and Santa Catarina). The first essay assesses, by means of a statistical model, the importance of extra-legal factors in the outcomes of lawsuits in the State Court of Paraná; that is, whether judges systematically favor the underprivileged party (beneficiaries of free legal aid). The second essay studies the relationship between the duration of civil cases at the trial level and the probability that the ruling is overturned, using data from the State Court of Minas Gerais. The goal is to assess whether there is a trade-off between the duration and the quality of rulings; put differently, whether there is a trade-off between observance of the right to due process and procedural speed. The last essay tests the hypothesis, in the context of criminal appeals and interlocutory appeals at the Court of Justice of Santa Catarina, that appellate judges' professional origins influence their decision patterns. That is, it tests the hypothesis that judges/rapporteurs drawn from the bar are more defendant-friendly ("garantistas"), and judges drawn from the public prosecutor's office less so, relative to their peers from the judicial career. The hypotheses are tested with a statistical model that explains the probability of an appellate decision favorable to the defendant as a function of the rapporteur's career origin, together with a set of characteristics of the case and of the deciding panel.
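The abstract does not give the exact specification, but a model for the probability of a decision favorable to the defendant as a function of the rapporteur's career origin would typically be a binary-outcome regression. A minimal logit sketch with hypothetical column names:

```python
# Logit: probability of an appellate decision favorable to the defendant
# as a function of the rapporteur's career origin (hypothetical columns;
# the essay's actual specification is not given in the abstract)
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("appeals.csv")  # columns: favorable, from_bar, from_prosecution, crime_type

model = smf.logit(
    "favorable ~ from_bar + from_prosecution + C(crime_type)", data=df
).fit()
print(model.summary())  # a positive from_bar coefficient would support the hypothesis
```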
Abstract:
This paper introduces a residual-based test for the null hypothesis of comovement between two processes with local persistence, which can be tested even in the presence of an endogenous regressor. It therefore fills an existing lacuna in econometrics, in which long-run relationships can also be tested when the dependent and independent variables do not have a unit root but do exhibit local persistence.
Abstract:
In this paper, we show that widely used stationarity tests such as the KPSS test have power close to size in the presence of time-varying unconditional variance. We propose a new test as a complement to the existing tests. Monte Carlo experiments show that the proposed test possesses the following characteristics: (i) in the presence of a unit root or a structural change in the mean, the proposed test is as powerful as the KPSS and other tests; (ii) in the presence of a changing variance, the traditional tests perform badly whereas the proposed test has high power compared to the existing tests; (iii) the proposed test has the same size as traditional stationarity tests under the null hypothesis of covariance stationarity. An application to daily observations of the return on the US Dollar/Euro exchange rate reveals instability in the unconditional variance when the entire sample is considered, but stability in sub-samples.
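The KPSS test that serves as the paper's baseline is available in statsmodels. A minimal sketch on simulated data with a break in the unconditional variance, the setting the paper studies:

```python
# KPSS test applied to a series with a break in unconditional variance
# (simulated data; illustrates the setting the paper studies)
import numpy as np
from statsmodels.tsa.stattools import kpss

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1.0, 500),   # low-variance regime
                    rng.normal(0, 3.0, 500)])  # high-variance regime

stat, p_value, lags, crit = kpss(x, regression="c", nlags="auto")
print(f"KPSS stat = {stat:.3f}, p = {p_value:.3f}")
# Under the null of covariance stationarity, large statistics reject;
# time-varying variance can distort the test's power, as the paper shows.
```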
Abstract:
This paper considers two-sided tests for the parameter of an endogenous variable in an instrumental variable (IV) model with heteroskedastic and autocorrelated errors. We develop the finite-sample theory of weighted-average power (WAP) tests with normal errors and a known long-run variance. We introduce two weights which are invariant to orthogonal transformations of the instruments; e.g., changing the order in which the instruments appear. While tests using the MM1 weight can be severely biased, optimal tests based on the MM2 weight are naturally two-sided when errors are homoskedastic. We propose two boundary conditions that yield two-sided tests whether errors are homoskedastic or not. The locally unbiased (LU) condition is related to the power around the null hypothesis and is a weaker requirement than unbiasedness. The strongly unbiased (SU) condition is more restrictive than LU, but the associated WAP tests are easier to implement. Several tests are SU in finite samples or asymptotically, including tests robust to weak IV (such as the Anderson-Rubin, score, conditional quasi-likelihood ratio, and I. Andrews' (2015) PI-CLC tests) and two-sided tests which are optimal when the sample size is large and instruments are strong. We refer to the WAP-SU tests based on our weights as MM1-SU and MM2-SU tests. Dropping the restrictive assumptions of normality and known variance, the theory is shown to remain valid at the cost of asymptotic approximations. The MM2-SU test is optimal under strong IV asymptotics, and outperforms other existing tests under weak IV asymptotics.
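Among the weak-IV-robust tests mentioned, the Anderson-Rubin test is simple enough to sketch. A minimal homoskedastic version in numpy, on simulated data; the paper's MM1-SU and MM2-SU WAP tests are considerably more involved:

```python
# Anderson-Rubin test of H0: beta = beta0 in y = x*beta + u, instruments Z
# (simulated data; homoskedastic version for illustration only)
import numpy as np
from scipy import stats

def anderson_rubin(y, x, Z, beta0):
    n, k = Z.shape
    e = y - x * beta0                            # residuals under the null
    Pe = Z @ np.linalg.solve(Z.T @ Z, Z.T @ e)   # projection onto instruments
    ar = (Pe @ Pe / k) / ((e - Pe) @ (e - Pe) / (n - k))
    p = 1 - stats.f.cdf(ar, k, n - k)            # F(k, n-k) under the null
    return ar, p

rng = np.random.default_rng(1)
n, k = 500, 4
Z = rng.normal(size=(n, k))
u = rng.normal(size=n)
x = Z @ np.full(k, 0.2) + 0.5 * u + rng.normal(size=n)  # endogenous regressor
y = 1.0 * x + u

ar, p = anderson_rubin(y, x, Z, beta0=1.0)       # testing the true value
print(f"AR = {ar:.3f}, p = {p:.3f}")
```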
Abstract:
Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group x time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we can model the heteroskedasticity of a linear combination of the errors. We show that this assumption can be satisfied without imposing strong assumptions on the errors in common DID applications. With many pre-treatment periods, we show that this assumption can be relaxed; instead, we provide an alternative inference method that relies on strict stationarity and ergodicity of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference methods to linear factor models when there are few treated groups. We also derive conditions under which a permutation test for the synthetic control estimator proposed by Abadie et al. (2010) is robust to heteroskedasticity, and propose a modification of the test statistic that provided a better heteroskedasticity correction in our simulations.
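The mechanism described above, where unequal group sizes generate heteroskedasticity in the group x time aggregate model, can be illustrated with a small simulation. This sketch aggregates individual data to group-time means and runs the aggregate DID regression; all data and group sizes are invented:

```python
# Group x time aggregate DID with unequal group sizes: smaller groups have
# noisier cell means, hence heteroskedastic aggregate errors
# (all data simulated; no true treatment effect)
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
sizes = {"treated": 50, "c1": 2000, "c2": 1500, "c3": 1800, "c4": 2200}

rows = []
for g, n in sizes.items():
    for t in (0, 1):
        y = rng.normal(0, 1, n)  # individual outcomes under the null
        rows.append({"group": g, "time": t, "ybar": y.mean(),
                     "treat_post": int(g == "treated" and t == 1)})
agg = pd.DataFrame(rows)

did = smf.ols("ybar ~ treat_post + C(group) + C(time)", data=agg).fit()
print(did.summary().tables[1])
# Var(ybar) = sigma^2 / n_g differs across groups; inference that ignores
# this over-rejects when the treated group is small relative to the controls.
```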
Abstract:
In this work we focus on tests for the parameter of an endogenous variable in a weakly identified instrumental variable regression model. We propose a new unbiasedness restriction for the weighted average power (WAP) tests introduced by Moreira and Moreira (2013). This new boundary condition is motivated by score efficiency under strong identification. It allows reducing the computational costs of WAP tests by replacing the strongly unbiased condition. The latter restriction imposes, under the null hypothesis, that the test be uncorrelated with a given statistic whose dimension equals the number of instruments. The new proposed boundary condition only imposes that the test be uncorrelated with a linear combination of that statistic. WAP tests under both restrictions are found to perform similarly numerically. We apply the different tests discussed to an empirical example: using data from Yogo (2004), we assess the effect of weak instruments on the estimation of the elasticity of intertemporal substitution in a CCAPM model.
Abstract:
Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group x time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we know how the heteroskedasticity is generated, which is the case when it is generated by variation in the number of observations per group. With many pre-treatment periods, we show that this assumption can be relaxed; instead, we provide an alternative application of our method that relies on assumptions about stationarity and convergence of the moments of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference method to linear factor models when there are few treated groups. We also propose a permutation test for the synthetic control estimator that provided a better heteroskedasticity correction in our simulations than the test suggested by Abadie et al. (2010).
Abstract:
This Master's thesis consists of one theoretical article and one empirical article in the field of microeconometrics. The first chapter, called "Synthetic Control Estimator: A Generalized Inference Procedure and Confidence Sets", contributes to the literature on inference techniques for the Synthetic Control Method. This methodology was proposed to answer questions involving counterfactuals when only one treated unit and a few control units are observed. Although the method has been applied in many empirical works, the formal theory behind its inference procedure is still an open question. To fill this lacuna, we make explicit the sufficient hypotheses that guarantee the adequacy of Fisher's exact hypothesis testing procedure for panel data, allowing us to test any sharp null hypothesis and, consequently, to propose a new way to estimate confidence sets for the synthetic control estimator by inverting a test statistic: the first confidence set available when we have access only to finite-sample, aggregate-level data whose cross-sectional dimension may be larger than its time dimension. Moreover, we analyze the size and power of the proposed test in a Monte Carlo experiment and find that test statistics that use the synthetic control method outperform test statistics commonly used in the evaluation literature. We also extend our framework to the cases where we observe more than one outcome of interest (simultaneous hypothesis testing) or more than one treated unit (pooled intervention effect), and where heteroskedasticity is present. The second chapter, called "Free Economic Area of Manaus: An Impact Evaluation using the Synthetic Control Method", is an empirical article. We apply the synthetic control method to Brazilian city-level data during the 20th century in order to evaluate the economic impact of the Free Economic Area of Manaus (FEAM). We find that this enterprise zone had significant positive effects on real GDP per capita and total services production per capita, but also significant negative effects on total agricultural production per capita. Our results suggest that this subsidy policy achieved its goal of promoting regional economic growth, even though it may have provoked misallocation of resources among economic sectors.
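Fisher's exact procedure for a sharp null in this setting reduces to a permutation test: recompute the test statistic under every placebo assignment of the treatment and take the p-value as the rank of the actual statistic in the placebo distribution. A minimal sketch with a simple pre/post gap statistic (simulated data; the chapter's statistics are built from synthetic control fits):

```python
# Fisher-style permutation test over placebo assignments of treatment
# (simulated data; the chapter uses synthetic-control-based statistics)
import numpy as np

rng = np.random.default_rng(3)
n_units, t0, t1 = 20, 30, 10                   # 1 treated unit, 19 controls
Y = rng.normal(0, 1, (n_units, t0 + t1))
Y[0, t0:] += 1.5                               # treatment effect on unit 0

def post_gap(y):
    return abs(y[t0:].mean() - y[:t0].mean())  # simple pre/post statistic

stats_all = np.array([post_gap(Y[i]) for i in range(n_units)])
# p-value: share of units whose placebo statistic is at least as extreme
p_value = (stats_all >= stats_all[0]).mean()
print(f"permutation p-value = {p_value:.3f}")
```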
Abstract:
Researchers often rely on the t-statistic to make inference on parameters in statistical models, and it is common practice to obtain critical values by simulation techniques. This paper proposes a novel numerical method to obtain an approximately similar test. This test rejects the null hypothesis when the test statistic is larger than a critical value function (CVF) of the data. We illustrate this procedure when regressors are highly persistent, a case in which commonly used simulation methods encounter difficulties controlling size uniformly. Our approach works satisfactorily, controls size, and yields a test which outperforms the two other known similar tests.
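The common practice the paper starts from, obtaining a critical value by simulation, can be sketched as follows; the constant critical value is what the paper's critical value function generalizes. All quantities below are illustrative:

```python
# Simulating critical values for a test statistic, the common practice
# the paper improves upon (purely illustrative setup)
import numpy as np

rng = np.random.default_rng(4)
n, n_sims, alpha = 200, 5000, 0.05

def t_stat(x):
    return np.sqrt(len(x)) * x.mean() / x.std(ddof=1)

null_draws = np.array([t_stat(rng.normal(size=n)) for _ in range(n_sims)])
crit = np.quantile(np.abs(null_draws), 1 - alpha)
print(f"simulated 5% critical value: {crit:.3f}")
# The paper replaces the constant crit with a critical value function
# CVF(data), chosen numerically so that the test is approximately similar.
```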
Abstract:
This work studies the fluctuation structure of the physical properties recorded in oil-well logs, using Detrended Fluctuation Analysis (DFA) as its technique. The study covered 54 oil wells in the Namorado field (Campo de Namorado), located in the Campos Basin, Rio de Janeiro. Five logs were studied: sonic, density, porosity, resistivity, and gamma rays. For most of the logs, DFA analyses were available in the literature, while the sonic log was estimated with the aid of a standard algorithm. The DFA exponents of the five logs were compared pairwise using linear correlation, yielding 10 comparisons. Our null hypothesis is that the DFA values for the various physical properties are independent. The main result indicates no refutation of the null hypothesis; that is, the fluctuations observed by DFA in the logs do not have a universal character, and in general each quantity displays a fluctuation structure of its own. Of the ten correlations studied, only the density and sonic logs showed a significant correlation (p > 0.05). Finally, these results indicate that DFA data should be used with caution because, in general, geological analyses based on DFA of different logs can lead to disparate conclusions.
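DFA itself is compact enough to sketch: integrate the demeaned series, split it into windows, detrend each window, and read the scaling exponent off a log-log regression of fluctuation against window size. A minimal version assuming linear detrending and log-spaced window sizes:

```python
# Detrended Fluctuation Analysis (DFA): the scaling exponent alpha is the
# slope of log F(s) vs log s (minimal version with linear detrending)
import numpy as np

def dfa_exponent(x, scales):
    y = np.cumsum(x - np.mean(x))              # integrated profile
    flucts = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        ms = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)       # local linear trend
            ms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(ms)))    # fluctuation F(s)
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha

rng = np.random.default_rng(5)
x = rng.normal(size=4096)                      # white noise: alpha near 0.5
scales = np.unique(np.logspace(2, 3, 12).astype(int))
print(f"DFA exponent: {dfa_exponent(x, scales):.2f}")
```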
Abstract:
Body image is the picture of our bodies built in our minds, and the degree of dissatisfaction with it is often associated with risk factors identified by anthropometric measures. The purpose of this descriptive study was to evaluate the risk factors, as indicated by morphological and functional variables, associated with self-image perception in middle-aged walkers in the south zone of the city of Natal. One hundred and thirty volunteers were evaluated in four groups according to gender and age group. The measurements used were: Stunkard's self-image perception questionnaire of nine numbered silhouettes for each gender; a scale equipped with a stadiometer for body mass (kg) and stature (m); the body mass index (kg/m²), calculated from body mass and stature and classified according to the norms of the National Institutes of Health (2000); systolic and diastolic blood pressure, measured with an electronic digital device (DIGITRONIC); and the waist-to-hip ratio (WHR), measured with a metal anthropometric tape. One-way analysis of variance (ANOVA) with Tukey's post hoc test and Spearman's correlation for the nonparametric data were used, adopting p ≤ 0.05 for rejection of the null hypothesis. The body mass index indicated high risk factors in all the groups studied. In all groups, a desire to reduce one's silhouette was recorded. In the older male group, body weight was lower than in the younger male group, while in the female groups the inverse occurred. Self-image perception was associated with the waist-to-hip ratio classification in women aged 50 to 59 years and with the body mass index classification in all groups. No significant associations were found between the classification of systolic and diastolic blood pressure and self-image perception. This thesis is interdisciplinary in character, and its contents have application in the fields of Physical Education, Medicine, Physiotherapy, and Nursing.
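The test battery described above (one-way ANOVA, Tukey's post hoc test, Spearman's correlation at p ≤ 0.05) can be sketched with scipy and statsmodels; all data below are invented:

```python
# One-way ANOVA with Tukey post hoc and Spearman correlation, the test
# battery described above (hypothetical BMI-like data for illustration)
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(6)
groups = {g: rng.normal(mu, 2.5, 30) for g, mu in
          [("M40-49", 26.0), ("M50-59", 27.5), ("F40-49", 25.0), ("F50-59", 27.0)]}

f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), 30)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))  # pairwise group contrasts

# Spearman correlation for nonparametric data (e.g., silhouette choice vs BMI)
rho, p_rho = stats.spearmanr(rng.integers(1, 10, 120), values)
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.4f}")
```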