969 results for Synthetic control chart


Relevance: 80.00%

Publisher:

Abstract:

The data represent a cyclic pattern in a process variable, as displayed on a control chart, caused by periodic rotation of operators or by seasonal or environmental changes.
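
In the synthetic control-chart pattern literature, cyclic data of this kind are usually generated by adding a sinusoidal component to an in-control series. A minimal sketch in Python; the mean, noise level, amplitude, and period below are illustrative assumptions, not values taken from this dataset:

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(200)                         # observation index
    mu, sigma = 30.0, 0.5                      # assumed in-control mean and noise
    a, T = 1.5, 8                              # assumed cycle amplitude and period
    x = mu + rng.normal(0.0, sigma, t.size) + a * np.sin(2 * np.pi * t / T)
    ucl, lcl = mu + 3 * sigma, mu - 3 * sigma  # Shewhart 3-sigma limits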

Relevance: 80.00%

Publisher:

Abstract:

The data represent a systematic pattern in a process variable, as displayed on a control chart, caused by rotation of measurement gauges and by power fluctuations in manufacturing processes.
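
Systematic patterns are conventionally simulated with a component that alternates in sign from one observation to the next; a sketch under the same illustrative assumptions as above:

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(200)
    mu, sigma, d = 30.0, 0.5, 1.0   # assumed mean, noise, and shift magnitude
    x = mu + rng.normal(0.0, sigma, t.size) + d * (-1.0) ** t  # alternating +d / -d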

Relevance: 80.00%

Publisher:

Abstract:

The aim of this study is to evaluate the impact of creating financing lines for the Brazilian movie theater industry, one of the existing forms of government incentive to the sector. The disbursements evaluated, made by BNDES with funds from Procult and the FSA between 2007 and 2012, consist of long-term credit for building movie theaters, with below-market interest rates and a flexible collateral structure. The econometric methodology is the synthetic control method, as formalized by Abadie et al. (2010). Under this method, no positive contribution of the credit policy could be identified when the individual performance of the benefited exhibitors is compared with that of their respective synthetic controls, as measured by the evolution of the number of screens and of audience figures. In addition, a possible aggregate effect was tested, considering the evolution of the number of tickets sold per capita in Brazil; no positive contribution of the policy to this indicator could be identified either.
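
The core of the synthetic control method of Abadie et al. (2010) is choosing non-negative weights that sum to one so that the weighted control units reproduce the treated unit's pre-treatment characteristics. A minimal sketch of that step, with the predictor-weighting matrix fixed at the identity, which sidesteps the nested optimization of the full method:

    import numpy as np
    from scipy.optimize import minimize

    def synth_weights(X1, X0):
        """X1: (k,) pre-treatment predictors of the treated unit.
        X0: (k, J) the same predictors for the J control units.
        Returns w >= 0 with sum(w) = 1 minimizing ||X1 - X0 @ w||^2."""
        J = X0.shape[1]
        res = minimize(lambda w: np.sum((X1 - X0 @ w) ** 2),
                       np.full(J, 1.0 / J), method="SLSQP",
                       bounds=[(0.0, 1.0)] * J,
                       constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
        return res.x  # counterfactual outcome path: Y0 @ synth_weights(X1, X0)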

Relevance: 80.00%

Publisher:

Abstract:

Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work with few treated groups tend to over-reject the null hypothesis when the treated groups are small relative to the control groups, and to under-reject it when they are large. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing when there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we can model the heteroskedasticity of a linear combination of the errors. We show that this assumption can be satisfied without imposing strong assumptions on the errors in common DID applications. With many pre-treatment periods, we show that this assumption can be relaxed; instead, we provide an alternative inference method that relies on strict stationarity and ergodicity of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference methods to linear factor models with few treated groups, and we derive conditions under which a permutation test for the synthetic control estimator proposed by Abadie et al. (2010) is robust to heteroskedasticity, proposing a modification of the test statistic that provided a better heteroskedasticity correction in our simulations.
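
The mechanism is easy to verify: if individual errors are i.i.d. with variance sigma², the error of a group × time cell built by averaging n_g individuals has variance sigma²/n_g, so cells from small groups are noisier. A short simulation with illustrative group sizes:

    import numpy as np

    rng = np.random.default_rng(1)
    sigma = 1.0
    for n_g in (25, 100, 400):           # illustrative group sizes
        cell_errors = rng.normal(0.0, sigma, (5000, n_g)).mean(axis=1)
        print(n_g, cell_errors.var())    # shrinks roughly as sigma**2 / n_g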

Relevance: 80.00%

Publisher:

Abstract:

Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work with few treated groups tend to over-reject the null hypothesis when the treated groups are small relative to the control groups, and to under-reject it when they are large. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing when there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we know how the heteroskedasticity is generated, which is the case when it is generated by variation in the number of observations per group. With many pre-treatment periods, we show that this assumption can be relaxed; instead, we provide an alternative application of our method that relies on assumptions about stationarity and convergence of the moments of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference method to linear factor models when there are few treated groups. We also propose a permutation test for the synthetic control estimator that provided a better heteroskedasticity correction in our simulations than the test suggested by Abadie et al. (2010).
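
For reference, the baseline permutation (placebo) test that the proposed statistic modifies refits the synthetic control with each control unit treated in turn as a placebo and ranks the treated unit's post/pre RMSPE ratio among all units. A sketch of that baseline only; the heteroskedasticity correction itself is not reproduced here:

    import numpy as np

    def rmspe(gaps):
        return np.sqrt(np.mean(gaps ** 2))

    def placebo_pvalue(gaps_pre, gaps_post, treated):
        """gaps_pre, gaps_post: (units, periods) unit-minus-synthetic gaps,
        one row per unit, each fit treating that unit as if it were treated."""
        ratios = np.array([rmspe(gaps_post[j]) / rmspe(gaps_pre[j])
                           for j in range(gaps_pre.shape[0])])
        return np.mean(ratios >= ratios[treated])  # rank-based p-value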

Relevance: 80.00%

Publisher:

Abstract:

This Master's Thesis consists of one theoretical article and one empirical article in the field of Microeconometrics. The first chapter, "Synthetic Control Estimator: A Generalized Inference Procedure and Confidence Sets", contributes to the literature on inference techniques for the Synthetic Control Method. (For this chapter we also thank useful suggestions by Marinho Bertanha, Gabriel Cepaluni, Brigham Frandsen, Dalia Ghanem, Ricardo Masini, Marcela Mello, Áureo de Paula, Cristine Pinto, Edson Severnini and seminar participants at the São Paulo School of Economics, the California Econometrics Conference 2015 and the 37th Brazilian Meeting of Econometrics.) This methodology was proposed to answer questions involving counterfactuals when only one treated unit and a few control units are observed. Although the method has been applied in many empirical works, the formal theory behind its inference procedure is still an open question. To fill this gap, we make explicit the sufficient hypotheses that guarantee the adequacy of Fisher's Exact Hypothesis Testing Procedure for panel data, allowing us to test any sharp null hypothesis and, consequently, to propose a new way to estimate confidence sets for the Synthetic Control Estimator by inverting a test statistic; this is the first confidence set available when one has access only to finite-sample, aggregate-level data whose cross-sectional dimension may be larger than its time dimension. Moreover, we analyze the size and power of the proposed test in a Monte Carlo experiment and find that test statistics based on the synthetic control method outperform test statistics commonly used in the evaluation literature. We also extend our framework to the cases in which more than one outcome of interest is observed (simultaneous hypothesis testing) or more than one treated unit (pooled intervention effect), and to the presence of heteroskedasticity. The second chapter, "Free Economic Area of Manaus: An Impact Evaluation using the Synthetic Control Method", is an empirical article. We apply the synthetic control method to Brazilian city-level data from the 20th century to evaluate the economic impact of the Free Economic Area of Manaus (FEAM). We find that this enterprise zone had significant positive effects on real GDP per capita and on total services production per capita, but also a significant negative effect on total agricultural production per capita. Our results suggest that this subsidy policy achieved its goal of promoting regional economic growth, even though it may have provoked a misallocation of resources across economic sectors.
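
Inverting a sharp-null test into a confidence set works by scanning candidate constant effects, imposing each one on the treated unit's post-treatment outcomes, and keeping every value the test fails to reject. A schematic sketch: run_test is a placeholder standing in for the chapter's Fisher-style permutation test, and the grid and level are illustrative:

    import numpy as np

    def confidence_set(y_post_treated, run_test, grid, alpha=0.10):
        """Keep every constant effect c whose sharp null 'the treatment adds
        exactly c in every post period' is not rejected at level alpha.
        run_test(y_adjusted) -> p-value of the no-effect sharp null."""
        return np.array([c for c in grid
                         if run_test(y_post_treated - c) > alpha])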

Relevance: 80.00%

Publisher:

Abstract:

This Master's Thesis presents a case study on the use of Statistical Process Control (SPC) at the Núcleo de Pesquisas em Alimentos e Medicamentos (NUPLAM). The basic SPC tools were applied to the encapsulation process of tuberculostatic drugs, first with the objective of choosing which of two machine speeds performs the encapsulation best. Later, with the unit effectively operating, SPC was applied to characterize the variability of the process and, by tracking the process itself, to arrive at estimated control limits for future lots of tuberculostatics of equal dosage. Since special causes were detected acting on the process, a cause-and-effect diagram was built to identify, within each factor of the production process, possible causes of variation in the average capsule weight. The hypotheses raised can serve as a basis for deeper studies aimed at eliminating or reducing these interferences in the process. A study of the capacity of the process to meet the specifications was also carried out, and it showed that the process is not capable of meeting them. Nevertheless, NUPLAM has a genuine desire to implement SPC and thereby improve the quality already present in its medicines.
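
A capability study of the kind described compares the specification band for capsule weight with the natural 6-sigma spread of the process. A minimal sketch of the standard Cp/Cpk indices; the specification limits are placeholders, not NUPLAM's actual values:

    import numpy as np

    def capability(x, lsl, usl):
        """Cp: spec width over 6-sigma spread; Cpk: the same, penalizing
        off-centering. Values below ~1.33 conventionally flag an incapable
        process."""
        mu, s = x.mean(), x.std(ddof=1)
        cp = (usl - lsl) / (6 * s)
        cpk = min(usl - mu, mu - lsl) / (3 * s)
        return cp, cpk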

Relevance: 80.00%

Publisher:

Abstract:

This paper proposes a procedure to control on-line processes for attributes, using a Shewhart control chart with two limits (a warning limit and a control limit) and based on a sequence of h inspections. The inspection procedure follows Taguchi et al. (1989): if the number of non-conformities in an inspected item exceeds the control limit, the process is stopped and adjusted; the same action is taken if, in the last h inspections, every inspected item shows a number of non-conformities between the warning limit and the control limit. Inspection is destructive, so inspected items are discarded after inspection. Properties of an ergodic Markov chain are used to obtain an expression for the average cost per item, and four parameters are optimized: the sampling interval between inspections (m); the constant W that sets the warning limit; the constant C that sets the control limit, with W ≤ C; and the length of the sequence of inspections (h). Numerical examples illustrate the proposed procedure.
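
The average-cost expression rests on a standard property of ergodic chains: the long-run cost per item equals the stationary distribution weighted by the per-state costs. A generic sketch of that computation; the actual state space and transition probabilities depend on m, W, C, and h and are not reproduced here:

    import numpy as np

    def long_run_cost(P, cost):
        """P: (n, n) transition matrix of an ergodic chain; cost: (n,)
        per-state costs. Solves pi @ P = pi with sum(pi) = 1, then returns
        pi and the long-run average cost pi . cost."""
        n = P.shape[0]
        A = np.vstack([P.T - np.eye(n), np.ones(n)])
        b = np.append(np.zeros(n), 1.0)
        pi = np.linalg.lstsq(A, b, rcond=None)[0]
        return pi, pi @ cost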

Relevance: 80.00%

Publisher:

Abstract:

This paper proposes a new control chart to monitor a process mean by employing a combined npx-X control chart. The procedure consists of splitting the sample of size n into two sub-samples of sizes n1 and n2, determined by an optimization search, and sampling in two stages. In the first stage, the units of sub-sample n1 are evaluated by attributes and the result is plotted on the npx control chart. If this chart signals, the units of the second sub-sample are measured and the monitored statistic is plotted on the X control chart (the second stage). If both control charts signal, the process is stopped for adjustment. Not having to inspect all n items by variables may reduce both the cost and the time spent examining the sampled items. The performance of the proposal is compared with that of the individual X and npx control charts. The proposed procedure offers many competitive alternatives to the X control chart for a given sample size n and shift from the target mean: its average time to signal (ATS) is lower than the values calculated for an individual X control chart, showing that the combined control chart is an efficient tool for monitoring the process mean.
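
The paper obtains the ATS analytically, but the two-stage logic can be checked with a quick Monte Carlo sketch; every sample size and limit below is an illustrative assumption, and in the npx stage an item counts as non-conforming when it falls outside a go/no-go gauge band:

    import numpy as np

    rng = np.random.default_rng(2)

    def time_to_signal(shift, n1=10, n2=5, gauge=1.0, np_limit=3, k=3.0):
        """Sampling intervals until both stages signal, for a mean shift in
        sigma units. Stage 1: npx count of n1 gauged items; stage 2 (run only
        if stage 1 signals): mean of n2 measured items against k-sigma limits."""
        xbar_limit = k / np.sqrt(n2)
        for t in range(1, 100_000):
            stage1 = np.sum(np.abs(rng.normal(shift, 1.0, n1)) > gauge)
            if stage1 > np_limit and abs(rng.normal(shift, 1.0, n2).mean()) > xbar_limit:
                return t
        return 100_000

    ats_intervals = np.mean([time_to_signal(shift=1.0) for _ in range(2000)])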

Relevance: 80.00%

Publisher:

Abstract:

Statistical analysis of data is crucial in cephalometric investigations. There are certainly excellent examples of good statistical practice in the field, but some articles published worldwide have carried out inappropriate analyses. Objective: The purpose of this study was to show that, when the double records of each patient are traced on the same occasion, a control chart for the differences between readings needs to be drawn, and limits of agreement and coefficients of repeatability must be calculated. Material and methods: Data from a well-known paper in Orthodontics were used to illustrate common statistical practices in cephalometric investigations and to propose a new technique of analysis. Results: A scatter plot of the two radiograph readings and of the two model readings, with the respective regression lines, is shown. A control chart for the mean of the differences between radiograph readings was also obtained, and a coefficient of repeatability was calculated. Conclusions: A standard error that assumes the mean differences are zero, referred to in Orthodontics and Facial Orthopedics as the Dahlberg error, can be used to estimate precision only if accuracy has already been proven. When double readings are collected, limits of agreement and coefficients of repeatability must be calculated. A graph with the differences between readings should be presented and outliers discussed.
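
The quantities recommended above are all simple functions of the paired differences; a minimal sketch, using the common 1.96 Bland-Altman multiplier (some authors use 2):

    import numpy as np

    def double_reading_stats(r1, r2):
        """r1, r2: first and second readings of the same cases."""
        d = np.asarray(r1) - np.asarray(r2)
        bias = d.mean()
        sd = d.std(ddof=1)
        loa = (bias - 1.96 * sd, bias + 1.96 * sd)         # limits of agreement
        repeatability = 1.96 * sd                          # coefficient of repeatability
        dahlberg = np.sqrt(np.sum(d ** 2) / (2 * len(d)))  # valid only when bias ~ 0
        return bias, loa, repeatability, dahlberg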

Relevance: 80.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 80.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 80.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 80.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 80.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)