227 results for Pseudorandom Permutation
Abstract:
This dissertation presents the development of a new public-key encryption algorithm. The algorithm has two characteristics that make it unique, and these were taken as guides for its design. The first is that it is semantically secure: no polynomially bounded adversary can obtain any partial information about the encrypted content, or even decide whether two distinct ciphertexts correspond to the same plaintext. The second is that, for any plaintext length, it depends on a single security assumption: that the discrete logarithm in the group formed by the points of a prime-order elliptic curve is computationally intractable. This is achieved by ensuring that every part of the algorithm is reducible to this problem. A simple way of extending the algorithm so that it is secure against active attackers, in particular against adaptive chosen-ciphertext attacks, is also presented. To that end, and to keep the premise that the algorithm's security depends solely on the elliptic-curve discrete logarithm, a new cryptographic hash function whose security is based on the same problem is introduced.
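For context, the sketch below shows textbook ElGamal-style encryption over a toy elliptic-curve group, only to illustrate the kind of group in which the single hardness assumption above (the elliptic-curve discrete logarithm) lives. It is not the dissertation's algorithm: plain ElGamal does not by itself deliver the security claims described, and the parameters (a standard textbook curve over F_17 with a base point of prime order 19) are far too small for real use.

```python
# Illustrative sketch only: ElGamal-style encryption on a toy
# prime-order elliptic-curve group (textbook parameters, no real
# security). Not the dissertation's scheme.
import secrets

p, a = 17, 2            # curve y^2 = x^3 + 2x + 2 over F_17
G, n = (5, 1), 19       # base point of prime order 19

def ec_add(P, Q):
    """Add two points; None is the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, P):
    """Double-and-add scalar multiplication k*P."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

x = secrets.randbelow(n - 1) + 1   # secret key
X = ec_mul(x, G)                   # public key X = x*G

def encrypt(M, X):
    """Encrypt a message already encoded as a curve point M."""
    r = secrets.randbelow(n - 1) + 1
    return ec_mul(r, G), ec_add(M, ec_mul(r, X))

def decrypt(C1, C2, x):
    """Recover M = C2 - x*C1."""
    S = ec_mul(x, C1)
    neg = None if S is None else (S[0], (-S[1]) % p)
    return ec_add(C2, neg)

M = ec_mul(7, G)                   # sample message point
C1, C2 = encrypt(M, X)
assert decrypt(C1, C2, x) == M
```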
Abstract:
Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed for few treated groups tend to over-reject the null hypothesis when the treated groups are small relative to the control groups, and to under-reject it when they are large. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we can model the heteroskedasticity of a linear combination of the errors. We show that this assumption can be satisfied without imposing strong assumptions on the errors in common DID applications. With many pre-treatment periods, we show that this assumption can be relaxed; instead, we provide an alternative inference method that relies on strict stationarity and ergodicity of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference methods to linear factor models when there are few treated groups. We also derive conditions under which a permutation test for the synthetic control estimator proposed by Abadie et al. (2010) is robust to heteroskedasticity, and propose a modification of the test statistic that provided a better heteroskedasticity correction in our simulations.
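As an illustration of the setting this abstract analyzes (not of the authors' corrected inference method), the sketch below builds a group × time aggregate DID with a single treated group and unequal group sizes, then runs a simple placebo test in the spirit of Abadie et al. (2010). All group sizes and the data-generating process are invented for the example; note how size variation makes the group-level cell means heteroskedastic, which is exactly what breaks the exactness of such placebo tests.

```python
# Sketch: group x time aggregate DID with one treated group and a
# placebo test over control groups. Illustrative only; not the
# paper's corrected inference method.
import numpy as np

rng = np.random.default_rng(0)
G, T = 20, 2                        # groups and periods (pre/post)
sizes = rng.integers(30, 3000, G)   # unequal sizes -> heteroskedastic means

# Individual outcomes aggregated to group x time cell means (no true
# treatment effect, so rejections here are spurious).
y = np.array([[rng.normal(0, 1, s).mean() for _ in range(T)] for s in sizes])

def did_estimate(y, treated):
    delta = y[:, 1] - y[:, 0]       # within-group change
    return delta[treated].mean() - delta[~treated].mean()

treated = np.zeros(G, bool)
treated[0] = True                   # a single treated group
obs = did_estimate(y, treated)

# Placebo distribution: pretend each control group was treated instead.
placebos = np.array([did_estimate(y, np.eye(G, dtype=bool)[g])
                     for g in range(1, G)])
p_value = (np.abs(placebos) >= abs(obs)).mean()
print(f"DID estimate {obs:.4f}, placebo p-value {p_value:.3f}")
```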
Abstract:
Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed for few treated groups tend to over-reject the null hypothesis when the treated groups are small relative to the control groups, and to under-reject it when they are large. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we know how the heteroskedasticity is generated, which is the case when it is generated by variation in the number of observations per group. With many pre-treatment periods, we show that this assumption can be relaxed; instead, we provide an alternative application of our method that relies on assumptions about stationarity and convergence of the moments of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference method to linear factor models when there are few treated groups. We also propose a permutation test for the synthetic control estimator that provided a better heteroskedasticity correction in our simulations than the test suggested by Abadie et al. (2010).
Abstract:
This work presents a hybrid approach to the supplier selection problem in Supply Chain Management. We joined the decision-making philosophies of business school researchers and engineering researchers in order to address the problem more comprehensively. We used traditional multicriteria decision-making methods, such as AHP and TOPSIS, to evaluate alternatives according to the decision maker's preferences. Both techniques were modeled using definitions from Fuzzy Set Theory to handle imprecise data. Additionally, we proposed a multiobjective GRASP algorithm to perform order allocation among all pre-selected alternatives. These alternatives must be pre-qualified by the AHP and TOPSIS methods before entering the LCR. Our allocation procedure showed low CPU times for five pseudorandom instances containing up to 1000 alternatives, as well as good values for all objectives considered. We therefore consider the proposed model appropriate for solving the supplier selection problem in the SCM context. It can help decision makers reduce lead times, costs, and risks in their supply chains. According to decision makers, the proposed model can also improve a firm's efficiency with respect to business strategies, even when a large number of alternatives must be considered, unlike classical models in the purchasing literature.
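As a side note on the pre-qualification step, the sketch below implements the crisp (non-fuzzy) TOPSIS closeness coefficient; the work itself uses fuzzy AHP/TOPSIS variants, and the supplier scores, criteria, and weights here are hypothetical.

```python
# Sketch of crisp TOPSIS ranking (the dissertation uses fuzzy variants).
import numpy as np

def topsis(X, w, benefit):
    """X: alternatives x criteria matrix; w: weights; benefit: True where
    larger is better, False for cost criteria. Returns closeness in [0,1]."""
    R = X / np.linalg.norm(X, axis=0)        # vector-normalize each criterion
    V = R * w                                # weighted normalized matrix
    ideal = np.where(benefit, V.max(0), V.min(0))
    anti = np.where(benefit, V.min(0), V.max(0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)

# Hypothetical suppliers scored on cost, quality, and on-time delivery.
X = np.array([[200., 8., 0.90],
              [180., 6., 0.80],
              [220., 9., 0.95]])
w = np.array([0.5, 0.3, 0.2])
cc = topsis(X, w, np.array([False, True, True]))
print(np.argsort(-cc))    # suppliers ranked best-first by closeness
```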
Abstract:
Nonparametric simple-contrast estimates for one-way layouts based on Hodges-Lehmann estimators for two samples, and confidence intervals for all contrasts involving only two treatments, are found in the literature. Tests for such contrasts are performed from the distribution of the maximum of the rank sum between two treatments. For randomized block designs, simple contrast estimates based on Hodges-Lehmann estimators for one sample are presented. However, discussions concerning the significance levels of more complex contrast tests in nonparametric statistics are not well outlined. This work presents a methodology to obtain p-values for any type of contrast, based on the construction of the permutations required by each design model, using a C-language program for each design type. For small samples, all possible treatment configurations are generated in order to obtain the desired p-value. For large samples, a fixed number of random configurations is used. The program prompts for the contrast coefficients but does not assume their existence or orthogonality. For orthogonal contrasts, the decomposition of the value of the suitable statistic for each case is performed, and it is observed that the same procedure used in the parametric analysis of variance can be applied in the nonparametric case, that is, each of the orthogonal contrasts has a χ² distribution with one degree of freedom. The similarities between the p-values obtained for nonparametric contrasts and those obtained through approximations suggested in the literature are also discussed.
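The permutation scheme described above can be sketched for a one-way layout as follows, using treatment means in place of the rank statistics employed in the work; the data and contrast coefficients are invented for the example. For small samples one would enumerate all configurations instead of sampling them.

```python
# Sketch: permutation p-value for an arbitrary treatment contrast in a
# one-way layout (means instead of the work's rank statistics).
import numpy as np

rng = np.random.default_rng(1)

def contrast_pvalue(samples, coefs, n_perm=10_000):
    """samples: one 1-D array per treatment; coefs: contrast coefficients
    summing to zero, e.g. [1, 1, -2]."""
    sizes = [len(s) for s in samples]
    pooled = np.concatenate(samples)
    edges = np.cumsum(sizes)[:-1]

    def stat(x):
        means = [g.mean() for g in np.split(x, edges)]
        return abs(np.dot(coefs, means))

    obs = stat(pooled)
    hits = sum(stat(rng.permutation(pooled)) >= obs for _ in range(n_perm))
    return hits / n_perm

a = rng.normal(0.0, 1, 8)
b = rng.normal(0.0, 1, 8)
c = rng.normal(1.0, 1, 8)
print(contrast_pvalue([a, b, c], [1, 1, -2]))   # c versus a and b
```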
Abstract:
Objective: To investigate the effects of the standard (Class II) Balters bionator in growing patients with Class II malocclusion with mandibular retrusion by using morphometrics (thin-plate spline [TPS] analysis). Materials and Methods: Thirty-one Class II patients (17 male and 14 female) were treated with the Balters bionator (bionator group). Mean age at the start of treatment (T0) was 10.3 years, and 13 years at the end of treatment (T1). Mean treatment time was 2 years and 2 months. The control group consisted of 22 subjects (14 male and 8 female) with untreated Class II malocclusion. Mean age at T0 was 10.2 years and 12.2 years at T1. The observation period lasted 2 years on average. TPS analysis evaluated statistical differences (permutation tests) in craniofacial shape and size between the bionator and control groups. Results: Through TPS analysis (deformation grids), the bionator group showed significant shape changes in the mandible, describable as forward and downward mandibular displacement. The control group showed no statistically significant differences in the correction of Class II malocclusion. Conclusions: The bionator appliance is able to induce significant mandibular shape changes that lead to the correction of Class II dentoskeletal disharmony. © 2013 by The EH Angle Education and Research Foundation, Inc.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Graduate Program in Agronomy (Energy in Agriculture) - FCA
Abstract:
Graduate Program in Mathematics Education - IGCE
Abstract:
Malaria is a parasitic disease caused by protozoa of the genus Plasmodium, which complete their development cycle alternating between human hosts and mosquitoes of the genus Anopheles. Worldwide, it is a serious public health problem that mainly affects developing countries with tropical and subtropical climates. In Brazil, an estimated 99.5% of registered malaria cases occur in the Legal Amazon. Much of the disease's persistence in this region is due to biological and environmental factors that sustain high vector levels, as well as social factors that undermine control efforts. The objective of this work was therefore to study the epidemiological profile of malaria in Pará over a historical series (1999-2003), analyzing the influence of environmental and socioeconomic variables on case prevalence. To this end, the annual parasite index (IPA) of each municipality was calculated and, using a GIS, these data were georeferenced and studied temporally and spatially. Deforestation data for the state were analyzed through permutation regression in an attempt to explain the temporal variation of malaria. For the spatial study, the influence of the following variables on malaria prevalence was tested by multiple regression: temperature, rainfall, altitude, education, longevity, and income. In the temporal study, malaria showed a decreasing trend in the state; however, only 31 municipalities showed the same trend, none showed an increasing trend, and the remaining 112 municipalities were stable. In addition, many municipalities alternated between increases and decreases in cases over the series, indicating effective control measures but weak surveillance. In this context, deforestation appears to influence the malaria time series, with significant results in two (2001 and 2002) of the three years studied. In the spatial study, the final model adopted, despite its low explanatory power (R² = 0.31), included three significant variables: number of dry months, income, and education. The effect of the first two, however, is not direct, being a reflection of other activities. Despite the scale adopted and problems in data aggregation (data are only available by municipality), this work presents relevant results that can help health (or endemic disease) managers direct control actions to the areas identified as critical, acting on the most significant factors and thus making better use of the available human and material resources.
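As an illustration of the permutation-regression step mentioned above (not the study's actual model or data), the sketch below obtains a p-value for a regression slope by refitting after random permutations of the response; the deforestation and IPA series are simulated.

```python
# Sketch: regression by permutation. The slope's p-value comes from
# refits under random permutations of the response, not from
# normal-theory assumptions. Data are simulated, not the study's.
import numpy as np

rng = np.random.default_rng(2)

def perm_regression_pvalue(x, y, n_perm=10_000):
    def slope(yy):
        return np.polyfit(x, yy, 1)[0]   # OLS slope
    obs = slope(y)
    perms = np.array([slope(rng.permutation(y)) for _ in range(n_perm)])
    return (np.abs(perms) >= abs(obs)).mean()

# Hypothetical yearly deforestation rate vs. malaria incidence (IPA).
deforest = rng.uniform(0, 5, 12)
ipa = 2.0 * deforest + rng.normal(0, 3, 12)
print(perm_regression_pvalue(deforest, ipa))
```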
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Graduate Program in Materials Science - FEIS
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)