931 results for Permutation Polynomial


Relevance: 10.00%

Abstract:

One may construct, for any function on the integers, an irreducible module of level zero for affine sl(2) using the values of the function as structure constants. The modules constructed using exponential-polynomial functions realize the irreducible modules with finite-dimensional weight spaces in the category Õ of Chari. In this work, an expression for the formal character of such a module is derived using the highest-weight theory of truncations of the loop algebra.

Relevance: 10.00%

Abstract:

Most water distribution systems (WDSs) need rehabilitation because aging infrastructure leads to decreasing capacity, increasing leakage and, consequently, low performance of the WDS. An appropriate strategy specifying the location and timing of pipeline rehabilitation in a WDS under a limited budget is the main challenge, and it has been addressed frequently by researchers and practitioners. On the other hand, the selection of appropriate rehabilitation techniques and material types is another key issue which has yet to be addressed properly. The latter can affect the environmental impacts of a rehabilitation strategy, bearing on the challenges of global-warming mitigation and the consequent climate change. This paper presents a multi-objective optimization model for rehabilitation strategies in WDSs that addresses the abovementioned criteria, focused mainly on greenhouse gas (GHG) emissions, either directly from fossil fuel and electricity or indirectly from the embodied energy of materials. The objective functions are to minimise: (1) the total cost of rehabilitation, including capital and operational costs; (2) the amount of leakage; (3) GHG emissions. The Pareto-optimal front containing the optimal solutions is determined using the Non-dominated Sorting Genetic Algorithm NSGA-II. Decision variables in this optimisation problem are classified into groups: (1) the percentage proportion of each rehabilitation technique applied each year; (2) the material types of new pipeline used for rehabilitation each year. The rehabilitation techniques considered here include replacement, rehabilitation and lining, cleaning, and pipe duplication. The developed model is demonstrated through its application to the Mahalat WDS, located in the central part of Iran. The rehabilitation strategy is analysed over a 40-year planning horizon. A number of conventional techniques for selecting pipes for rehabilitation are also analysed. The results show that the optimal rehabilitation strategy considering GHG emissions successfully reduces total expenses and efficiently decreases the amount of leakage from the WDS whilst meeting environmental criteria.
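
As a rough illustration of the non-dominated sorting idea at the core of NSGA-II, the minimal Python sketch below implements the Pareto-dominance test and extracts the first non-dominated front for three minimised objectives (cost, leakage, GHG emissions); the candidate objective values are hypothetical placeholders, not results from the study, and a real application would use a full NSGA-II implementation rather than this fragment.

```python
# Minimal sketch: Pareto dominance and the first non-dominated front
# for a minimisation problem with three objectives (cost, leakage, GHG).
# Objective values below are hypothetical placeholders.

def dominates(a, b):
    """True if solution a is at least as good as b in every objective
    and strictly better in at least one (all objectives minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_front(solutions):
    """Return the subset of solutions not dominated by any other solution."""
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other is not s)]

if __name__ == "__main__":
    # (total cost, leakage, GHG emissions) for four candidate rehabilitation plans
    candidates = [(10.0, 5.0, 3.0), (8.0, 6.0, 3.5), (9.0, 4.0, 4.0), (12.0, 7.0, 5.0)]
    print(first_front(candidates))   # the last plan is dominated and is excluded
```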

Relevance: 10.00%

Abstract:

In this work we analyze stochastic processes with polynomial (also called hyperbolic) decay of the autocorrelation function. Our study focuses on the classes of ARFIMA processes and of processes obtained from iterations of the Manneville-Pomeau transformation. The main goals are to compare several estimation methods for the fractional parameter of the ARFIMA process, under both stationarity and non-stationarity, and, in addition, to obtain similar results for the parameter of the Manneville-Pomeau process. Among the various estimation methods for the parameters of these two processes, we highlight the one based on wavelet theory, which showed the best performance.
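
For reference, the polynomial (hyperbolic) decay referred to above is the standard asymptotic behaviour of the autocorrelation function of a stationary ARFIMA(p, d, q) process with fractional parameter 0 < d < 1/2; the constant C depends on the short-memory parameters:

```latex
\[
\rho(k) \;\sim\; C\, k^{2d-1} \quad\text{as } k \to \infty,
\qquad 0 < d < \tfrac{1}{2},
\]
```

so the autocorrelations are not absolutely summable and the process exhibits long memory.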

Relevance: 10.00%

Abstract:

Parametric term structure models have been successfully applied to numerous problems in fixed-income markets, including pricing, hedging, risk management, and the study of monetary policy implications. In turn, dynamic term structure models, equipped with stronger economic structure, have mainly been adopted to price derivatives and explain empirical stylized facts. In this paper, we combine flavors of these two classes of models to test whether no-arbitrage restrictions affect forecasting. We construct cross-section (arbitrage-allowing) and arbitrage-free versions of a parametric polynomial model to analyze how well they predict out-of-sample interest rates. Based on U.S. Treasury yield data, we find that no-arbitrage restrictions significantly improve forecasts. Arbitrage-free versions achieve overall smaller biases and root mean square errors for most maturities and forecasting horizons. Furthermore, a decomposition of forecasts into forward rates and holding-return premia indicates that the superior performance of the no-arbitrage versions is due to better identification of the bond risk premium.
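
The cross-section versions mentioned above fit a parametric curve to the observed yields at each date. As a simplified, hypothetical illustration (not the exact specification of the paper), the sketch below fits a cubic polynomial in maturity to a vector of yields by ordinary least squares and reports the in-sample root mean square error; the maturities and yields are placeholder values.

```python
# Simplified sketch: cross-sectional cubic polynomial fit of a yield curve.
# Maturities (in years) and yields (in %) are hypothetical placeholders.
import numpy as np

maturities = np.array([0.25, 0.5, 1.0, 2.0, 3.0, 5.0, 7.0, 10.0])
yields = np.array([4.10, 4.25, 4.40, 4.70, 4.85, 5.05, 5.15, 5.25])

coeffs = np.polyfit(maturities, yields, deg=3)    # OLS fit of a cubic in maturity
fitted = np.polyval(coeffs, maturities)
rmse = np.sqrt(np.mean((yields - fitted) ** 2))

print("coefficients (highest degree first):", coeffs)
print("in-sample RMSE (percentage points):", round(rmse, 4))
```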

Relevance: 10.00%

Abstract:

Point pattern matching in Euclidean spaces is one of the fundamental problems in pattern recognition, with applications ranging from computer vision to computational chemistry. Whenever two complex patterns are encoded by two sets of points identifying their key features, their comparison can be seen as a point pattern matching problem. This work proposes a single approach to both exact and inexact point set matching in Euclidean spaces of arbitrary dimension. In the case of exact matching, the approach is guaranteed to find an optimal solution. For inexact matching (when noise is involved), experimental results confirm the validity of the approach. We start by regarding point pattern matching as a weighted graph matching problem. We then formulate the weighted graph matching problem as one of Bayesian inference in a probabilistic graphical model. By exploiting the existence of fundamental constraints in patterns embedded in Euclidean spaces, we prove that for exact point set matching a simple graphical model is equivalent to the full model. It is possible to show that exact probabilistic inference in this simple model has polynomial time complexity with respect to the number of elements in the patterns to be matched. This gives rise to a technique that, for exact matching, provably finds a global optimum in polynomial time for any dimensionality of the underlying Euclidean space. Computational experiments comparing this technique with well-known probabilistic relaxation labeling show a significant performance improvement for inexact matching. The proposed approach is significantly more robust as the sizes of the involved patterns grow. In the absence of noise, the results are always perfect.
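
The key Euclidean constraint exploited above is that an exact match must preserve all pairwise distances between corresponding points. The sketch below only illustrates that consistency check for a given candidate correspondence; it is not the paper's graphical-model inference, which is what achieves polynomial-time optimal matching.

```python
# Minimal sketch: verify that a candidate correspondence between two point sets
# preserves all pairwise Euclidean distances (the defining property of an exact match).
import numpy as np

def preserves_distances(P, Q, mapping, tol=1e-9):
    """P, Q: (n, d) arrays of points; mapping[i] is the index in Q assigned to point i of P."""
    for i in range(len(P)):
        for j in range(i + 1, len(P)):
            dP = np.linalg.norm(P[i] - P[j])
            dQ = np.linalg.norm(Q[mapping[i]] - Q[mapping[j]])
            if abs(dP - dQ) > tol:
                return False
    return True

P = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
Q = np.array([[2.0, 2.0], [2.0, 3.0], [1.0, 2.0]])   # P rotated 90 degrees and translated
print(preserves_distances(P, Q, mapping=[0, 1, 2]))   # True
```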

Relevance: 10.00%

Abstract:

This final-year project addresses the relationship between the specifications of alkyd resins and of paints prepared with these resins, focusing on solids content and viscosity, using design of experiments, response surface methodology and regression analysis. The main objective is to study the ideal specification limits for the alkyd resin, so that paint batches already exhibit in-specification properties at the end of processing, reducing the incidence of production batches that require adjustment. As a consequence, there is less rework, a shorter manufacturing lead time, better product cost adequacy, higher intrinsic quality and greater confidence in the development, production and quality-control system. First, a literature review was carried out on paint and alkyd resin technology, quality-control concepts, design of experiments, response surface analysis and regression analysis. Next, a case study was conducted at Killing S.A. Tintas e Adesivos, at the plant located in the city of Novo Hamburgo. The experimental results yielded valid polynomial regression models for the evaluated properties. The solids content and Ford cup #2 viscosity of the A+B mixture were taken as parameters for analysing the specification limits of the alkyd resin, and it was shown that the variability currently allowed is excessive. Applying the regression models indicated new, narrower specification limits for the alkyd resin, making it feasible to obtain paints with in-specification properties.
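
As a rough, hypothetical illustration of the kind of response-surface model used in such studies, the sketch below fits a second-order polynomial y = b0 + b1·x1 + b2·x2 + b11·x1² + b22·x2² + b12·x1·x2 by ordinary least squares over a face-centred central composite design; the coded factor levels and responses are placeholder values, not the project's data.

```python
# Hypothetical sketch: fit a second-order (quadratic) response-surface model
# over a face-centred central composite design in two coded factors.
import numpy as np

# Coded levels (-1, 0, +1) for two factors, e.g. resin solids content and resin viscosity.
x1 = np.array([-1.0, -1.0,  1.0, 1.0, -1.0, 1.0, 0.0, 0.0, 0.0])
x2 = np.array([-1.0,  1.0, -1.0, 1.0,  0.0, 0.0, -1.0, 1.0, 0.0])
y  = np.array([62.1, 64.0, 63.2, 66.5, 63.0, 65.2, 63.5, 65.0, 64.8])  # placeholder responses

# Design matrix of the quadratic model and least-squares fit of its coefficients.
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["b0", "b1", "b2", "b11", "b22", "b12"], np.round(coef, 3))))
```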

Relevance: 10.00%

Abstract:

The present work studies the influence of macroeconomic factors on credit risk in installment auto loan operations. The study is based on 4,887 credit operations drawn from the Credit Risk Information System (SCR) held by the Brazilian Central Bank. Using survival analysis applied to interval-censored data, we obtain a model for the hazard function and propose a method for calculating the probability of default over a twelve-month period. Our results indicate a strong time dependence of the hazard function, captured by a polynomial approximation, in all estimated models. The model with the best Akaike Information Criterion estimates a positive effect of 0.07% for males over the baseline hazard function and of 0.011% for an increase of ten basis points in the operation's annual interest rate; moreover, for each R$ 1,000.00 added to the installment the hazard function suffers a negative effect of 0.28%, with an estimated increase of 0.0069% for the same amount added to the contracted value of the operation. For the macroeconomic factors, we find statistically significant effects for the unemployment rate (-0.12%), for the one-period lag of the unemployment rate (0.12%), for the first difference of the industrial production index (-0.008%), for the one-period lag of the inflation rate (-0.13%) and for the exchange rate (-0.23%). We do not find statistically significant results for the other tested variables.
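
The twelve-month probability of default follows from an estimated hazard in the standard survival-analysis way; writing h(t) for the hazard function and S(t) for the survival function, the textbook relation (stated here for reference, not necessarily the exact expression used in the study) is

```latex
\[
S(t) = \exp\!\Bigl(-\int_{0}^{t} h(u)\,du\Bigr),
\qquad
\mathrm{PD}_{12} = 1 - S(12) = 1 - \exp\!\Bigl(-\int_{0}^{12} h(u)\,du\Bigr).
\]
```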

Relevance: 10.00%

Abstract:

It is well known that cointegration between the levels of two variables (labeled Yt and yt in this paper) is a necessary condition to assess the empirical validity of a present-value model (PV and PVM, respectively, hereafter) linking them. The work on cointegration has been so prevalent that it is often overlooked that another necessary condition for the PVM to hold is that the forecast error entailed by the model is orthogonal to the past. The basis of this result is the use of rational expectations in forecasting future values of variables in the PVM. If this condition fails, the present-value equation will not be valid, since it will contain an additional term capturing the (non-zero) conditional expected value of future error terms. Our article has a few novel contributions, but two stand out. First, in testing for PVMs, we advise splitting the restrictions implied by PV relationships into orthogonality conditions (or reduced-rank restrictions) before carrying out additional tests on the values of the parameters. We show that PV relationships entail a weak-form common feature relationship as in Hecq, Palm, and Urbain (2006) and in Athanasopoulos, Guillén, Issler and Vahid (2011), and also a polynomial serial-correlation common feature relationship as in Cubadda and Hecq (2001); these represent restrictions on dynamic models that allow several tests for the existence of PV relationships to be used. Because these relationships occur mostly with financial data, we propose tests based on generalized method of moments (GMM) estimates, where it is straightforward to propose robust tests in the presence of heteroskedasticity. We also propose a robust Wald test developed to investigate the presence of reduced-rank models. Their performance is evaluated in a Monte Carlo exercise. Second, in the context of asset pricing, we propose applying a permanent-transitory (PT) decomposition based on Beveridge and Nelson (1981), which focuses on extracting the long-run component of asset prices, a key concept in modern financial theory as discussed in Alvarez and Jermann (2005), Hansen and Scheinkman (2009), and Nieuwerburgh, Lustig, and Verdelhan (2010). Here again we can exploit the results developed in the common cycle literature to easily extract permanent and transitory components under both long- and short-run restrictions. The techniques discussed herein are applied to long-span annual data on long- and short-term interest rates and on prices and dividends for the U.S. economy. In both applications we do not reject the existence of a common cyclical feature vector linking these two series. Extracting the long-run component shows the usefulness of our approach and highlights the presence of asset-pricing bubbles.
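
For concreteness, a generic present-value relation and the orthogonality condition it implies under rational expectations can be written as follows (a schematic statement, not the paper's exact specification): with discount factor ψ, proportionality coefficient θ, and time-t information set I_t,

```latex
\[
Y_t = \theta \sum_{i=0}^{\infty} \psi^{i}\, \mathrm{E}_t\, y_{t+i},
\qquad
\mathrm{E}\!\left[\Bigl(Y_t - \theta \sum_{i=0}^{\infty} \psi^{i}\, y_{t+i}\Bigr) z_t\right] = 0
\quad \text{for any } z_t \in \mathcal{I}_t ,
\]
```

so a violation of the second condition shows up as a rejected GMM moment restriction.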

Relevance: 10.00%

Abstract:

This thesis consists of three articles that analyse the term structure of interest rates using different data sets and models. Chapter 1 proposes a parametric interest-rate model that allows for segmentation and local shocks in the term structure. Using U.S. Treasury data, two versions of this segmented model are implemented. Based on a sequence of 142 forecasting experiments, the proposed models are compared with benchmarks and are found to perform better in the out-of-sample forecasting results, especially for short maturities and for the 12-month forecasting horizon. Chapter 2 adds no-arbitrage restrictions when estimating a dynamic Gaussian polynomial term-structure model for the Brazilian interest-rate market. This article proposes an important approximation for the time series of the term-structure risk factors, which allows the interest-rate risk premium to be extracted without optimizing a full dynamic model. This methodology has the advantage of being easy to implement and yields a good approximation for the term-structure risk premium, which can be used in different applications. Chapter 3 models the joint dynamics of nominal and real rates using a no-arbitrage affine term-structure model with macroeconomic variables, in order to decompose the difference between nominal and real rates into an inflation risk premium and inflation expectations for the U.S. market. A version without macroeconomic variables and a version with these variables are implemented; the inflation risk premia obtained are small and stable in the period analysed, but differ when the two models are compared.

Relevance: 10.00%

Abstract:

The process of transforming hides into leather involves a complex sequence of chemical reactions and mechanical operations, in which tanning is a fundamental stage, since it gives the hide characteristics such as quality, hydrothermal stability and excellent properties for use. Basic trivalent chromium sulphate is the tanning agent predominantly employed in hide tanning throughout the world. It is produced from sodium chromate, industrially obtained from chromium ore. Considerable amounts of chromium-containing solid waste are generated by the leather and footwear industries. These wastes have been a constant source of concern, since they are considered hazardous owing to the presence of chromium. Incineration of these wastes is an important alternative to be considered, given its characteristics of mass and volume reduction and the possibility of recovering the thermal energy of the combustion gases. Incineration of the wastes from the leather and footwear industries gives rise to ashes containing about 40% chromium, which can be submitted to a recovery process. This work presents the results of research on the use of the ashes from the incineration of solid wastes of the leather and footwear industries for the production of sodium chromate(VI). In planning and conducting the experiments, 2^k factorial design, response surface methodology and analysis of variance were used to evaluate the production of sodium chromate(VI). The factors investigated were: temperature, heating rate, reaction time, air flow rate and amount of dolomite. Among the selected variables, temperature and heating rate were identified as the important parameters. The three-dimensional response surfaces obtained from the second-order models fitted to the experimental data show the combined effect of temperature and heating rate on the response variable, degree of oxidation, from the temperature at which the chemical reaction starts up to the limit temperature used industrially. The operating conditions of the sodium chromate(VI) production process were optimized. The optimal levels of the control factors, applied to the ashes of footwear-industry waste generated in a pilot plant with a fixed-bed incinerator using combined gasification and combustion technology, gave a degree of oxidation above 96% for the ashes collected in the cyclone and of 99.5% for the ashes collected in the gasification reactor. The solid wastes, the ashes and the reaction product were characterized by chemical analyses, X-ray fluorescence, scanning electron microscopy and X-ray diffraction.
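
As a small illustration of the 2^k factorial design mentioned above, the sketch below enumerates the 2^5 = 32 runs of a full factorial design in coded levels (-1/+1) for the five factors investigated (temperature, heating rate, reaction time, air flow rate, dolomite amount); it only generates the design matrix and does not reproduce the study's experimental conditions.

```python
# Sketch: enumerate the 2^5 full factorial design in coded (-1/+1) levels.
from itertools import product

factors = ["temperature", "heating_rate", "reaction_time", "air_flow", "dolomite"]
design = list(product((-1, +1), repeat=len(factors)))   # 32 runs

print(len(design), "runs")
for run in design[:4]:                                   # show the first few runs
    print(dict(zip(factors, run)))
```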

Relevance: 10.00%

Abstract:

This study statistically compares the performance of two immunization strategies for fixed-income portfolios that are periodically rebalanced. The first strategy, duration, accounts for changes in the level of the Brazilian term structure of interest rates, while the alternative approach aims to immunize the portfolio against movements in level, slope and curvature. First, we estimate the yield curve using the polynomial model of Nelson & Siegel (1987) and Diebold & Li (2006). Second, we immunize the fixed-income portfolio using the hedge-construction concept of Litterman & Scheinkman (1991), but assuming that interest rates are not observed. The portfolio immunized by the alternative strategy empirically shows statistically superior performance relative to the duration procedure. We also show that, in the empirical analysis, the optimal rebalancing frequency is monthly.
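
For reference, the Nelson & Siegel (1987) curve in the Diebold & Li (2006) parameterization writes the yield of maturity τ at date t as a function of three factors interpreted as level, slope and curvature:

```latex
\[
y_t(\tau) \;=\; \beta_{1t}
\;+\; \beta_{2t}\,\frac{1 - e^{-\lambda\tau}}{\lambda\tau}
\;+\; \beta_{3t}\!\left(\frac{1 - e^{-\lambda\tau}}{\lambda\tau} - e^{-\lambda\tau}\right).
\]
```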

Relevance: 10.00%

Abstract:

Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group x time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we can model the heteroskedasticity of a linear combination of the errors. We show that this assumption can be satisfied without imposing strong assumptions on the errors in common DID applications. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative inference method that relies on strict stationarity and ergodicity of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference methods to linear factor models when there are few treated groups. We also derive conditions under which a permutation test for the synthetic control estimator proposed by Abadie et al. (2010) is robust to heteroskedasticity and propose a modification on the test statistic that provided a better heteroskedasticity correction in our simulations.
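
The group-size mechanism described above is easy to state: if the group × time aggregate error is the average of the N_jt individual-level errors in that cell, and those errors are i.i.d. with variance σ² (a stylized illustration of the source of heteroskedasticity, not the paper's general assumptions), then

```latex
\[
\operatorname{Var}(\bar{\eta}_{jt})
= \operatorname{Var}\!\Bigl(\tfrac{1}{N_{jt}} \textstyle\sum_{i=1}^{N_{jt}} \eta_{ijt}\Bigr)
= \frac{\sigma^2}{N_{jt}},
\]
```

so cells with fewer observations have noisier aggregate errors, which is exactly the heteroskedasticity that distorts inference when treated and control groups differ in size.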

Relevance: 10.00%

Abstract:

Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group x time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we know how the heteroskedasticity is generated, which is the case when it is generated by variation in the number of observations per group. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative application of our method that relies on assumptions about stationarity and convergence of the moments of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference method to linear factor models when there are few treated groups. We also propose a permutation test for the synthetic control estimator that provided a better heteroskedasticity correction in our simulations than the test suggested by Abadie et al. (2010).
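
A generic permutation (placebo) test of the kind discussed above assigns the treatment, in turn, to each unit, recomputes the test statistic, and locates the actually-treated unit in the resulting placebo distribution. The Python sketch below illustrates that logic with a user-supplied statistic on placeholder data; it is a schematic version, not the modified statistic proposed in the paper.

```python
# Schematic permutation (placebo) test: compute the statistic for every unit as if it
# were the treated one, then rank the actually-treated unit in that distribution.
import numpy as np

def permutation_pvalue(outcomes, treated_index, statistic):
    """outcomes: (n_units, n_periods) array; statistic(series) -> float.
    Returns the proportion of units whose statistic is at least as large
    as the treated unit's (a one-sided permutation p-value)."""
    stats = np.array([statistic(outcomes[i]) for i in range(outcomes.shape[0])])
    return np.mean(stats >= stats[treated_index])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.normal(size=(20, 10))   # placeholder outcomes: 20 units, 10 post-treatment periods
    data[0] += 1.0                     # hypothetical treatment effect on unit 0

    def post_mean(series):             # a simple placeholder test statistic
        return series.mean()

    print(permutation_pvalue(data, treated_index=0, statistic=post_mean))
```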

Relevance: 10.00%

Abstract:

In this thesis we study the invariant rings of the Sylow p-subgroups of the finite classical groups. We have successfully constructed presentations of the invariant rings for the Sylow p-subgroups of the unitary groups GU(3; F_{q^2}) and GU(4; F_{q^2}), the symplectic group Sp(4; F_q) and the orthogonal group O^+(4; F_q) with q odd. In all cases, we obtained a minimal generating set which is also a SAGBI basis. Moreover, we computed the relations among the generators and showed that the invariant rings of these groups are complete intersections. This shows that, even though the invariant rings of the Sylow p-subgroups of the general linear group are polynomial, the same is not true for the Sylow p-subgroups of the general classical groups. We also constructed generators for the invariant fields of the Sylow p-subgroups of GU(n; F_{q^2}), Sp(2n; F_q), O^+(2n; F_q), O^-(2n+2; F_q) and O(2n+1; F_q), for every n and q. This is an important step towards obtaining the generators and relations of the invariant rings of all these groups.
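
For readers outside the area, the object in question is the ring of polynomial functions on the representation V that are fixed by the group action (standard notation, recalled here for convenience):

```latex
\[
k[V]^{G} \;=\; \{\, f \in k[V] \;:\; g \cdot f = f \ \text{ for all } g \in G \,\},
\]
```

and saying this ring is a complete intersection means that the ideal of relations among a minimal generating set is generated by as few elements as possible, namely the number of generators minus the Krull dimension.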

Relevance: 10.00%

Abstract:

Different levels of menthol as an anesthetic for dourado (Salminus brasiliensis) were evaluated in this study. Fish (n=32) with an average weight of 194.13 +/- 9.06 g and a mean total length of 25.30 +/- 0.90 cm were separated into four groups of 8 individuals. Each group was submitted to a different menthol concentration: 60, 90, 120 or 150 mg L(-1). Total induction time was analyzed by polynomial regression and the other parameters by Tukey's test. All fish exposed to the different concentrations of menthol reached the deep anesthesia stage without mortality. A negative linear effect (P<0.05) was observed for total induction time. The longest recovery time (172.60 s) was observed for dourado treated with 120 mg L(-1), which differed (P<0.05) from the 90 mg L(-1) treatment (122.03 s). All levels evaluated in this study were safe and effective. A concentration of 60 mg L(-1) is suggested for dourado based on lower cost and adequate induction and recovery time responses.
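
A hypothetical sketch of the polynomial-regression step described above: regress total induction time on menthol concentration. The four concentrations are those tested in the study, but the induction times below are placeholder values, since the raw data are not reported in the abstract.

```python
# Hypothetical sketch: first-degree polynomial (linear) regression of total induction
# time on menthol concentration. Induction times are placeholder values.
import numpy as np

concentration = np.array([60.0, 90.0, 120.0, 150.0])     # mg/L (as tested in the study)
induction_time = np.array([310.0, 240.0, 200.0, 170.0])  # seconds, placeholder values

slope, intercept = np.polyfit(concentration, induction_time, deg=1)
print(f"induction_time ≈ {intercept:.1f} + ({slope:.3f}) * concentration")
# A negative slope is consistent with the negative linear effect reported in the abstract.
```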