884 results for Macro features
Abstract:
The relationship between the structure and function of biological networks is a fundamental issue in systems biology. In particular, the structure of protein-protein interaction networks is related to important biological functions. In this work, we investigated how the resilience of such networks is determined by their large-scale features. Four species are taken into account, namely the yeast Saccharomyces cerevisiae, the worm Caenorhabditis elegans, the fly Drosophila melanogaster and Homo sapiens. We adopted two entropy-related measurements (degree entropy and dynamic entropy) in order to quantify the overall robustness of these networks. We verified that while the networks exhibit similar structural variations under random node removal, they differ significantly when subjected to intentional attacks (hub removal). In fact, more complex species tended to exhibit more robust networks. More specifically, we quantified how six important measurements of network topology (namely the clustering coefficient, average degree of neighbors, average shortest path length, diameter, assortativity coefficient, and slope of the power-law degree distribution) correlated with the two entropy measurements. Our results revealed that the fraction of hubs and the average neighbor degree contribute significantly to the resilience of networks. In addition, the topological analysis of the removed hubs indicated that the presence of alternative paths between the proteins connected to hubs tends to reinforce resilience. The performed analysis helps to understand how resilience arises in networks and can be applied to the development of protein network models.
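The degree-entropy measurement and the hub-removal attack described above can be sketched in a few lines. The graph below is a hypothetical toy network (one hub plus a ring that provides alternative paths), not one of the four protein interaction networks, and all function names are assumptions:

```python
import math
from collections import Counter

def degree_entropy(degrees):
    """Shannon entropy H = -sum_k p(k) ln p(k) of the empirical degree distribution."""
    n = len(degrees)
    counts = Counter(degrees)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def remove_nodes(adj, victims):
    """Return an adjacency dict with the given nodes (and incident edges) removed."""
    victims = set(victims)
    return {u: vs - victims for u, vs in adj.items() if u not in victims}

def hub_attack(adj, fraction):
    """Intentional attack: delete the top `fraction` highest-degree nodes."""
    k = max(1, int(fraction * len(adj)))
    hubs = sorted(adj, key=lambda u: len(adj[u]), reverse=True)[:k]
    return remove_nodes(adj, hubs)

# toy network: node 0 is a hub connected to everyone, plus a ring 1-2-...-49
adj = {i: set() for i in range(50)}
for i in range(1, 50):
    adj[0].add(i); adj[i].add(0)
for i in range(1, 49):
    adj[i].add(i + 1); adj[i + 1].add(i)

before = degree_entropy([len(v) for v in adj.values()])
after = degree_entropy([len(v) for v in hub_attack(adj, 0.02).values()])
```

Random node removal would be implemented the same way, with the victims drawn uniformly instead of by degree.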
Abstract:
For the last two decades, researchers have been working on developing systems that can assist drivers in the best way possible and make driving safe. Computer vision has played a crucial part in the design of these systems. With the introduction of vision techniques, various autonomous and robust real-time traffic automation systems have been designed, such as traffic monitoring, traffic-related parameter estimation and intelligent vehicles. Among these, automatic detection and recognition of road signs has become an interesting research topic. Such a system can inform drivers about signs they do not recognize before passing them. The aim of this research project is to present an intelligent road sign recognition system based on a state-of-the-art technique, the Support Vector Machine. The project is an extension of the work done at the ITS research platform at Dalarna University [25]. The focus of this research work is on the recognition of the road signs under analysis. When classifying an image, its location, size and orientation in the image plane are irrelevant features, and one way to remove this ambiguity is to extract features that are invariant under the above-mentioned transformations. These invariant features are then used in a Support Vector Machine for classification. The Support Vector Machine is a supervised learning machine that solves problems in higher dimensions with the help of kernel functions and is best known for classification problems.
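The abstract does not specify which invariant features were used; normalized central moments, the basis of Hu's moment invariants, are one classic choice of features invariant to location, size and in-plane orientation. A minimal pure-Python sketch on a hypothetical toy pixel region (the names and the toy data are illustrative, not from the thesis):

```python
def central_moment(points, p, q):
    """mu_pq about the centroid: translation-invariant shape statistic."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    return sum((x - cx) ** p * (y - cy) ** q for x, y in points)

def normalized_moment(points, p, q):
    """eta_pq = mu_pq / mu_00^(1 + (p+q)/2): additionally scale-invariant."""
    mu00 = central_moment(points, 0, 0)  # equals the foreground pixel count
    return central_moment(points, p, q) / mu00 ** (1 + (p + q) / 2)

def hu1(points):
    """First Hu invariant eta_20 + eta_02: also invariant to rotation."""
    return normalized_moment(points, 2, 0) + normalized_moment(points, 0, 2)

# hypothetical toy "sign" region: a 4x6 pixel blob, a shifted copy,
# and the same region rescaled by a factor of 2 (8x12 pixels)
blob = [(x, y) for x in range(4) for y in range(6)]
shifted = [(x + 10, y + 3) for x, y in blob]
rescaled = [(x, y) for x in range(8) for y in range(12)]
```

Shifting the region leaves hu1 exactly unchanged; rescaling changes it only by a small discretization error. Such invariants would then feed the SVM in place of raw pixels.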
Abstract:
This paper studies a special class of vector smooth-transition autoregressive (VSTAR) models that contain common nonlinear features (CNFs), for which we propose a triangular representation and develop a procedure for testing for CNFs in a VSTAR model. We first test a unit root against a stable STAR process for each individual time series and then, if the unit root is rejected in the first step, examine whether CNFs exist in the system by a Lagrange multiplier (LM) test. The LM test has a standard Chi-squared asymptotic distribution. The critical values of our unit root tests and the small-sample properties of the F form of our LM test are studied by Monte Carlo simulations. We illustrate how to test and model CNFs using the monthly growth of consumption and income data of the United States (1985:1 to 2011:11).
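A common concrete form of such a model, shown here only as an illustration (the notation is assumed, not taken from the paper), is the logistic VSTAR:

```latex
y_t = \Phi_1 x_t + G(s_t;\gamma,c)\,\Phi_2 x_t + \varepsilon_t,
\qquad
G(s_t;\gamma,c) = \bigl(1 + e^{-\gamma(s_t - c)}\bigr)^{-1},
```

where $x_t$ collects lags of $y_t$ and $G$ is the smooth transition function. A common nonlinear feature exists when some vector $\alpha \neq 0$ satisfies $\alpha'\Phi_2 = 0$, so that the linear combination $\alpha' y_t$ contains no nonlinear component.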
Abstract:
This paper investigates common nonlinear features in multivariate nonlinear autoregressive models via testing the estimated residuals. A Wald-type test is proposed and it is asymptotically Chi-squared distributed. Simulation studies are given to examine the finite-sample properties of the proposed test.
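The generic form of such a Wald statistic, under standard regularity conditions (the notation is assumed here, not taken from the paper), is:

```latex
W = (R\hat{\theta} - r)' \bigl[ R \widehat{V} R' \bigr]^{-1} (R\hat{\theta} - r)
\;\xrightarrow{d}\; \chi^2_q ,
```

where $R\theta = r$ encodes the $q$ restrictions being tested (here, the restrictions implied by a common nonlinear feature in the residuals) and $\widehat{V}$ estimates the asymptotic covariance of $\hat{\theta}$.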
Abstract:
This thesis consists of four manuscripts in the area of nonlinear time series econometrics, on topics of testing, modeling and forecasting nonlinear common features. The aim of this thesis is to develop new econometric contributions for hypothesis testing and forecasting in these areas. Both stationary and nonstationary time series are considered. A definition of common features is proposed in a way appropriate to each class. Based on this definition, a vector nonlinear time series model with common features is set up for testing for common features. Once well specified, the proposed models can also be used for forecasting. The first paper addresses a testing procedure for nonstationary time series. A class of nonlinear cointegration, smooth-transition (ST) cointegration, is examined. ST cointegration nests the previously developed linear and threshold cointegration. An F-type test for examining ST cointegration is derived when stationary transition variables are imposed rather than nonstationary ones. Stationary transition variables keep the test standard, while nonstationary ones make it nonstandard. This has important implications for empirical work: it is crucial to distinguish between the cases with stationary and nonstationary transition variables so that the correct test can be used. The second and fourth papers develop testing approaches for stationary time series. In particular, the vector ST autoregressive (VSTAR) model is extended to allow for common nonlinear features (CNFs). These two papers propose a modeling procedure and derive tests for the presence of CNFs. Drawing on the testing contributions above for model specification, the third paper considers forecasting with vector nonlinear time series models and extends the procedures available for univariate nonlinear models. The VSTAR model with CNFs and the ST cointegration model in the previous papers are exemplified in detail, and thereafter illustrated with two corresponding macroeconomic data sets.
Abstract:
Background: Genetic variation for environmental sensitivity indicates that animals are genetically different in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature) and called macro-environmental or unknown and called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate bias and precision of resulting estimates of genetic parameters and to develop and evaluate use of Akaike’s information criterion using h-likelihood to select the best fitting model. Methods: We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for residual variance to estimate genetic variance for micro-environmental sensitivity using a double hierarchical generalized linear model in ASReml. Akaike’s information criterion was constructed as model selection criterion using approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate bias and precision of estimated genetic parameters. Results: Designs with 100 sires, each with at least 100 offspring, are required to have standard deviations of estimated variances lower than 50% of the true value. When the number of offspring increased, standard deviations of estimates across replicates decreased substantially, especially for genetic variances of macro- and micro-environmental sensitivities. Standard deviations of estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically, no bias was observed for estimates of any of the parameters. 
Using Akaike's information criterion, the true genetic model was selected as the best statistical model in at least 90% of 100 replicates when the number of offspring per sire was 100. Application of the model to lactation milk yield in dairy cattle showed that genetic variance exists for both micro- and macro-environmental sensitivities. Conclusion: The algorithm and model selection criterion presented here can contribute to a better understanding of the genetic control of macro- and micro-environmental sensitivities. Designs or datasets should have at least 100 sires, each with 100 offspring.
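In reaction-norm notation, the model described above can be written as follows (the symbols are assumed for illustration, not taken from the paper):

```latex
y_{ij} = \mu + a_i + b_i x_j + e_{ij},
\qquad
\operatorname{Var}(e_{ij}) = \exp(\delta + v_i),
```

where $a_i$ is the additive genetic intercept of sire $i$, $b_i$ is the genetic slope on the macro-environmental covariate $x_j$ (macro-environmental sensitivity), and $v_i$ is a genetic effect on the log residual variance (micro-environmental sensitivity).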
Abstract:
The purpose of this paper is to analyze the performance of Histograms of Oriented Gradients (HOG) as descriptors for traffic sign recognition. The test dataset consists of speed-limit traffic signs because of their high inter-class similarities. HOG features of speed-limit signs, which were extracted from different traffic scenes, were computed, and a Gentle AdaBoost classifier was invoked to evaluate the different features. The performance of HOG was tested with a dataset consisting of 1727 Swedish speed-sign images. Different numbers of HOG features per descriptor, ranging from 36 up to 396 features, were computed for each traffic sign in the benchmark testing. The results show that HOG features achieve a high classification rate, as the Gentle AdaBoost classification rate was 99.42%, and that they are suitable for real-time traffic sign recognition. However, changing the number of orientation bins was found to have an insignificant effect on the classification rate. In addition, HOG descriptors are not robust with respect to sign orientation.
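The core of a HOG descriptor is an orientation histogram per cell followed by normalisation. A minimal pure-Python sketch (the 16x16 ramp image is a made-up example, and the cell/bin parameters are the common defaults, not necessarily the paper's; with 8x8 cells and 9 unsigned bins it reproduces the 36-feature descriptor size mentioned above):

```python
import math

def cell_histogram(cell, bins=9):
    """Orientation histogram of one cell: unsigned gradient angles in [0, 180)."""
    hist = [0.0] * bins
    h, w = len(cell), len(cell[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]   # central differences
            gy = cell[y + 1][x] - cell[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(ang // (180.0 / bins)) % bins] += mag
    return hist

def hog_descriptor(image, cell=8, bins=9):
    """Concatenate L2-normalised cell histograms into one descriptor vector."""
    desc = []
    for cy in range(0, len(image) - cell + 1, cell):
        for cx in range(0, len(image[0]) - cell + 1, cell):
            block = [row[cx:cx + cell] for row in image[cy:cy + cell]]
            h = cell_histogram(block, bins)
            norm = math.sqrt(sum(v * v for v in h)) or 1.0
            desc.extend(v / norm for v in h)
    return desc

# hypothetical 16x16 grayscale ramp: gradient points along x everywhere
img = [[x for x in range(16)] for _ in range(16)]
d = hog_descriptor(img)   # 2x2 cells -> 4 cells x 9 bins = 36 features
```

Production implementations add block-level normalisation across overlapping cells, but the cell histogram above is the part that the bin count in the abstract varies.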
Abstract:
This study aims to investigate possible distinctions between professionally and non-professionally written travel texts that all treat the same destination: the Norwegian ski resort Trysil. The study investigates to what extent the different texts correspond to the genre of travel texts, as the travel texts are treated as personal narratives, and how they conform to a given structure for narratives and to guidelines for professional writers. Furthermore, the investigation explores to what extent there are similarities and differences between the texts with regard to the given structure. The texts are first analysed and organized separately by macrorules and a news schema constructed specifically for these sorts of texts, in order to reveal their discourse structure, and then compared to each other. Once the discourse structure of the different texts is revealed, certain differences between the two text types emerge. Finally, given that the text types differ in their structure, this study shows that although journalists write stories, and non-professionally written stories are narratives, the two do not share the same structure and are constructed in different ways.
Abstract:
This work presents the research and development of WTROPIC, a tool for automatic layout generation. WTROPIC is a tool for the remote generation, accessible via the WWW, of layouts for CMOS circuits, suited to the FUCAS project and the CAVE environment. WTROPIC was conceived from optimizations made to version 3 of the TROPIC tool. It is also shown how the layout optimizations of TROPIC were implemented and how these optimizations allow WTROPIC to reduce the width of the generated circuits by about 10% compared to TROPIC. Like TROPIC, WTROPIC is a library-independent CMOS macro-cell generator. The work also presents how the WTROPIC tool was integrated into the CAVE circuit design environment, the changes proposed to the CAVE tool-integration methodology that lead to an improvement in integration quality and to the standardization of user interfaces, and how the physical synthesis of a layout can then be performed remotely. In this way, a layout design tool was obtained that is available to any user with Internet access, even if that user does not have a machine with the high processing capacity normally required by CAD tools.
Abstract:
This work analyses some macroeconomic and microeconomic aspects of the post-Real Brazilian economy. From a macroeconomic point of view, it addresses the replacement of the inflation tax by foreign savings in financing the public sector's operational deficit, the evolution of the denomination of the federal public debt, and the suitability of different exchange-rate regimes in an environment of unstable macroeconomic relationships. This section also includes a brief, simplified simulation of the evolution of per capita income if the level of investment were raised from 20% to 25% of GDP. From a microeconomic point of view, some considerations are presented concerning privatization, regulation and competition policy.
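The abstract does not describe the simulation's model. As a rough illustration only, the textbook Solow steady-state comparative static (with a capital share assumed at 1/3, a value not taken from the paper) gives the order of magnitude of the long-run per capita income gain from such an increase in the investment rate:

```python
# Textbook Solow model: steady-state per capita income y* is proportional
# to s**(alpha / (1 - alpha)), where s is the investment (saving) rate
# and alpha is the capital share of income.
alpha = 1 / 3                 # assumed capital share, not from the paper
s_old, s_new = 0.20, 0.25     # investment rising from 20% to 25% of GDP
gain = (s_new / s_old) ** (alpha / (1 - alpha))
# under these assumptions, steady-state per capita income rises by about 12%
```

The paper's own simulation may of course use a different framework; this is only the standard back-of-the-envelope benchmark.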
Abstract:
This paper investigates the degree of short-run and long-run co-movement in U.S. sectoral output data by estimating sectoral trends and cycles. A theoretical model based on Long and Plosser (1983) is used to derive a reduced form for sectoral output from first principles. Cointegration and common features (cycles) tests are performed; sectoral output data seem to share a relatively high number of common trends and a relatively low number of common cycles. A special trend-cycle decomposition of the data set is performed and the results indicate very similar cyclical behavior across sectors and very different behavior for trends. Indeed, the sectors' cyclical components appear as one. In a variance decomposition analysis, prominent sectors such as Manufacturing and Wholesale/Retail Trade exhibit relatively important transitory shocks.
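In the standard common-trend/common-cycle notation (assumed here for illustration, not taken from the paper), the decomposition and the two kinds of tests take the form:

```latex
y_t = T_t + C_t,
\qquad
\alpha' y_t \sim I(0),
\qquad
E\bigl[\tilde{\beta}' \Delta y_t \mid \mathcal{F}_{t-1}\bigr] = 0,
```

where the cointegrating vectors $\alpha$ annihilate the common trends $T_t$, and the cofeature vectors $\tilde{\beta}$ annihilate the common cycles $C_t$, so that $\tilde{\beta}' \Delta y_t$ is unpredictable given past information.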
Abstract:
This article evaluates the impact on the Brazilian economy of a tax reform that reduces distortions and cascading taxation, using the current proposal of the Ministério da Fazenda for this experiment. A standard dynamic recursive model is used, calibrated to approximate the Brazilian economy today as closely as possible. The simulations are implemented by introducing parameters corresponding to the tax reform: payroll tax relief, reduction of cascading taxation with the introduction of the IVA-F, and tax relief on investment through the shortening of the period for refunding ICMS credits. It is estimated that the proposed tax reform would raise the output growth rate by 1.5 percentage points over the eight years following its implementation, with a long-run gain of 16%. The impact on the level of investment would be very significant, 40% over the same period, so that the investment rate would jump from the current 20% to almost 24%. The gains in consumption and welfare were also estimated to be quite significant.
Abstract:
The aim of this paper is to relate the macroeconomic and microeconomic aspects of stabilization policies, with special regard to the situation in Latin America, mainly Brazil and Argentina. In order to carry out this analysis, it was first necessary to critically rethink the failures of the microfoundations of macroeconomics, and from this point on to assess the main problems of stabilization policy formulation when these critical considerations are not taken into account.
Abstract:
This work proposes alternative ways to consistently estimate an abstract quantity that is crucial to the study of intertemporal decisions, which in turn is central to much of the research in macroeconomics and finance: the stochastic discount factor (SDF). Using the pricing equation, a novel consistent estimator of the SDF is constructed, which relies on the fact that its logarithm is common to all assets in an economy. The resulting estimator is very simple to compute, does not depend on strong economic assumptions, is suitable for testing several preference specifications and for investigating intertemporal substitution puzzles, and can be used as the basis for constructing an estimator of the risk-free rate. Alternative identification strategies are applied, and a parallel is drawn between them and strategies from other methodologies. By adding structure to the initial environment, two situations are presented in which the asymptotic distribution can be derived. Finally, the proposed methodologies are applied to U.S. and Brazilian data sets. Preference specifications commonly used in the literature, as well as a class of state-dependent preferences, are tested. The results are particularly interesting for the American economy. Formal tests do not reject preference specifications common in the literature, and estimates of the relative risk aversion coefficient lie between 1 and 2 and are statistically indistinguishable from 1. In addition, for the class of state-dependent preferences, highly dynamic trajectories are estimated for this coefficient; the trajectories are confined to the interval [1.15, 2.05], and the hypothesis of a constant trajectory is rejected.
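The pricing (Euler) equation underlying the estimator, in standard asset pricing notation (assumed here), prices every gross return $R_{i,t+1}$ with a single stochastic discount factor $M_{t+1}$:

```latex
E_t\!\left[ M_{t+1}\, R_{i,t+1} \right] = 1, \qquad i = 1, \dots, N,
```

so that $m_{t+1} = \ln M_{t+1}$ enters every asset's pricing condition identically. It is this fact, that the log SDF is common across all $N$ assets, that the proposed estimator exploits for identification.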