907 results for Mixed model under selection
Abstract:
The fungal family Clavicipitaceae includes plant symbionts and parasites that produce several psychoactive and bioprotective alkaloids. The family includes grass symbionts in the epichloae clade (Epichloë and Neotyphodium species), which are extraordinarily diverse both in their host interactions and in their alkaloid profiles. Epichloae produce alkaloids of four distinct classes, all of which deter insects, and some—including the infamous ergot alkaloids—have potent effects on mammals. The exceptional chemotypic diversity of the epichloae may relate to their broad range of host interactions, whereby some are pathogenic and contagious, others are mutualistic and vertically transmitted (seed-borne), and still others vary in pathogenic or mutualistic behavior. We profiled the alkaloids and sequenced the genomes of 10 epichloae, three ergot fungi (Claviceps species), a morning-glory symbiont (Periglandula ipomoeae), and a bamboo pathogen (Aciculosporium take), and compared the gene clusters for four classes of alkaloids. Results indicated a strong tendency for alkaloid loci to have conserved cores that specify the skeleton structures and peripheral genes that determine chemical variations that are known to affect their pharmacological specificities. Generally, gene locations in cluster peripheries positioned them near to transposon-derived, AT-rich repeat blocks, which were probably involved in gene losses, duplications, and neofunctionalizations. The alkaloid loci in the epichloae had unusual structures riddled with large, complex, and dynamic repeat blocks. This feature was not reflective of overall differences in repeat contents in the genomes, nor was it characteristic of most other specialized metabolism loci. The organization and dynamics of alkaloid loci and abundant repeat blocks in the epichloae suggested that these fungi are under selection for alkaloid diversification. We suggest that such selection is related to the variable life histories of the epichloae, their protective roles as symbionts, and their associations with the highly speciose and ecologically diverse cool-season grasses.
Abstract:
The worldwide spread of barley cultivation required adaptation to agricultural environments far distant from those found in its centre of domestication. An important component of this adaptation is the timing of flowering, achieved predominantly in response to day length and temperature. Here, we use a collection of cultivars, landraces and wild barley accessions to investigate the origins and distribution of allelic diversity at four major flowering time loci, mutations at which have been under selection during the spread of barley cultivation into Europe. Our findings suggest that while mutant alleles at the PPD-H1 and PPD-H2 photoperiod loci occurred pre-domestication, the mutant vernalization non-responsive alleles utilized in landraces and cultivars at the VRN-H1 and VRN-H2 loci occurred post-domestication. The transition from wild to cultivated barley is associated with a doubling in the number of observed multi-locus flowering-time haplotypes, suggesting that the resulting phenotypic variation has aided adaptation to cultivation in the diverse ecogeographic locations encountered. Despite the importance of early-flowering alleles during the domestication of barley in Europe, we show that novel VRN alleles associated with early flowering in wild barley have been lost in domesticates, highlighting the potential of wild germplasm as a source of novel allelic variation for agronomic traits.
Abstract:
Winter storms of the midlatitudes are an important factor in property losses caused by natural hazards over Europe. The storm series of early 1990 and late 1999 led to enormous economic damage and insured losses. Although significant trends in North Atlantic/European storm activity have not been identified over the last few decades, recent studies provide evidence that under anthropogenic climate change the number of extreme storms could increase, whereas the total number of cyclones may decrease slightly. In this study, loss potentials under climate change conditions are presented, derived from an ensemble of climate model simulations with a simple storm damage model. For the United Kingdom and Germany, ensemble-mean storm-related losses are found to increase by up to 37%. Furthermore, the interannual variability of extreme events is projected to increase, leading to a higher risk of extreme storm activity and related losses.
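The abstract does not spell out the damage model; one widely used simple formulation (of the Klawa-and-Ulbrich type) relates losses to the cubed exceedance of the local 98th-percentile wind speed, optionally weighted by population. A minimal sketch, assuming that formulation and hypothetical inputs:

```python
import numpy as np

def storm_loss_index(wind_speed, v98, population=1.0):
    """Cubed exceedance of the local 98th-percentile wind speed,
    optionally weighted by population (Klawa-and-Ulbrich-type index)."""
    exceedance = np.maximum(wind_speed / v98 - 1.0, 0.0)
    return np.sum(population * exceedance ** 3)

# Hypothetical daily maximum gusts (m/s) at a few grid points and their
# local climatological 98th percentiles.
gusts = np.array([28.0, 35.0, 22.0, 40.0])
v98   = np.array([25.0, 30.0, 24.0, 32.0])
print(storm_loss_index(gusts, v98))
```

Normalising by the local 98th percentile makes the index relative to local climatology, so only locally unusual winds contribute to the loss estimate.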
Abstract:
We use a state-of-the-art ocean general circulation and biogeochemistry model to examine the impact of changes in ocean circulation and biogeochemistry in governing the change in ocean carbon-13 and atmospheric CO2 at the last glacial maximum (LGM). We examine 5 different realisations of the ocean's overturning circulation produced by a fully coupled atmosphere-ocean model under LGM forcing and suggested changes in the atmospheric deposition of iron and phytoplankton physiology at the LGM. Measured changes in carbon-13 and carbon-14, as well as a qualitative reconstruction of the change in ocean carbon export are used to evaluate the results. Overall, we find that while a reduction in ocean ventilation at the LGM is necessary to reproduce carbon-13 and carbon-14 observations, this circulation results in a low net sink for atmospheric CO2. In contrast, while biogeochemical processes contribute little to carbon isotopes, we propose that most of the change in atmospheric CO2 was due to such factors. However, the lesser role for circulation means that when all plausible factors are accounted for, most of the necessary CO2 change remains to be explained. This presents a serious challenge to our understanding of the mechanisms behind changes in the global carbon cycle during the geologic past.
Abstract:
A set of high-resolution radar observations of convective storms has been collected to evaluate such storms in the UK Met Office Unified Model during the DYMECS project (Dynamical and Microphysical Evolution of Convective Storms). The 3-GHz Chilbolton Advanced Meteorological Radar was set up with a scan-scheduling algorithm to automatically track convective storms identified in real time from the operational rainfall radar network. More than 1,000 storm observations gathered over fifteen days in 2011 and 2012 are used to evaluate the model under various synoptic conditions supporting convection. In terms of the detailed three-dimensional morphology, storms in the 1500-m grid-length simulations are shown to produce horizontal structures a factor of 1.5–2 wider than those observed by radar. A set of nested model runs at grid lengths down to 100 m shows that the simulations converge in terms of storm width, but the storm structures at the smallest grid lengths are too narrow and too intense compared with the radar observations. The modelled storms were surrounded by a region of drizzle without ice reflectivities above 0 dBZ aloft, which was related to the dominance of ice crystals and was improved by allowing only aggregates as an ice-particle habit. Simulations with graupel outperformed the standard configuration for heavy-rain profiles, but the storm structures were a factor of 2 too wide and the convective cores 2 km too deep.
Abstract:
Toxic or allelopathic compounds liberated by toxin-producing phytoplankton (TPP) act as a strong mediator in plankton dynamics. Based on an analysis of a set of phytoplankton biomass data collected by our group in the northwestern part of the Bay of Bengal, and on the analysis of a three-component mathematical model under both constant and stochastic environments, we explore the role of toxin-allelopathy in determining the dynamic behaviour of competing phytoplankton species. The overall results, based on analytical and numerical findings, demonstrate that toxin-allelopathy due to the TPP promotes a stable coexistence of competing phytoplankton that would otherwise exhibit competitive exclusion of the weaker species. Our study suggests that TPP might be a potential candidate for maintaining the coexistence and diversity of competing phytoplankton species.
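The exact three-component formulation is not reproduced in the abstract; a minimal sketch of the modelling idea, assuming Lotka-Volterra competition with an additional allelopathic inhibition term acting on the non-toxic species (all parameter values hypothetical):

```python
import numpy as np
from scipy.integrate import solve_ivp

def competition_with_allelopathy(t, y, r1, r2, K1, K2, a12, a21, gamma):
    """Lotka-Volterra competition between two phytoplankton species; species 1
    (the TPP) additionally inhibits species 2 through an allelopathy term
    -gamma*P1*P2**2 (a hypothetical, commonly used functional form)."""
    P1, P2 = y
    dP1 = r1 * P1 * (1 - (P1 + a12 * P2) / K1)
    dP2 = r2 * P2 * (1 - (P2 + a21 * P1) / K2) - gamma * P1 * P2 ** 2
    return [dP1, dP2]

sol = solve_ivp(competition_with_allelopathy, (0, 200), [0.5, 0.5],
                args=(1.0, 1.2, 1.0, 1.0, 0.9, 1.1, 0.3), dense_output=True)
print(sol.y[:, -1])  # long-run biomasses of the two competitors
```

Varying gamma in such a sketch shows the qualitative effect the abstract describes: with no allelopathy one competitor excludes the other, while a sufficiently strong toxin term can stabilise coexistence.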
Abstract:
An efficient two-level model identification method, aimed at maximising a model's generalisation capability, is proposed for a large class of linear-in-the-parameters models built from observational data. A new elastic net orthogonal forward regression (ENOFR) algorithm is employed at the lower level to carry out simultaneous model selection and elastic net parameter estimation. The two regularisation parameters of the elastic net are optimised at the upper level using a particle swarm optimisation (PSO) algorithm that minimises the leave-one-out (LOO) mean square error (LOOMSE). There are two main original contributions. Firstly, an elastic net cost function is defined and applied based on orthogonal decomposition, which facilitates automatic model structure selection without the need for a predetermined error tolerance to terminate the forward selection process. Secondly, it is shown that the LOOMSE of the resulting ENOFR models can be computed analytically without actually splitting the data set, and the associated computational cost is small thanks to the ENOFR procedure. Consequently, a fully automated procedure is achieved without resorting to a separate validation data set for iterative model evaluation. Illustrative examples are included to demonstrate the effectiveness of the new approach.
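The two-level structure described above, an inner elastic-net fit wrapped in an outer search over the two regularisation parameters driven by the leave-one-out error, can be illustrated with off-the-shelf tools. The sketch below uses scikit-learn's ElasticNet and a plain random search as a stand-in for the ENOFR/PSO machinery; it brute-forces the LOO error rather than using the analytic LOOMSE computation the abstract refers to.

```python
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 10))                      # synthetic observational data
y = X[:, 0] - 2 * X[:, 3] + 0.1 * rng.normal(size=60)

def loo_mse(alpha, l1_ratio):
    """Leave-one-out mean square error for one (alpha, l1_ratio) pair."""
    model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, max_iter=10_000)
    scores = cross_val_score(model, X, y, cv=LeaveOneOut(),
                             scoring="neg_mean_squared_error")
    return -scores.mean()

# Upper level: random search over the two regularisation parameters
# (a simple stand-in for the PSO described in the abstract).
candidates = [(10 ** rng.uniform(-4, 0), rng.uniform(0.05, 1.0)) for _ in range(30)]
best = min(candidates, key=lambda p: loo_mse(*p))
print("best (alpha, l1_ratio):", best, "LOOMSE:", loo_mse(*best))
```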
Abstract:
An efficient data-based modelling algorithm for nonlinear system identification is introduced for radial basis function (RBF) neural networks, with the aim of maximising generalisation capability based on the concept of leave-one-out (LOO) cross-validation. Each RBF kernel has its own kernel width parameter, and the basic idea is to optimise the multiple pairs of regularisation parameters and kernel widths, each associated with one kernel, one at a time within the orthogonal forward regression (OFR) procedure. Thus, each OFR step consists of one model term selection based on the LOO mean square error (LOOMSE), followed by optimisation of the associated kernel width and regularisation parameter, also based on the LOOMSE. Since the same LOOMSE is adopted for model selection as in our previous state-of-the-art local regularisation assisted orthogonal least squares (LROLS) algorithm, the proposed new OFR algorithm is likewise capable of producing a very sparse RBF model with excellent generalisation performance. Unlike the previous LROLS algorithm, which requires an additional iterative loop to optimise the regularisation parameters as well as an additional procedure to optimise the kernel width, the proposed OFR algorithm optimises both the kernel widths and the regularisation parameters within a single OFR procedure, and consequently the required computational complexity is dramatically reduced. Nonlinear system identification examples are included to demonstrate the effectiveness of this new approach in comparison with the well-known support vector machine and least absolute shrinkage and selection operator approaches, as well as the LROLS algorithm.
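A minimal sketch of the greedy flavour of such a procedure: at each step the candidate RBF centre, kernel width, and regularisation parameter that most reduce the leave-one-out MSE of a ridge fit are added. It uses the exact LOO identity for a fixed linear smoother and plain re-fitting rather than the efficient orthogonal-decomposition updates the abstract describes; data and parameter grids are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=80)

def rbf(X, centre, width):
    return np.exp(-np.sum((X - centre) ** 2, axis=1) / (2 * width ** 2))

def loo_mse(Phi, y, lam):
    """LOO MSE of a ridge fit via the standard hat-matrix (PRESS) identity."""
    H = Phi @ np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T)
    residuals = (y - H @ y) / (1 - np.diag(H))
    return np.mean(residuals ** 2)

widths, lams = [0.3, 0.6, 1.2], [1e-4, 1e-2]
selected, best_err = [], np.inf
Phi = np.ones((len(y), 1))                      # start with a bias column
for _ in range(10):                             # add up to ten kernels
    best = None
    for i in range(len(X)):                     # candidate centres = data points
        for w in widths:
            for lam in lams:
                trial = np.column_stack([Phi, rbf(X, X[i], w)])
                err = loo_mse(trial, y, lam)
                if err < best_err:
                    best_err, best = err, (i, w, lam, trial)
    if best is None:                            # no candidate improves LOOMSE: stop
        break
    selected.append(best[:3])
    Phi = best[3]
print("kernels kept:", selected, "final LOOMSE:", round(best_err, 4))
```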
Abstract:
In this work I analyze the model proposed by Goldfajn (2000) to study the choice of the denomination of the public debt. The main purpose of the analysis is to point out possible reasons why new empirical evidence provided by Bevilaqua, Garcia and Nechio (2004), regarding a more recent time period, finds lower empirical support for the model. I also provide a measure of the overestimation of the welfare gains of hedging the debt arising from the simplified time frame of the model. Assuming a time-preference parameter of 0.9, for instance, welfare gains associated with a hedge to the debt that reduces by half a once-and-for-all 20%-of-GDP shock to government spending run around 1.43% of GDP under the no-tax-smoothing structure of the model. Under a Ramsey allocation, though, welfare gains amount to just around 0.05% of GDP.
Abstract:
This study sought to describe and analyse the context in which the concession process for the mass transit systems of the Rio de Janeiro Metropolitan Region took place, promoted by the State Privatization Program (Programa Estadual de Desestatização, PED) during the 1995–1998 state administration, and to assess its implications for the model of organization and management of regional public transport then in force. The study emphasized three aspects of this process: the characterization of the scenario prior to the proposed change, a substantive analysis of the policy embodied in the concession program, and an assessment of the new scenario created as a consequence of the program. The methodology was based on a literature review, extensive documentary analysis, observation of events, and unstructured interviews with managers and technical staff involved in the process. The results highlighted the limitations of the analysis and planning models traditionally adopted for formulating sectoral policies, the precariousness of the regional passenger transport systems, and the situation of the metro, rail, and ferry systems, forming an environment favourable to proposals for their transfer to private management. They also showed that the initiative was influenced by the context of the state reform projects sponsored by the World Bank (IBRD), developing without significant references within the sector's technical community and producing an institutional setting that was fragile in the face of the task of managing the resulting contracts. Although based on strategies for resuming investment consistent with the guidelines of the Mass Transit Plan (Plano de Transporte de Massa, PTM) drawn up in 1994, the program is still too incipient for significant trends in the performance of the concessioned systems to be identified. Its effects on the dismantling of the model of public management of metropolitan transport under the State's responsibility are, however, evident.
Abstract:
The objective of this dissertation is to analyse monetary policy rules in models in which agents form their expectations rationally (forward-looking models), in the context of the inflation-targeting regime. The optimal precommitment and discretionary solutions are derived and applied to a macroeconomic model of the Brazilian economy, and the results are also compared with those obtained under the Taylor rule. The behaviour of the model under the different rules is analysed by constructing the trade-off frontier between the variances of the output gap and of inflation, and by examining the dynamic responses to shocks. The discussion of the model's dynamics is extended to the case in which the persistence of the shocks is varied.
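For reference, the Taylor rule against which the commitment and discretionary solutions are compared is usually written, in its original (Taylor, 1993) calibration, as

\[
i_t = r^{*} + \pi_t + 0.5\,(\pi_t - \pi^{*}) + 0.5\,x_t ,
\]

where \(i_t\) is the nominal interest rate, \(r^{*}\) the equilibrium real rate, \(\pi_t\) inflation, \(\pi^{*}\) the inflation target, and \(x_t\) the output gap; the coefficients used in the dissertation's exercises may differ.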
Abstract:
In this work I analyze the model proposed by Goldfajn (2000) to study the choice of the denomination of the public debt. Some potential shortcomings of the model in explaining the data are discussed. Measures of the overestimation of the welfare gains of reducing distortions from taxation, under the model's simplified time frame, are also provided. Assuming a time-preference parameter of 0.9, for instance, welfare gains associated with a hedge to the debt that reduces by half a once-and-for-all 20%-of-GDP shock to government spending run around 1.43% of GDP under the no-tax-smoothing structure of the model. Under a Ramsey allocation, though, welfare gains amount to just around 0.05% of GDP.
Abstract:
Based on three versions of a small macroeconomic model for Brazil, this paper presents empirical evidence on the effects of parameter uncertainty on monetary policy rules and on the robustness of optimal and simple rules over different model specifications. By comparing the optimal policy rule under parameter uncertainty with the rule calculated under purely additive uncertainty, we find that parameter uncertainty should make policymakers react less aggressively to the economy's state variables, as suggested by Brainard's "conservatism principle", although this effect seems to be relatively small. We then informally investigate each rule's robustness by analyzing the performance of policy rules derived from each model under each of the alternative models. We find that optimal rules derived from each model perform very poorly under alternative models, whereas a simple Taylor rule is relatively robust. We also find that even within a specific model, the Taylor rule may perform better than the optimal rule under particularly unfavorable realizations from the policymaker's loss distribution function.
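Brainard's conservatism principle invoked above can be illustrated with the textbook one-instrument case: if the policymaker minimises the expected squared deviation of a target \(y = (k+\varepsilon)u + x\), where \(u\) is the instrument, \(k\) the mean policy multiplier, and \(\varepsilon\) a zero-mean multiplicative shock with variance \(\sigma^{2}\), then

\[
\mathbb{E}\big[((k+\varepsilon)u + x)^{2}\big] = (ku + x)^{2} + \sigma^{2}u^{2}
\quad\Rightarrow\quad
u^{*} = -\frac{kx}{k^{2} + \sigma^{2}},
\]

which is attenuated relative to the certainty-equivalent response \(u = -x/k\), and increasingly so as \(\sigma^{2}\) grows.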
Abstract:
The discussions in which proposals for university reform in Brazil have been developed include, among other things, the conception of the university known as the "New University", whose structure originates from the European project of higher-education reform and unification (the Bologna process). At its core, the Bologna process imposed a series of transformations, among them the promotion of mobility as a stimulus to inter-institutional cooperation, so as to enable better and broader qualification of students. Nevertheless, this is precisely one of the main points handled poorly by the Brazilian institutions that adopted this model of higher education. One example is the Bachelor's programme in Science and Technology (BC&T) at the Federal University of Rio Grande do Norte (UFRN), where there are problems of an internal nature, represented by the difficulty of reusing completed courses, as well as of an external nature, in cases of inter-institutional transfers. Because of this, and given that this is a typical problem involving multiple criteria, the aim of this study is to propose a multicriteria model for the inter-cycle selection process of the BC&T at UFRN that addresses the issue of mobility. The study was exploratory in nature and took the form of a case study, using bibliographic and documentary research and semi-structured interviews as data-collection tools. To build the model, the five phases most commonly used in operational-research problem modelling were applied to a sample of 91 BC&T students. The result is a model that addresses both the internal and the external mobility of the programme and that, moreover, proved more robust and fairer than the current BC&T model and than the one used in other UFRN programmes, taking into account the results expected by the decision makers.
Abstract:
The "toe-to-heel air injection" (THAI™) method is an enhanced oil recovery process that integrates in-situ combustion with advances in horizontal-well drilling. The method uses horizontal wells as oil producers while keeping vertical wells for air injection. The process has not yet been applied in Brazil, which makes it necessary to evaluate such new technologies under local conditions. This study therefore carried out a parametric study of the in-situ combustion process with oil production through horizontal wells, using a semi-synthetic reservoir with characteristics of the Brazilian Northeast basin. The simulations were performed with the commercial simulator STARS (Steam, Thermal, and Advanced Processes Reservoir Simulator) from CMG (Computer Modelling Group). The following operating parameters were analysed: air injection rate, producer-well configuration, and oxygen concentration. A sensitivity study on cumulative oil production (Np) was performed using experimental design, with a mixed two- and three-level model (3²×2²), for a total of 36 runs. A technical-economic estimate was also made for each fluid model. The results showed that the injection rate was the most influential parameter for oil recovery in both models studied, that the best well arrangement depends on the fluid model, and that higher oxygen concentration favours oil recovery. The process can be profitable, depending on the air injection rate.
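The mixed 3²×2² full factorial mentioned above can be enumerated directly; the factor names and levels below are placeholders rather than the study's actual values.

```python
from itertools import product

# Hypothetical factor levels; the study's actual values are not given here.
air_rate     = ["low", "medium", "high"]      # 3 levels
oxygen_conc  = ["21%", "30%", "40%"]          # 3 levels
well_config  = ["config A", "config B"]       # 2 levels
fluid_model  = ["fluid 1", "fluid 2"]         # 2 levels

runs = list(product(air_rate, oxygen_conc, well_config, fluid_model))
print(len(runs))   # 3*3*2*2 = 36 runs, matching the mixed 3^2 x 2^2 design
for r in runs[:3]:
    print(r)
```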