948 results for Random Coefficient Autoregressive Model (RCAR(1))
Abstract:
Refractivity changes (ΔN) derived from radar ground clutter returns serve as a proxy for near-surface humidity changes (1 N unit ≡ 1% relative humidity at 20 °C). Previous studies have indicated that better humidity observations should improve forecasts of convection initiation. A preliminary assessment of the potential of refractivity retrievals from an operational magnetron-based C-band radar is presented. The increased phase noise at shorter wavelengths, exacerbated by the unknown position of the target within the 300 m gate, makes it difficult to obtain absolute refractivity values, so we consider the information in 1 h changes. These have been derived to a range of 30 km with a spatial resolution of ∼4 km; the consistency of the individual estimates (within each 4 km × 4 km area) indicates that ΔN errors are about 1 N unit, in agreement with in situ observations. Measurements from an instrumented tower on summer days show that the 1 h refractivity changes up to a height of 100 m remain well correlated with near-surface values. The analysis of refractivity as represented in the operational Met Office Unified Model at 1.5, 4 and 12 km grid lengths demonstrates that, as model resolution increases, the spatial scales of the refractivity structures improve. It is shown that the magnitude of refractivity changes is progressively underestimated at larger grid lengths during summer. However, the daily time series of 1 h refractivity changes reveal that, whereas the radar-derived values are very well correlated with the in situ observations, even the high-resolution model runs have little skill in getting the right values of ΔN in the right place at the right time. This suggests that the assimilation of these radar refractivity observations could benefit forecasts of the initiation of convection.
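The near-equivalence of 1 N unit and 1% relative humidity at 20 °C quoted above follows from the standard two-term refractivity formula. A minimal sketch in Python, assuming the Bean–Dutton coefficients and a Magnus-type saturation vapour pressure (both standard textbook forms, not taken from this paper):

```python
import numpy as np

def refractivity(p_hpa, t_kelvin, e_hpa):
    """Radio refractivity N from the standard two-term formula
    N = 77.6 P/T + 3.73e5 e/T^2 (Bean & Dutton)."""
    return 77.6 * p_hpa / t_kelvin + 3.73e5 * e_hpa / t_kelvin**2

def vapour_pressure(rh_percent, t_kelvin):
    """Vapour pressure (hPa) from relative humidity via the
    approximate Magnus saturation formula."""
    t_c = t_kelvin - 273.15
    e_sat = 6.112 * np.exp(17.67 * t_c / (t_c + 243.5))
    return rh_percent / 100.0 * e_sat

# At 20 degC and 1013 hPa, a 1% change in RH changes N by ~1 unit:
t, p = 293.15, 1013.0
n1 = refractivity(p, t, vapour_pressure(50.0, t))
n2 = refractivity(p, t, vapour_pressure(51.0, t))
print(n2 - n1)  # ~1 N unit
```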
Abstract:
14C-dated pollen and lake-level data from Europe are used to assess the spatial patterns of climate change between 6000 yr BP and present, as simulated by the NCAR CCM1 (National Center for Atmospheric Research, Community Climate Model, version 1) in response to the change in the Earth’s orbital parameters during this period. First, reconstructed 6000 yr BP values of bioclimate variables obtained from pollen and lake-level data with the constrained-analogue technique are compared with simulated values. Then a 6000 yr BP biome map obtained from pollen data with an objective biome reconstruction (biomization) technique is compared with BIOME model results derived from the same simulation. Data and simulations agree in some features: warmer-than-present growing seasons in N and C Europe allowed forests to extend further north and to higher elevations than today, and warmer winters in C and E Europe prevented boreal conifers from spreading west. More generally, however, the agreement is poor. Predominantly deciduous forest types in Fennoscandia imply warmer winters than the model allows. The model fails to simulate winters cold enough, or summers wet enough, to allow temperate deciduous forests their former extended distribution in S Europe, and it incorrectly simulates a much expanded area of steppe vegetation in SE Europe. Similar errors have also been noted in numerous 6000 yr BP simulations with prescribed modern sea surface temperatures. These errors are evidently not resolved by the inclusion of interactive sea-surface conditions in the CCM1. Accurate representation of mid-Holocene climates in Europe may require the inclusion of dynamical ocean–atmosphere and/or vegetation–atmosphere interactions that most palaeoclimate model simulations have so far disregarded.
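The biomization step mentioned above conventionally assigns each pollen sample to the biome with the highest affinity score, computed as the sum of the square roots of the pollen percentages of the taxa belonging to that biome. A toy sketch, with hypothetical taxon-to-biome assignments (real reconstructions use published plant-functional-type tables):

```python
import numpy as np

# Hypothetical taxon -> biome assignments for illustration only.
biome_taxa = {
    "temperate deciduous forest": {"Quercus", "Corylus", "Tilia"},
    "boreal conifer forest": {"Pinus", "Picea", "Betula"},
    "steppe": {"Artemisia", "Chenopodiaceae"},
}

def biomize(sample):
    """Assign the biome with the highest affinity score, where the
    score is the sum of sqrt(pollen %) over that biome's taxa."""
    scores = {
        biome: sum(np.sqrt(sample.get(t, 0.0)) for t in taxa)
        for biome, taxa in biome_taxa.items()
    }
    return max(scores, key=scores.get)

sample = {"Quercus": 35.0, "Pinus": 10.0, "Artemisia": 2.0}
print(biomize(sample))  # temperate deciduous forest
```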
Abstract:
A one-dimensional surface energy-balance lake model, coupled to a thermodynamic model of lake ice, is used to simulate variations in the temperature of and evaporation from three Estonian lakes: Karujärv, Viljandi and Kirjaku. The model is driven by daily climate data, derived by cubic-spline interpolation from monthly mean data, and was run for periods of 8 years (Kirjaku) up to 30 years (Viljandi). Simulated surface water temperature is in good agreement with observations: mean differences between simulated and observed temperatures are from −0.8°C to +0.1°C. The simulated duration of snow and ice cover is comparable with that observed. However, the model generally underpredicts ice thickness and overpredicts snow depth. Sensitivity analyses suggest that the model results are robust across a wide range (0.1–2.0 m⁻¹) of lake extinction coefficient: surface temperature differs by less than 0.5°C between extreme values of the extinction coefficient. The model results are more sensitive to snow and ice albedos. However, changing the snow (0.2–0.9) and ice (0.15–0.55) albedos within realistic ranges does not improve the simulations of snow depth and ice thickness. The underestimation of ice thickness is correlated with the overestimation of snow cover, since a thick snow layer insulates the ice and limits ice formation. The overestimation of snow cover results from the assumption that all the simulated winter precipitation occurs as snow, a direct consequence of using daily climate data derived by interpolation from mean monthly data.
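The daily forcing described above is obtained by interpolating monthly means; a minimal sketch of that step with SciPy, using illustrative temperatures and mid-month day-of-year knots (note that a plain spline through monthly means does not exactly preserve those means, and, as the abstract points out for precipitation, carries no information about whether precipitation falls as rain or snow):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Twelve monthly mean temperatures (degC) at mid-month day-of-year
# positions; the values here are illustrative.
monthly_t = np.array([-5.0, -4.5, -1.0, 4.0, 10.5, 15.0,
                      17.0, 16.0, 11.0, 6.0, 1.0, -3.0])
mid_month = np.array([15, 45, 74, 105, 135, 166,
                      196, 227, 258, 288, 319, 349])

# Periodic boundary conditions keep the spline smooth across the
# year boundary (31 Dec -> 1 Jan); the first value is repeated at
# day mid_month[0] + 365 as the periodic closure.
spline = CubicSpline(
    np.concatenate([mid_month, [mid_month[0] + 365]]),
    np.concatenate([monthly_t, [monthly_t[0]]]),
    bc_type="periodic",
)
daily_t = spline(np.arange(1, 366))  # one value per day of year
```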
Abstract:
COCO-2 is a model for assessing the potential economic costs likely to arise off-site following an accident at a nuclear reactor. COCO-2 builds on work presented in the model COCO-1, developed in 1991, by considering economic effects in more detail and by including more sources of loss. Of particular note are the consideration of the directly affected local economy, indirect losses that stem from the directly affected businesses, losses due to changes in tourism consumption, integration with the large body of work on recovery after an accident, and a more systematic approach to health costs. The work is, where possible, based on official data sources for reasons of traceability, maintenance and ease of future development. This report describes the methodology and discusses the results of an example calculation. Guidance on how the base economic data can be updated in the future is also provided.
Abstract:
Catalysts containing NiO/MgO/ZrO₂ mixtures were synthesized by the polymerization method in a single step. They were characterized by X-ray diffraction (XRD), temperature-programmed reduction (TPR) and N₂ physisorption (BET) and then tested in the reforming of a model biogas (1.5 CH₄ : 1 CO₂) in the presence of air (1.5 CH₄ + 1 CO₂ + 0.25 O₂) at 750 °C for 6 h. It was observed that the catalyst Ni20MZ performed better in catalytic processes than the well-known catalysts Ni/ZrO₂ and Ni/MgO synthesized under the same conditions. The formation of solid solutions, MgO–ZrO₂ and NiO–MgO, increased the rate of conversion of the reactants (CH₄ and CO₂) into synthesis gas (H₂ + CO). The formation of oxygen vacancies (in samples containing ZrO₂ and MgO) seems to promote removal of the coke deposited on the nickel surface. The values of the H₂/CO ratio were generally found to be slightly lower than stoichiometric, owing to the reverse water gas shift reaction occurring in parallel.
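The sub-stoichiometric H₂/CO ratio can be read directly from the reaction stoichiometry: dry reforming alone gives H₂/CO = 1, while the reverse water gas shift (RWGS) consumes H₂ and produces CO, pulling the ratio below unity:

```latex
\begin{align*}
  \text{Dry reforming:} \quad & \mathrm{CH_4 + CO_2 \rightarrow 2\,H_2 + 2\,CO}
      && \Rightarrow\ \mathrm{H_2/CO} = 1 \\
  \text{RWGS:} \quad & \mathrm{CO_2 + H_2 \rightarrow CO + H_2O}
      && \Rightarrow\ \mathrm{H_2/CO} < 1
\end{align*}
```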
Abstract:
Objective: Turnover of the extracellular matrix in all solid organs is governed mainly by a balance between the degrading matrix metalloproteinases (MMPs) and their tissue inhibitors (TIMPs). An altered extracellular matrix metabolism has been implicated in a variety of diseases. We investigated the relations of serum levels of MMP-9 and TIMP-1 to mortality risk from an etiological perspective. Design: The prospective Uppsala Longitudinal Study of Adult Men (ULSAM) cohort, enrolled in 1991–1995 and followed for up to 18.1 years; a random population-based sample of 1,082 71-year-old men with no loss to follow-up. Endpoints were all-cause (n = 628), cardiovascular (n = 230), non-cardiovascular (n = 398) and cancer mortality (n = 178), and fatal or non-fatal myocardial infarction (n = 138) or stroke (n = 163). Results: Serum MMP-9 and TIMP-1 levels were associated with risk of all-cause mortality (Cox proportional hazard ratio [HR] per standard deviation 1.10, 95% confidence interval [CI] 1.03–1.19; and 1.11, 1.02–1.20, respectively). TIMP-1 levels were mainly related to risks of cardiovascular mortality and stroke (HR per standard deviation 1.22, 95% CI 1.09–1.37; and 1.18, 1.04–1.35, respectively). All relations except those of TIMP-1 to stroke risk were attenuated by adjustment for cardiovascular disease risk factors. Relations in a subsample without cardiovascular disease or cancer were similar to those in the total sample. Conclusion: In this community-based cohort of elderly men, serum MMP-9 and TIMP-1 levels were related to mortality risk. An altered extracellular matrix metabolism may be involved in several detrimental pathways, and circulating MMP-9 or TIMP-1 levels may be relevant markers thereof.
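A hazard ratio "per standard deviation" of the kind reported above comes from a Cox proportional hazards model fitted to a standardized biomarker. A minimal sketch with the lifelines package, using hypothetical file and column names:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical data: follow-up time in years, death indicator,
# and the serum biomarker of interest.
df = pd.read_csv("ulsam_subset.csv")  # hypothetical file

# Standardize the biomarker so the hazard ratio is "per SD".
df["mmp9_sd"] = (df["mmp9"] - df["mmp9"].mean()) / df["mmp9"].std()

cph = CoxPHFitter()
cph.fit(df[["time", "death", "mmp9_sd"]],
        duration_col="time", event_col="death")
cph.print_summary()  # exp(coef) is the HR per SD, with 95% CI
```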
Abstract:
Gibrat's law predicts that firm growth is purely random and should be independent of firm size. We use a random effects-random coefficient model to test whether Gibrat's law holds on average in the studied sample as well as at the individual firm level in the Swedish energy market. No study has yet investigated whether Gibrat's law holds for individual firms, previous studies having instead estimated whether the law holds on average in the samples studied. The present results support the claim that Gibrat's law is more likely to be rejected ex ante when an entire firm population is considered, but more likely to be confirmed ex post after market selection has "cleaned" the original population of firms or when the analysis treats more disaggregated data. From a theoretical perspective, the results are consistent with models based on passive and active learning, indicating a steady state in the firm expansion process and that Gibrat's law is violated in the short term but holds in the long term once firms have reached a steady state. These results indicate that approximately 70 % of firms in the Swedish energy sector are in steady state, with only random fluctuations in size around that level over the 15 studied years.
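A random effects-random coefficient test of Gibrat's law can be set up as a mixed model in which each firm receives its own intercept and its own slope on lagged log size: the law holds on average if the fixed slope equals one, and the random-slope variance measures firm-level departures. A minimal sketch with statsmodels, using hypothetical file and variable names:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per firm-year, with log size and
# its one-year lag.
df = pd.read_csv("firm_panel.csv")  # hypothetical file

# Random coefficient model: firm-specific intercept and slope on
# lagged log size (re_formula adds the random slope).
m = smf.mixedlm("lsize ~ lsize_lag", df, groups=df["firm"],
                re_formula="~lsize_lag").fit()
print(m.summary())

# Gibrat's law holds on average if the fixed coefficient on
# lsize_lag is 1; a large random-slope variance signals firms
# for which the law fails individually.
```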
Abstract:
This paper contributes to the debate on whether the Brazilian public debt is sustainable in the long run by considering threshold effects on the Brazilian budget deficit. Using data from 1947 to 1999 and a threshold autoregressive model, we find evidence of delays in fiscal stabilization. As suggested in Alesina (1991), delayed stabilizations reflect the existence of political constraints blocking deficit cuts, which are relaxed only when the budget deficit reaches a sufficiently high level, deemed to be unsustainable. In particular, our results suggest that, in the absence of seignorage, only when the increase in the budget deficit reaches 1.74% of GDP will fiscal authorities intervene to reduce the deficit. If seignorage is allowed, the threshold increases to 2.2%, suggesting that seignorage makes government more tolerant of fiscal imbalances.
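A two-regime threshold autoregressive fit of this kind can be estimated by grid-searching the threshold and running OLS separately in each regime. A minimal self-exciting TAR(1) sketch (not the paper's exact specification):

```python
import numpy as np

def fit_tar(y, trim=0.15):
    """Two-regime self-exciting TAR(1) with threshold variable
    y_{t-1}: grid-search the threshold c that minimizes the total
    sum of squared residuals of the two regime-wise OLS fits."""
    yt, ylag = y[1:], y[:-1]
    candidates = np.sort(ylag)
    lo = int(trim * len(candidates))      # trim to keep both regimes populated
    best_ssr, best_c = np.inf, None
    for c in candidates[lo:len(candidates) - lo]:
        ssr = 0.0
        for mask in (ylag <= c, ylag > c):
            X = np.column_stack([np.ones(mask.sum()), ylag[mask]])
            beta, *_ = np.linalg.lstsq(X, yt[mask], rcond=None)
            ssr += np.sum((yt[mask] - X @ beta) ** 2)
        if ssr < best_ssr:
            best_ssr, best_c = ssr, c
    return best_c  # estimated threshold (cf. the 1.74% of GDP above)
```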
Abstract:
This study tests the predictive quality of the two-factor Vasicek model coupled with the Kalman filter. For an investment strategy application, we include a stop-loss criterion for the periods in which the model does not respond satisfactorily to interest rate movements. Using DI futures contracts traded on BMFBovespa between 1 March 2007 and 30 May 2014, simulations were carried out at different market moments, checking which window is best for estimating the model parameters and for how long those parameters optimally capture the behaviour of interest rates. The results were compared with those obtained from a first-order vector autoregressive (VAR(1)) model, and it was found that the Kalman filter applied to the two-factor Vasicek model is not the most suitable for interest rate forecasting: the model is limited by its inability to fit the entire yield curve at once, which degrades its results.
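The mechanics of coupling a Vasicek model to the Kalman filter can be illustrated with a single latent factor (the study uses two factors; the loadings and noise levels below are made up for illustration):

```python
import numpy as np

def kalman_vasicek(y, kappa, theta, sigma, a, b, r, dt=1 / 252):
    """Minimal univariate Kalman filter for a Vasicek latent factor.
    State:  x_t = theta*(1-phi) + phi*x_{t-1} + w,  phi = exp(-kappa*dt)
    Obs:    y_t = a + b*x_t + v,  Var(v) = r
    """
    phi = np.exp(-kappa * dt)
    q = sigma**2 * (1 - phi**2) / (2 * kappa)  # state noise variance
    x, p = theta, sigma**2 / (2 * kappa)       # stationary initialization
    filtered = []
    for obs in y:
        # Predict
        x = theta * (1 - phi) + phi * x
        p = phi**2 * p + q
        # Update
        s = b**2 * p + r          # innovation variance
        k = p * b / s             # Kalman gain
        x = x + k * (obs - (a + b * x))
        p = (1 - k * b) * p
        filtered.append(x)
    return np.array(filtered)
```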
Abstract:
Genetic parameters for weights from birth to 570 days of age were estimated for 35,308 animals of the Tabapuã breed, born between 1975 and 2000 and recorded by ABCZ (Associação Brasileira de Criadores de Zebu), under three distinct models. Model 1 included the direct additive genetic effect as random, plus the fixed effects of contemporary group (defined by owner, herd, breeder, breeder's herd, sex, rearing condition, year and month of birth, and year and month of weighing) and the linear and quadratic effects of calf age and of cow age at calving as covariates. Model 2 comprised, in addition to the effects above, the maternal permanent environmental effect. Model 3 included the direct and maternal additive genetic effects and the maternal permanent environmental effect as random, together with the same fixed effects as model 1. According to the likelihood ratio test, model 3 was the most adequate for fitting the effects studied. Direct heritability estimates were low to moderate (0.08 to 0.26), decreasing from birth to subsequent ages, with peaks at 90 and 180 days of age. At 345 days of age the estimates rose again, with less oscillation among subsequent estimates up to 570 days of age. Maternal heritability estimates were low, being highest around weaning.
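The model comparison above rests on the likelihood ratio test between nested (RE)ML fits; a minimal sketch with illustrative log-likelihoods (standard chi-square reference distribution, boundary corrections for variance components aside):

```python
from scipy.stats import chi2

def lrt(loglik_simple, loglik_full, df):
    """Likelihood ratio test between nested fits, e.g. model 1
    (direct genetic effect only) vs model 3 (direct + maternal
    genetic + maternal permanent environment)."""
    stat = 2.0 * (loglik_full - loglik_simple)
    return stat, chi2.sf(stat, df)

# Illustrative log-likelihoods, two extra (co)variance components:
stat, p = lrt(-10234.5, -10210.1, df=2)
print(stat, p)
```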
Abstract:
This letter describes a novel algorithm, based on autoregressive decomposition and pole tracking, used to recognize two patterns of speech data: normal voice and dysphonic voice caused by nodules. The presented method relates the poles to the peaks of the signal spectrum, which represent the periodic components of the voice. The results show that the perturbation contained in the signal is clearly depicted by the pole positions, whose variability is related to jitter and shimmer. The pole dispersion for pathological voices is about 20% higher than for normal voices; the proposed approach is therefore a more trustworthy measure than the classical ones.
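A minimal sketch of the autoregressive decomposition step, assuming the autocorrelation (Yule–Walker) method for the AR coefficients; the poles of 1/A(z) are then tracked frame by frame and their scatter summarized:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def ar_poles(frame, order=12):
    """AR (LPC) coefficients via the Yule-Walker equations on the
    frame's autocorrelation, then the poles of 1/A(z)."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])
    return np.roots(np.concatenate([[1.0], -a]))

# Track the pole cloud frame by frame; a wider scatter around each
# spectral peak is the cue the letter associates with pathology.
rng = np.random.default_rng(0)
frames = rng.standard_normal((50, 512))   # placeholder for voiced frames
clouds = [ar_poles(f) for f in frames]
dispersion = np.std([np.abs(p) for c in clouds for p in c])
print(dispersion)
```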