971 results for Model error
Abstract:
As climate changes, temperatures will play an increasing role in determining crop yield. Both climate model error and poorly constrained physiological thresholds limit the predictability of yield. We used a perturbed-parameter climate model ensemble, with two methods of bias-correction, as input to a regional-scale wheat simulation model over India to examine future yields. This model configuration accounted for uncertainty in climate, planting date, optimization, and temperature-induced changes in development rate and reproduction. It also accounted for lethal temperatures, which have been somewhat neglected to date. Using uncertainty decomposition, we found that the fractional uncertainty due to temperature-driven processes in the crop model was on average larger than the climate model uncertainty (0.56 versus 0.44), and that the crop model uncertainty was dominated by crop development. Simulations with the raw and the bias-corrected climate data did not agree on the impact on future wheat yield, nor on its geographical distribution. However, the method of bias-correction was not an important source of uncertainty. We conclude that bias-correction of climate model data and improved constraints, especially on crop development, are critical for robust impact predictions.
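As an illustration of the uncertainty decomposition mentioned above, the following is a minimal sketch with hypothetical yield arrays indexed by climate-model member and crop-model parameter set; the array shapes, numbers and simple variance partition are assumptions for illustration, not the study's actual method.

```python
import numpy as np

# Hypothetical yield simulations: rows = climate-model ensemble members,
# columns = crop-model (temperature-response) parameter sets.
rng = np.random.default_rng(0)
yields = 3.0 + 0.3 * rng.standard_normal((17, 25))  # t/ha, illustrative values

# Variance attributable to each factor, estimated from the marginal means.
climate_var = np.var(yields.mean(axis=1))  # spread across climate members
crop_var = np.var(yields.mean(axis=0))     # spread across crop parameter sets
total = climate_var + crop_var

print(f"fractional uncertainty, climate model: {climate_var / total:.2f}")
print(f"fractional uncertainty, crop model:    {crop_var / total:.2f}")
```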
Abstract:
The evidence for anthropogenic climate change continues to strengthen, and concerns about severe weather events are increasing. As a result, scientific interest is rapidly shifting from detection and attribution of global climate change to prediction of its impacts at the regional scale. However, nearly everything in which we have confidence regarding climate change relates to global patterns of surface temperature, which are primarily controlled by thermodynamics. In contrast, we have much less confidence in the atmospheric circulation aspects of climate change, which are primarily controlled by dynamics and exert a strong control on regional climate. Model projections of circulation-related fields, including precipitation, show a wide range of possible outcomes, even on centennial timescales. Sources of uncertainty include low-frequency chaotic variability and the sensitivity to model error of the circulation response to climate forcing. As the circulation response to external forcing appears to project strongly onto existing patterns of variability, knowledge of errors in the dynamics of variability may provide some constraints on model projections. Nevertheless, higher scientific confidence in circulation-related aspects of climate change will be difficult to obtain. For effective decision-making, it is necessary to move to a more explicitly probabilistic, risk-based approach.
Abstract:
Stochastic methods are a crucial area of contemporary climate research and are increasingly being used in comprehensive weather and climate prediction models as well as in reduced-order climate models. Stochastic methods are used as subgrid-scale parameterizations (SSPs) as well as for model error representation, uncertainty quantification, data assimilation, and ensemble prediction. The need for stochastic approaches in weather and climate models arises because we still cannot resolve all necessary processes and scales in comprehensive numerical weather and climate prediction models. In many practical applications one is mainly interested in the largest and potentially predictable scales, and not necessarily in the small and fast scales. For instance, reduced-order models can simulate and predict large-scale modes. Statistical mechanics and dynamical systems theory suggest that in reduced-order models the impact of unresolved degrees of freedom can be represented by suitable combinations of deterministic and stochastic components and non-Markovian (memory) terms. Stochastic approaches in numerical weather and climate prediction models also lead to a reduction of model biases. Hence, there is a clear need for systematic stochastic approaches in weather and climate modeling. In this review, we present evidence for stochastic effects in laboratory experiments. Then we provide an overview of stochastic climate theory from an applied mathematics perspective. We also survey the current use of stochastic methods in comprehensive weather and climate prediction models and show that stochastic parameterizations have the potential to remedy many of the current biases in these comprehensive models.
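As a hedged illustration of the kind of stochastic parameterization surveyed here (not the scheme of any particular operational model), the sketch below perturbs a toy deterministic tendency with a multiplicative red-noise (AR(1)) factor standing in for unresolved variability.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, tau, sigma = 0.1, 5.0, 0.3
phi = np.exp(-dt / tau)             # AR(1) memory of the stochastic perturbation

x, e = 1.0, 0.0                     # resolved state and red-noise perturbation
for _ in range(100):
    # Red-noise perturbation standing in for unresolved, subgrid-scale variability.
    e = phi * e + np.sqrt(1.0 - phi**2) * sigma * rng.standard_normal()
    tendency = -x                   # toy deterministic tendency f(x) = -x
    x += dt * (1.0 + e) * tendency  # multiplicatively perturbed tendency
print(f"final state: {x:.3f}")
```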
Abstract:
In general, particle filters need large numbers of model runs in order to avoid filter degeneracy in high-dimensional systems. The recently proposed, fully nonlinear equivalent-weights particle filter overcomes this requirement by replacing the standard model transition density with two different proposal transition densities. The first proposal density is used to relax all particles towards the high-probability regions of state space as defined by the observations. The crucial second proposal density is then used to ensure that the majority of particles have equivalent weights at observation time. Here, the performance of the scheme is explored in a simplified ocean model with 65,500 dimensions. The success of the equivalent-weights particle filter in matching the true model state is shown using the mean of just 32 particles in twin experiments. It is of particular significance that this remains true even as the number and spatial variability of the observations are changed. The results from rank histograms are less easy to interpret and can be influenced considerably by the parameter values used. This article also explores the sensitivity of the scheme's performance to the chosen parameter values, and the effect of using different model error parameters in the truth than in the ensemble model runs.
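For orientation, here is a minimal sketch of the standard importance-weighting and resampling step that the equivalent-weights filter modifies; the toy scalar model, Gaussian observation error and systematic resampling are assumptions for illustration and do not reproduce the scheme's proposal densities.

```python
import numpy as np

rng = np.random.default_rng(2)
n_particles, obs_std = 32, 0.5

particles = rng.standard_normal(n_particles)           # prior ensemble (toy scalar state)
truth = 0.8
observation = truth + obs_std * rng.standard_normal()

# Standard importance weights from the Gaussian observation likelihood.
log_w = -0.5 * ((observation - particles) / obs_std) ** 2
weights = np.exp(log_w - log_w.max())
weights /= weights.sum()

# Systematic resampling: duplicate high-weight particles, drop low-weight ones.
positions = (rng.random() + np.arange(n_particles)) / n_particles
indices = np.searchsorted(np.cumsum(weights), positions).clip(0, n_particles - 1)
particles = particles[indices]

print(f"posterior mean estimate: {particles.mean():.3f} (truth {truth})")
```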
Abstract:
4-Dimensional Variational Data Assimilation (4DVAR) assimilates observations through the minimisation of a least-squares objective function, which is constrained by the model flow. We refer to 4DVAR as strong-constraint 4DVAR (sc4DVAR) in this thesis as it assumes the model is perfect. Relaxing this assumption gives rise to weak-constraint 4DVAR (wc4DVAR), leading to a different minimisation problem with more degrees of freedom. We consider two wc4DVAR formulations in this thesis: the model error formulation and the state estimation formulation. The 4DVAR objective function is traditionally solved using gradient-based iterative methods. The principal method used in Numerical Weather Prediction today is the Gauss-Newton approach. This method introduces a linearised `inner-loop' objective function which, upon convergence, updates the solution of the non-linear `outer-loop' objective function. This requires many evaluations of the objective function and its gradient, which emphasises the importance of the Hessian. The eigenvalues and eigenvectors of the Hessian provide insight into the degree of convexity of the objective function, while also indicating the difficulty one may encounter when iteratively solving 4DVAR. The condition number of the Hessian is an appropriate measure of the sensitivity of the problem to the input data, and it can also indicate the rate of convergence and the solution accuracy of the minimisation algorithm. This thesis investigates the sensitivity of the minimisation of both wc4DVAR objective functions to the internal assimilation parameters that compose the problem. We gain insight into these sensitivities by bounding the condition number of the Hessians of both objective functions. We also precondition the model error objective function and show improved convergence. Using the bounds, we show that the sensitivities of both formulations are related to the error variance balance, the assimilation window length and the correlation length-scales. We further demonstrate this through numerical experiments on the condition number and data assimilation experiments using linear and non-linear chaotic toy models.
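For reference, a standard form of the weak-constraint (model error formulation) objective function, written in generic notation that may differ from the thesis's own:

```latex
\begin{aligned}
J(x_0,\eta_1,\dots,\eta_N) ={}& \tfrac{1}{2}\,(x_0 - x_b)^{\mathsf T} B^{-1}(x_0 - x_b)
  + \tfrac{1}{2}\sum_{i=0}^{N} \bigl(y_i - \mathcal{H}_i(x_i)\bigr)^{\mathsf T} R_i^{-1}\bigl(y_i - \mathcal{H}_i(x_i)\bigr) \\
  &+ \tfrac{1}{2}\sum_{i=1}^{N} \eta_i^{\mathsf T} Q_i^{-1}\eta_i,
  \qquad \eta_i = x_i - \mathcal{M}_i(x_{i-1}).
\end{aligned}
```

Setting all model errors to zero recovers sc4DVAR, and the conditioning of the Hessian of J with respect to the chosen control variables is what governs the convergence behaviour discussed above.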
Abstract:
In numerical weather prediction, parameterisations are used to simulate missing physics in the model, whether due to a lack of scientific understanding or to a lack of computing power to address all the known physical processes. Parameterisations are a source of large uncertainty in a model, as the parameter values they use cannot be measured directly and hence are often not well known, and the parameterisations themselves are approximations of the processes present in the true atmosphere. Whilst there are many efficient and effective methods for combined state/parameter estimation in data assimilation (DA), such as state augmentation, these are not effective at estimating the structure of parameterisations. A new method of parameterisation estimation is proposed that uses sequential DA methods to estimate errors in the numerical model at each space-time point for each model equation. These errors are then fitted to pre-determined functional forms of the missing physics or parameterisations, based upon prior information. We applied the method to a one-dimensional advection model with additive model error, and we show that the method can accurately estimate parameterisations, with consistent error estimates. Furthermore, we show how the method depends on the quality of the DA results. The results indicate that this new method is a powerful tool for systematic model improvement.
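A minimal sketch of the final fitting step described above, assuming per-gridpoint model-error estimates from the DA step are already available; the arrays, the candidate basis functions and the "true" missing physics are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical DA output: estimated model error at each space-time point,
# together with the local coordinate x of those points.
x = np.linspace(0.0, 2.0 * np.pi, 200)
true_missing_physics = 0.4 * np.sin(x) - 0.1 * x             # unknown to the model
error_estimates = true_missing_physics + 0.05 * rng.standard_normal(x.size)

# Pre-determined candidate functional forms for the missing parameterisation.
basis = np.column_stack([np.sin(x), np.cos(x), x, np.ones_like(x)])

# Least-squares fit of the estimated errors to the candidate forms.
coeffs, *_ = np.linalg.lstsq(basis, error_estimates, rcond=None)
print("fitted coefficients (sin, cos, linear, const):", np.round(coeffs, 3))
```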
Abstract:
The coexistence between different types of templates has been the choice solution to the information crisis of prebiotic evolution, triggered by the finding that a single RNA-like template cannot carry enough information to code for any useful replicase. In principle, confining d distinct templates of length L in a package or protocell, whose survival depends on the coexistence of the templates it holds, could resolve this crisis provided that d is made sufficiently large. Here we review the prototypical package model of Niesert et al. [1981. Origin of life between Scylla and Charybdis. J. Mol. Evol. 17, 348-353], which guarantees the greatest possible region of viability of the protocell population, and show that this model, and hence the entire package approach, does not resolve the information crisis. In particular, we show that the total information stored in a viable protocell (Ld) tends to a constant value that depends only on the spontaneous error rate per nucleotide of the template replication mechanism. As a result, an increase of d must be followed by a decrease of L, so that the net information gain is null.
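A hedged back-of-the-envelope reading of that conclusion (a simplification, not the authors' full analysis): if each nucleotide is replicated with spontaneous error rate u, the probability that all d templates of length L are copied without error is

```latex
P_{\text{error-free}} = (1 - u)^{Ld} \approx e^{-uLd},
```

which depends on L and d only through the product Ld; holding the viability requirement fixed therefore pins Ld to a value set by u alone, so any gain in d must be paid for by a loss in L.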
Abstract:
This work presents the results of propagation-channel modelling based on multivariate time series, using data collected in measurement campaigns together with the main urbanization characteristics of eleven streets in the city centre of Belém-PA. Transfer-function models were used to evaluate effects on the time series of received signal power (dBm), which was used as the response variable, with building heights and the distances between buildings as explanatory variables. Since time-series models disregard possible correlations between neighbouring samples, a geostatistical model was used to correct the error of this model. This phase of the work consisted of the set of procedures required by geostatistical techniques, with the goal of a two-dimensional analysis of spatially distributed data, namely the interpolation of surfaces generated from the georeferenced samples of received-signal-power residuals computed with the time-series model. The results obtained with the proposed model show good performance, with a mean squared error on the order of 0.33 dB relative to the measured signal, considering the data from the eleven streets of the urban centre of Belém-PA. From the map of the spatial distribution of received signal power (dBm), the zones that are under- or over-dimensioned in terms of this variable, that is, favoured or disadvantaged with respect to signal reception, can be easily identified, which may lead the local operator (mobile telephone carrier) to invest more in the regions where the signal is weak.
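A minimal sketch of the residual-interpolation idea described above, using simple kriging with an exponential covariance as a stand-in for the full geostatistical workflow; the coordinates, residuals and covariance parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical georeferenced residuals (dB) of the received power left over by
# the time-series (transfer-function) model; coordinates in metres.
coords = rng.uniform(0.0, 500.0, size=(60, 2))
residuals = rng.normal(0.0, 1.0, size=60)

def exp_cov(h, sill=1.0, corr_range=150.0):
    """Exponential covariance model (sill and range are illustrative)."""
    return sill * np.exp(-h / corr_range)

def simple_krige(target, coords, values):
    """Simple-kriging estimate of the residual at one target location."""
    d_obs = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    d_tgt = np.linalg.norm(coords - target, axis=-1)
    weights = np.linalg.solve(exp_cov(d_obs) + 1e-6 * np.eye(len(values)),
                              exp_cov(d_tgt))
    return weights @ values

correction = simple_krige(np.array([250.0, 250.0]), coords, residuals)
print(f"kriged residual correction at (250, 250): {correction:.3f} dB")
```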
Abstract:
Brazilian design code ABNT NBR6118:2003 - Design of Concrete Structures - Procedures [1] proposes the use of simplified models for the consideration of non-linear material behavior in the evaluation of horizontal displacements in buildings. These models penalize the stiffness of columns and beams, representing the effects of concrete cracking and avoiding costly physical non-linear analyses. The objectives of the present paper are to investigate the accuracy and uncertainty of these simplified models, as well as to evaluate the reliabilities of structures designed following ABNT NBR6118:2003 [1] in the service limit state for horizontal displacements. Model error statistics are obtained from 42 representative plane frames. The reliabilities of three typical (4, 8 and 12 floor) buildings are evaluated, using the simplified models and a rigorous, physically and geometrically non-linear analysis. Results show that the 70/70 (column/beam stiffness reduction) model is more accurate and less conservative than the 80/40 model. Results also show that the ABNT NBR6118:2003 [1] design criteria for horizontal displacement limit states (masonry damage according to ACI 435.3R-68 (1984) [10]) are conservative, and result in reliability indexes larger than those recommended in EUROCODE [2] for irreversible service limit states.
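A minimal sketch of how model error statistics of this kind are typically assembled; the displacement values below are placeholders, not results for the 42 frames of the study.

```python
import numpy as np

# Placeholder horizontal displacements (cm) for a handful of frames:
# "rigorous"   = full physically and geometrically non-linear analysis,
# "simplified" = penalized-stiffness (e.g. 70/70 or 80/40) model.
rigorous   = np.array([2.10, 3.45, 1.80, 4.20, 2.95])
simplified = np.array([2.30, 3.30, 1.95, 4.60, 3.10])

# Model error variable: ratio of rigorous to simplified displacement.
theta = rigorous / simplified
print(f"model error mean:   {theta.mean():.3f}")
print(f"model error c.o.v.: {theta.std(ddof=1) / theta.mean():.3f}")
```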
Abstract:
The Assimilation in the Unstable Subspace (AUS) was introduced by Trevisan and Uboldi in 2004, and developed by Trevisan, Uboldi and Carrassi, to minimize the analysis and forecast errors by exploiting the flow-dependent instabilities of the forecast-analysis cycle system, which may be thought of as a system forced by observations. In the AUS scheme the assimilation is obtained by confining the analysis increment to the unstable subspace of the forecast-analysis cycle system, so that it has the same structure as the dominant instabilities of the system. The unstable subspace is estimated by Breeding on the Data Assimilation System (BDAS). AUS-BDAS has already been tested in realistic models and observational configurations, including a Quasi-Geostrophic model and a high-dimensional, primitive equation ocean model; the experiments include both fixed and "adaptive" observations. In these contexts, the AUS-BDAS approach greatly reduces the analysis error, with reasonable computational costs for data assimilation compared, for example, with a prohibitive full Extended Kalman Filter. This is a follow-up study in which we revisit the AUS-BDAS approach in the more basic, highly nonlinear Lorenz 1963 convective model. We run observing system simulation experiments in a perfect model setting, and with two types of model error as well: random and systematic. In the different configurations examined, and in a perfect model setting, AUS once again shows better efficiency than other advanced data assimilation schemes. In the present study, we develop an iterative scheme that leads to a significant improvement of the overall assimilation performance with respect to standard AUS as well. In particular, it boosts the efficiency of tracking regime changes, at a low computational cost. Other data assimilation schemes need estimates of ad hoc parameters, which have to be tuned for the specific model at hand. In Numerical Weather Prediction models, tuning of parameters, and in particular estimating the model error covariance matrix, may turn out to be quite difficult. Our proposed approach, instead, may be easier to implement in operational models.
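A toy illustration of the two ingredients named above, breeding on the Lorenz 1963 model and confining an increment to the bred (unstable) direction; the rescaling interval, amplitude and innovation are arbitrary, and the projection omits the observation operator and error weighting of the actual AUS analysis.

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def step(state, dt=0.01):
    # Fourth-order Runge-Kutta step.
    k1 = lorenz63(state)
    k2 = lorenz63(state + 0.5 * dt * k1)
    k3 = lorenz63(state + 0.5 * dt * k2)
    k4 = lorenz63(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(5)
control = np.array([1.0, 1.0, 1.0])
amplitude = 1e-3
perturbed = control + amplitude * rng.standard_normal(3)

# Breeding: run control and perturbed trajectories, periodically rescaling the difference.
for _ in range(50):
    for _ in range(8):
        control, perturbed = step(control), step(perturbed)
    bred = perturbed - control
    bred *= amplitude / np.linalg.norm(bred)
    perturbed = control + bred

unstable_dir = bred / np.linalg.norm(bred)

# AUS-style increment: project a raw (obs - forecast) innovation onto the bred direction only.
innovation = np.array([0.5, -0.2, 0.1])
analysis_increment = (innovation @ unstable_dir) * unstable_dir
print("analysis increment confined to bred direction:", np.round(analysis_increment, 3))
```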
Abstract:
This research activity studied how uncertainties arise and are interrelated in the multi-model approach, since this appears to be the biggest challenge in ocean and weather forecasting. Moreover, we tried to reduce model error through the superensemble approach. To this end, we created different datasets and, by means of suitable algorithms, obtained the superensemble estimate. We studied the sensitivity of this algorithm as a function of its characteristic parameters. Clearly, a reasonable estimate of the error cannot be obtained while neglecting the grid size of the ocean model, because of the many sub-grid phenomena embedded in the spatial discretization that can only be roughly parameterized rather than explicitly resolved. For this reason we also developed a high-resolution model, in order to calculate for the first time the impact of grid resolution on model error.
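At its core the superensemble is a regression-trained weighted combination of individual model forecasts; the following minimal sketch uses synthetic forecasts and a training period that are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(6)
n_times, n_models = 200, 4

# Hypothetical training period: forecasts from several models plus the observed truth.
truth = np.sin(np.linspace(0.0, 10.0, n_times))
biases = np.array([0.2, -0.1, 0.05, -0.3])
spreads = np.array([0.3, 0.5, 0.4, 0.6])
forecasts = truth[:, None] + biases + rng.normal(0.0, spreads, (n_times, n_models))

# Superensemble: least-squares weights (plus intercept) fitted on the training
# period, then used to combine the individual forecasts.
design = np.column_stack([np.ones(n_times), forecasts])
weights, *_ = np.linalg.lstsq(design, truth, rcond=None)
superensemble = design @ weights

print(f"ensemble-mean RMSE: {np.sqrt(np.mean((forecasts.mean(axis=1) - truth) ** 2)):.3f}")
print(f"superensemble RMSE: {np.sqrt(np.mean((superensemble - truth) ** 2)):.3f}")
```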
Abstract:
High-resolution reconstructions of climate variability that cover the past millennia are necessary to improve the understanding of natural and anthropogenic climate change across the globe. Although numerous records are available for the mid- and high-latitudes of the Northern Hemisphere, global assessments are still compromised by the scarcity of data from the Southern Hemisphere. This is particularly the case for the tropical and subtropical areas. In addition, high elevation sites in the South American Andes may provide insight into the vertical structure of climate change in the mid-troposphere. This study presents a 3000 yr-long austral summer (November to February) temperature reconstruction derived from the 210Pb- and 14C-dated organic sediments of Laguna Chepical (32°16' S, 70°30' W, 3050 m a.s.l.), a high-elevation glacial lake in the subtropical Andes of central Chile. Scanning reflectance spectroscopy in the visible light range provided the spectral index R570/R630, which reflects the clay mineral content in lake sediments. For the calibration period (AD 1901-2006), the R570/R630 data were regressed against monthly meteorological reanalysis data, showing that this proxy was strongly and significantly correlated with mean summer (NDJF) temperatures (R(3 yr) = -0.63, p(adj) = 0.01). This calibration model was used to make a quantitative temperature reconstruction back to 1000 BC. The reconstruction (with a bootstrapped model error RMSEP of 0.33 °C) shows that the warmest decades of the past 3000 yr occurred during the calibration period. The 19th century (end of the Little Ice Age (LIA)) was cool. The prominent warmth reconstructed for the 18th century, which was also observed in other records from this area, seems systematic for subtropical and southern South America but remains difficult to explain. Except for this warm period, the LIA was generally characterized by cool summers. Back to AD 1400, the results from this study compare remarkably well to low altitude records from the Chilean Central Valley and southern South America. However, the reconstruction from Laguna Chepical does not show a warm Medieval Climate Anomaly during the 12th-13th centuries, which is consistent with records from tropical South America. The Chepical record also indicates substantial cooling prior to 800 BC. This coincides with well-known regional as well as global glacier advances which have been attributed to a grand solar minimum. This study thus provides insight into the climatic drivers and temperature patterns in a region for which currently very few data are available. It also shows that since ca. AD 1400, long-term temperature patterns were generally similar at low and high altitudes in central Chile.
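A generic sketch of proxy calibration with a bootstrapped prediction error (RMSEP) of the kind quoted above; the synthetic data and simple linear regression are assumptions and do not reproduce the study's 3-yr filtered calibration.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100

# Hypothetical calibration data: spectral index R570/R630 versus NDJF temperature.
temperature = 12.0 + 1.5 * rng.standard_normal(n)               # degrees C
index = 0.9 - 0.05 * temperature + 0.05 * rng.standard_normal(n)

# Inverse regression used for reconstruction: temperature as a function of the proxy.
slope, intercept = np.polyfit(index, temperature, 1)
print(f"calibration: T = {slope:.1f} * index + {intercept:.1f}")

# Bootstrap estimate of the prediction error (RMSEP) of the calibration model.
errors = []
for _ in range(500):
    sample = rng.integers(0, n, n)
    out_of_bag = np.setdiff1d(np.arange(n), sample)
    s, b = np.polyfit(index[sample], temperature[sample], 1)
    pred = s * index[out_of_bag] + b
    errors.append(np.mean((pred - temperature[out_of_bag]) ** 2))
print(f"bootstrap RMSEP: {np.sqrt(np.mean(errors)):.2f} degrees C")
```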
Abstract:
The principles of High Performance Liquid Chromatography (HPLC) and pharmacokinetics were applied to the use of several clinically-important drugs at the East Birmingham Hospital. Amongst these was gentamicin, which was investigated over a two-year period by a multi-disciplinary team. It was found that there was considerable intra- and inter-patient variation that had not previously been reported and the causes and consequences of such variation were considered. A detailed evaluation of available pharmacokinetic techniques was undertaken and 1- and 2-compartment models were optimised with regard to sampling procedures, analytical error and model-error. The implications for control of therapy are discussed and an improved sampling regime is proposed for routine usage. Similar techniques were applied to trimethoprim, assayed by HPLC, in patients with normal renal function and investigations were also commenced into the penetration of drug into peritoneal dialysate. Novel assay techniques were also developed for a range of drugs including 4-aminopyridine, chloramphenicol, metronidazole and a series of penicillins and cephalosporins. Stability studies on cysteamine, reaction-rate studies on creatinine-picrate and structure-activity relationships in HPLC of aminopyridines are also reported.
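As a minimal sketch of the simpler of the two compartment models mentioned above, a one-compartment intravenous-bolus model can be fitted by log-linear regression; the dose and concentration values here are illustrative, not patient data from the study.

```python
import numpy as np

# One-compartment model after an IV bolus: C(t) = (D / V) * exp(-k * t).
dose = 80.0                                    # mg, illustrative
times = np.array([0.5, 1.0, 2.0, 4.0, 6.0])    # h
conc = np.array([6.1, 5.3, 4.0, 2.3, 1.3])     # mg/L, illustrative samples

# Log-linear regression yields the elimination rate constant k and volume V.
slope, intercept = np.polyfit(times, np.log(conc), 1)
k = -slope
volume = dose / np.exp(intercept)
half_life = np.log(2.0) / k

print(f"k = {k:.3f} 1/h, V = {volume:.1f} L, t1/2 = {half_life:.1f} h")
```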
Abstract:
This paper presents a method for estimating the posterior probability density of the cointegrating rank of a multivariate error correction model. A second contribution is the careful elicitation of the prior for the cointegrating vectors, derived from a prior on the cointegrating space. This prior arises naturally from treating the cointegrating space as the parameter of interest in inference and overcomes problems previously encountered in Bayesian cointegration analysis. Using this new prior and a Laplace approximation, an estimator for the posterior probability of the rank is given. The approach performs well compared with information criteria in Monte Carlo experiments.
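For context, the generic Laplace approximation to a marginal likelihood, of the kind used to approximate the posterior probability of each rank r (the notation is generic, not the paper's):

```latex
p(y \mid r) = \int p(y \mid \theta, r)\, p(\theta \mid r)\, d\theta
\;\approx\; p(y \mid \hat\theta, r)\, p(\hat\theta \mid r)\,
(2\pi)^{d_r/2}\, \bigl|-H(\hat\theta)\bigr|^{-1/2},
```

where the hatted theta is the posterior mode under rank r, d_r the corresponding parameter dimension, and H the Hessian of the log posterior at the mode; the rank posterior then follows from p(r | y) proportional to p(y | r) p(r).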