883 results for spatiotemporal epidemic prediction model
Abstract:
The Kalpana Very High Resolution Radiometer (VHRR) water vapour (WV) channel is very similar to the WV channel of the Meteosat Visible and Infrared Radiation Imager (MVIRI) on Meteosat-7, and both satellites observe the Indian subcontinent. Thus it is possible to compare the performance of VHRR and MVIRI in numerical weather prediction (NWP) models. In order to do so, the impact of Kalpana- and Meteosat-7-measured WV radiances was evaluated using analyses and forecasts of moisture, temperature, geopotential and winds, using the European Centre for Medium-range Weather Forecasts (ECMWF) NWP model. Compared with experiments using Meteosat-7, the experiments using Kalpana WV radiances show a similar fit to all observations and produce very similar forecasts.
Abstract:
The mechanisms involved in Atlantic meridional overturning circulation (AMOC) decadal variability and predictability over the last 50 years are analysed in the IPSL–CM5A–LR model using historical and initialised simulations. The initialisation procedure only uses nudging towards sea surface temperature anomalies with a physically based restoring coefficient. When compared to two independent AMOC reconstructions, both the historical and nudged ensemble simulations exhibit skill at reproducing AMOC variations from 1977 onwards, and in particular two maxima occurring respectively around 1978 and 1997. We argue that one source of skill is related to the large Mount Agung volcanic eruption starting in 1963, which reset an internal 20-year variability cycle in the North Atlantic in the model. This cycle involves the East Greenland Current intensity, and advection of active tracers along the subpolar gyre, which leads to an AMOC maximum around 15 years after the Mount Agung eruption. The 1997 maximum occurs approximately 20 years after the former one. The nudged simulations better reproduce this second maximum than the historical simulations. This is due to the initialisation of a cooling of the convection sites in the 1980s under the effect of a persistent North Atlantic oscillation (NAO) positive phase, a feature not captured in the historical simulations. Hence we argue that the 20-year cycle excited by the 1963 Mount Agung eruption together with the NAO forcing both contributed to the 1990s AMOC maximum. These results support the existence of a 20-year cycle in the North Atlantic in the observations. Hindcasts following the CMIP5 protocol are launched from a nudged simulation every 5 years for the 1960–2005 period. They exhibit a significant correlation skill score, as compared to an independent reconstruction of the AMOC, from a 4-year lead-time average onwards.
This encouraging result is accompanied by increased correlation skill in reproducing the observed 2-m air temperature in the regions bordering the North Atlantic as compared to non-initialised simulations. To a lesser extent, predicted precipitation tends to correlate with the nudged simulation in the tropical Atlantic. We argue that this skill is due to the initialisation and predictability of the AMOC in the present prediction system. The mechanisms evidenced here support the idea of volcanic eruptions as a pacemaker for internal variability of the AMOC. Together with the existence of a 20-year cycle in the North Atlantic, these mechanisms offer a novel and complementary explanation for the AMOC variations over the last 50 years.
Abstract:
1. Nutrient concentrations (particularly N and P) determine the extent to which water bodies are or may become eutrophic. Direct determination of nutrient content on a wide scale is labour intensive, but the main sources of N and P are well known. This paper describes and tests an export coefficient model for prediction of total N and total P from: (i) land use, stock headage and human population; (ii) the export rates of N and P from these sources; and (iii) the river discharge. Such a model might be used to forecast the effects of changes in land use in the future and to hindcast past water quality to establish comparative or baseline states for the monitoring of change.
2. The model has been calibrated against observed data for 1988 and validated against sets of observed data for a sequence of earlier years in ten British catchments varying from uplands through rolling, fertile lowlands to the flat topography of East Anglia.
3. The model predicted total N and total P concentrations with high precision (95% of the variance in observed data explained). It has been used in two forms: the first on a specific catchment basis; the second for a larger natural region which contains the catchment, with the assumption that all catchments within that region will be similar. Both models gave similar results with little loss of precision in the latter case. This implies that it will be possible to describe the overall pattern of nutrient export in the UK with only a fraction of the effort needed to carry out the calculations for each individual water body.
4. Comparison between land use, stock headage, population numbers and nutrient export for the ten catchments in the pre-war year of 1931, and for 1970 and 1988, shows that there has been a substantial loss of rough grazing to fertilized temporary and permanent grasslands, an increase in the hectarage devoted to arable, consistent increases in the stocking of cattle and sheep and a marked movement of humans to these rural catchments.
5. All of these trends have increased the flows of nutrients, with more than a doubling of both total N and total P loads during the period. On average in these rural catchments, stock wastes have been the greatest contributors to both N and P exports, with cultivation the next most important source of N and people of P. Ratios of N to P were high in 1931 and remain little changed, so that, in these catchments, phosphorus continues to be the nutrient most likely to control algal crops in standing waters supplied by the rivers studied.
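The arithmetic behind an export coefficient model of this kind can be sketched in a few lines: an annual load is the sum over sources (land-use areas, stock headage, population) of extent times an export rate, and dividing by annual river discharge gives a flow-weighted mean concentration. The source names and coefficient values below are illustrative placeholders, not the calibrated values from the study.

```python
# Minimal sketch of an export coefficient model for total N (or total P).

def nutrient_load(sources, coefficients):
    """Annual load (kg/yr): sum of source extent x export coefficient."""
    return sum(extent * coefficients[name] for name, extent in sources.items())

def mean_concentration(load_kg_per_yr, discharge_m3_per_yr):
    """Flow-weighted mean concentration in mg/L (1 kg/m3 = 1000 mg/L)."""
    return 1000.0 * load_kg_per_yr / discharge_m3_per_yr

# Illustrative catchment: hectares, head of cattle and people, with
# made-up export coefficients in kg N per unit per year.
sources = {"arable_ha": 5000, "grass_ha": 8000, "cattle": 2000, "people": 10000}
coeffs = {"arable_ha": 20.0, "grass_ha": 10.0, "cattle": 50.0, "people": 4.0}
load = nutrient_load(sources, coeffs)   # 320000 kg/yr for these placeholders
conc = mean_concentration(load, 2.0e8)  # 1.6 mg/L at 2e8 m3/yr discharge
```

Hindcasting with such a model then amounts to re-running the same sums with historical land use, headage and population figures, exactly as done above for 1931, 1970 and 1988.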
Abstract:
We show that the four-dimensional variational data assimilation method (4DVar) can be interpreted as a form of Tikhonov regularization, a very familiar method for solving ill-posed inverse problems. It is known from image restoration problems that L1-norm penalty regularization recovers sharp edges in the image more accurately than Tikhonov, or L2-norm, penalty regularization. We apply this idea from stationary inverse problems to 4DVar, a dynamical inverse problem, and give examples for an L1-norm penalty approach and a mixed total variation (TV) L1–L2-norm penalty approach. For problems with model error where sharp fronts are present and the background and observation error covariances are known, the mixed TV L1–L2-norm penalty performs better than either the L1-norm method or the strong constraint 4DVar (L2-norm) method. A strength of the mixed TV L1–L2-norm regularization is that in the case where a simplified form of the background error covariance matrix is used it produces a much more accurate analysis than 4DVar. The method thus has the potential in numerical weather prediction to overcome operational problems with poorly tuned background error covariance matrices.
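The penalty forms being compared can be sketched in standard 4DVar notation (x_b the background state, B and R the background and observation error covariances, H_i the observation operator composed with the model M_i at time i; the weights λ and the discrete gradient D are generic placeholders, and the paper's exact formulation may differ):

```latex
% Strong-constraint 4DVar with the usual Tikhonov (L2) background penalty:
J_{\mathrm{L2}}(x_0) = \sum_{i} \left\| y_i - H_i\!\left(M_i(x_0)\right) \right\|_{R_i^{-1}}^{2}
  + \left\| x_0 - x_b \right\|_{B^{-1}}^{2}

% L1-norm penalty on the (preconditioned) background departure:
J_{\mathrm{L1}}(x_0) = \sum_{i} \left\| y_i - H_i\!\left(M_i(x_0)\right) \right\|_{R_i^{-1}}^{2}
  + \lambda \left\| B^{-1/2}\left( x_0 - x_b \right) \right\|_{1}

% Mixed TV L1--L2: keep the quadratic terms and add a total-variation
% (L1-of-gradient) term that preserves sharp fronts:
J_{\mathrm{TV}}(x_0) = J_{\mathrm{L2}}(x_0)
  + \lambda_{\mathrm{TV}} \left\| D \left( x_0 - x_b \right) \right\|_{1}
```

The first functional is the Tikhonov form referred to in the abstract; the L1 and TV terms trade some smoothness of the analysis for the ability to retain sharp fronts.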
Abstract:
Some of the techniques used to model nitrogen (N) and phosphorus (P) discharges from a terrestrial catchment to an estuary are discussed and applied to the River Tamar and Tamar Estuary system in Southwest England, U.K. Data are presented for dissolved inorganic nutrient concentrations in the Tamar Estuary and compared with those from the contrasting, low turbidity and rapidly flushed Tweed Estuary in Northeast England. In the Tamar catchment, simulations showed that effluent nitrate loads for typical freshwater flows contributed less than 1% of the total N load. The effect of effluent inputs on ammonium loads was more significant (∼10%). Cattle, sheep and permanent grassland dominated the N catchment export, with diffuse-source N export greatly dominating that due to point sources. Cattle, sheep, permanent grassland and cereal crops generated the greatest rates of diffuse-source P export. This reflected the higher rates of P fertiliser applications to arable land and the susceptibility of bare, arable land to P export in wetter winter months. N and P export to the Tamar Estuary from human sewage was insignificant. Non-conservative behaviour of phosphate was particularly marked in the Tamar Estuary. Silicate concentrations were slightly less than conservative levels, whereas nitrate was essentially conservative. The coastal sea acted as a sink for these terrestrially derived nutrients. A pronounced sag in dissolved oxygen that was associated with strong nitrite and ammonium peaks occurred in the turbidity maximum region of the Tamar Estuary. Nutrient behaviour within the Tweed was very different. The low turbidity and rapid flushing ensured that nutrients there were essentially conservative, so that flushing of nutrients to the coastal zone from the river occurred with little estuarine modification.
Abstract:
The impending threat of global climate change and its regional manifestations is among the most important and urgent problems facing humanity. Society needs accurate and reliable estimates of changes in the probability of regional weather variations to develop science-based adaptation and mitigation strategies. Recent advances in weather prediction and in our understanding and ability to model the climate system suggest that it is both necessary and possible to revolutionize climate prediction to meet these societal needs. However, the scientific workforce and the computational capability required to bring about such a revolution are not available in any single nation. Motivated by the success of internationally funded infrastructure in other areas of science, this paper argues that, because of the complexity of the climate system, and because the regional manifestations of climate change are mainly through changes in the statistics of regional weather variations, the scientific and computational requirements to predict its behavior reliably are so enormous that the nations of the world should create a small number of multinational high-performance computing facilities dedicated to the grand challenges of developing the capabilities to predict climate variability and change on both global and regional scales over the coming decades. Such facilities will play a key role in the development of next-generation climate models, build global capacity in climate research, nurture a highly trained workforce, and engage the global user community, policy-makers, and stakeholders. We recommend the creation of a small number of multinational facilities with computer capability at each facility of about 20 petaflops in the near term, about 200 petaflops within five years, and 1 exaflop by the end of the next decade. Each facility should have sufficient scientific workforce to develop and maintain the software and data analysis infrastructure.
Such facilities will make it possible to determine what resolution, both horizontal and vertical, in atmospheric and ocean models is necessary for more confident predictions at the regional and local level. Current limits on computing power have severely constrained such an investigation, which is now badly needed. These facilities will also provide the world's scientists with the computational laboratories for fundamental research on weather–climate interactions using 1-km resolution models and on atmospheric, terrestrial, cryospheric, and oceanic processes at even finer scales. Each facility should have enabling infrastructure including hardware, software, and data analysis support, and scientific capacity to interact with the national centers and other visitors. This will accelerate our understanding of how the climate system works and how to model it. It will ultimately enable the climate community to provide society with climate predictions that are based on our best knowledge of science and the most advanced technology.
Abstract:
We present the first climate prediction of the coming decade made with multiple models, initialized with prior observations. This prediction accrues from an international activity to exchange decadal predictions in near real-time, in order to assess differences and similarities, provide a consensus view to prevent over-confidence in forecasts from any single model, and establish current collective capability. We stress that the forecast is experimental, since the skill of the multi-model system is as yet unknown. Nevertheless, the forecast systems used here are based on models that have undergone rigorous evaluation and individually have been evaluated for forecast skill. Moreover, it is important to publish forecasts to enable open evaluation, and to provide a focus on climate change in the coming decade. Initialized forecasts of the year 2011 agree well with observations, with a pattern correlation of 0.62 compared to 0.31 for uninitialized projections. In particular, the forecast correctly predicted La Niña in the Pacific, and warm conditions in the north Atlantic and USA. A similar pattern is predicted for 2012 but with a weaker La Niña. Indices of Atlantic multi-decadal variability and Pacific decadal variability show no signal beyond climatology after 2015, while temperature in the Niño3 region is predicted to warm slightly by about 0.5 °C over the coming decade. However, uncertainties are large for individual years and initialization has little impact beyond the first 4 years in most regions. Relative to uninitialized forecasts, initialized forecasts are significantly warmer in the north Atlantic sub-polar gyre and cooler in the north Pacific throughout the decade. They are also significantly cooler in the global average and over most land and ocean regions out to several years ahead. 
However, in the absence of volcanic eruptions, global temperature is predicted to continue to rise, with each year from 2013 onwards having a 50 % chance of exceeding the current observed record. Verification of these forecasts will provide an important opportunity to test the performance of models and our understanding and knowledge of the drivers of climate change.
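The pattern correlations quoted above (0.62 initialized versus 0.31 uninitialized for 2011) are a standard verification measure: the centred correlation between forecast and observed anomaly maps, usually area-weighted. A generic sketch, not the paper's own verification code:

```python
import math

def pattern_correlation(forecast, observed, weights=None):
    """Weighted centred correlation between two flattened anomaly maps.

    weights, if given, are area weights (e.g. cos(latitude) per grid
    point); uniform weighting is used when they are omitted.
    """
    n = len(forecast)
    w = weights if weights is not None else [1.0] * n
    wsum = sum(w)
    fbar = sum(wi * f for wi, f in zip(w, forecast)) / wsum
    obar = sum(wi * o for wi, o in zip(w, observed)) / wsum
    cov = sum(wi * (f - fbar) * (o - obar)
              for wi, f, o in zip(w, forecast, observed))
    varf = sum(wi * (f - fbar) ** 2 for wi, f in zip(w, forecast))
    varo = sum(wi * (o - obar) ** 2 for wi, o in zip(w, observed))
    return cov / math.sqrt(varf * varo)
```

A perfectly matching anomaly pattern scores 1, a reversed one scores -1, so values like 0.62 indicate substantial but imperfect spatial agreement.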
Abstract:
The wood mouse is a common and abundant species in agricultural landscapes and is a focal species in pesticide risk assessment. Empirical studies on the ecology of the wood mouse have provided sufficient information for the species to be modelled mechanistically. An individual-based model was constructed to explicitly represent the locations and movement patterns of individual mice. This, together with the schedule of pesticide application, allows prediction of the risk to the population from pesticide exposure. The model included life-history traits of wood mice as well as typical landscape dynamics in agricultural farmland in the UK. The model obtains a good fit to the available population data and is fit for risk assessment purposes. It can help identify spatio-temporal situations with the largest potential risk of exposure and enables extrapolation from individual-level endpoints to population-level effects. The largest risk of exposure to pesticides was found when good crop growth in the “sink” fields coincided with high “source” population densities in the hedgerows.
Keywords: Population dynamics, Pesticides, Ecological risk assessment, Habitat choice, Agent-based model, NetLogo
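The source-sink exposure mechanism described above can be caricatured in a few lines: mice resident in a hedgerow ("source") use an adjacent crop field ("sink") with a probability that rises with crop cover, and any mouse in the field at spray time counts as exposed. This is a toy illustration of the idea only; the function, probabilities and parameters are invented placeholders, not values or structure from the published NetLogo model.

```python
import random

def exposed_fraction(n_mice, crop_cover, rng):
    """Fraction of the hedgerow population in the field at spray time.

    crop_cover in [0, 1]; better-grown crop attracts more field use,
    so spraying when cover is high exposes more of the population.
    """
    move_prob = 0.1 + 0.6 * crop_cover  # illustrative habitat-choice rule
    in_field = sum(1 for _ in range(n_mice) if rng.random() < move_prob)
    return in_field / n_mice

rng = random.Random(42)                  # fixed seed for reproducibility
low = exposed_fraction(1000, 0.1, rng)   # sparse crop cover
high = exposed_fraction(1000, 0.9, rng)  # well-grown crop
```

Even this caricature reproduces the qualitative result: exposure peaks when spraying coincides with good crop growth next to a well-populated hedgerow.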
Abstract:
In order to assist in comparing the computational techniques used in different models, the authors propose a standardized set of one-dimensional numerical experiments that could be completed for each model. The results of these experiments, with a simplified form of the computational representation for advection, diffusion, pressure gradient term, Coriolis term, and filter used in the models, should be reported in the peer-reviewed literature. Specific recommendations are described in this paper.
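One of the simplest one-dimensional experiments of the kind proposed isolates the advection term alone. A sketch, using a first-order upwind scheme on a periodic grid (the scheme choice and interface here are illustrative, not the paper's specification):

```python
def upwind_advect(u, c, nsteps):
    """Advect field u on a periodic 1-D grid with first-order upwinding.

    c is the Courant number (wind_speed * dt / dx); the scheme is stable
    for 0 < c <= 1 and reduces to an exact one-cell shift at c = 1.
    """
    n = len(u)
    for _ in range(nsteps):
        u = [u[i] - c * (u[i] - u[(i - 1) % n]) for i in range(n)]
    return u
```

Running such a kernel with c = 1 advects a feature exactly, while c < 1 smears it through numerical diffusion; comparing how different models' advection operators degrade the same initial profile is precisely what a standardized one-dimensional test suite would report.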
Abstract:
It is becoming increasingly important to be able to verify the spatial accuracy of precipitation forecasts, especially with the advent of high-resolution numerical weather prediction (NWP) models. In this article, the fractions skill score (FSS) approach has been used to perform a scale-selective evaluation of precipitation forecasts during 2003 from the Met Office mesoscale model (12 km grid length). The investigation shows how skill varies with spatial scale, the scales over which the data assimilation (DA) adds most skill, and how the loss of that skill is dependent on both the spatial scale and the rainfall coverage being examined. Although these results come from a specific model, they demonstrate how this verification approach can provide a quantitative assessment of the spatial behaviour of new finer-resolution models and DA techniques.
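The FSS itself is compact enough to sketch: threshold both fields, convert each to the fraction of "rain" pixels in a neighbourhood around every grid point, and score one minus the mean squared difference of the fraction fields over its largest possible value. This is a generic implementation of that recipe; edge handling and the exact formulation used in the article may differ.

```python
def neighbourhood_fractions(binary, n):
    """Fraction of rain pixels in an n x n window (n odd) at each point;
    windows are truncated at the domain edges."""
    ny, nx = len(binary), len(binary[0])
    h = n // 2
    out = []
    for j in range(ny):
        row = []
        for i in range(nx):
            cells = [binary[jj][ii]
                     for jj in range(max(0, j - h), min(ny, j + h + 1))
                     for ii in range(max(0, i - h), min(nx, i + h + 1))]
            row.append(sum(cells) / len(cells))
        out.append(row)
    return out

def fss(fcst, obs, threshold, n):
    """Fractions skill score at neighbourhood size n: 1 at perfect
    agreement of the fraction fields, 0 at no skill."""
    bf = [[1 if v >= threshold else 0 for v in row] for row in fcst]
    bo = [[1 if v >= threshold else 0 for v in row] for row in obs]
    pf = neighbourhood_fractions(bf, n)
    po = neighbourhood_fractions(bo, n)
    num = sum((f - o) ** 2 for rf, ro in zip(pf, po) for f, o in zip(rf, ro))
    den = (sum(f * f for row in pf for f in row)
           + sum(o * o for row in po for o in row))
    return 1.0 - num / den if den > 0 else 1.0
```

A spatially displaced but otherwise correct forecast scores poorly at grid scale and increasingly well as n grows, which is exactly the scale-selective behaviour the article exploits.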
Abstract:
As a major mode of intraseasonal variability, which interacts with weather and climate systems on a near-global scale, the Madden–Julian Oscillation (MJO) is a crucial source of predictability for numerical weather prediction (NWP) models. Despite its global significance and comprehensive investigation, improvements in the representation of the MJO in an NWP context remain elusive. However, recent modifications to the model physics in the ECMWF model led to advances in the representation of atmospheric variability and the unprecedented propagation of the MJO signal through the entire integration period. In light of these recent advances, a set of hindcast experiments have been designed to assess the sensitivity of MJO simulation to the formulation of convection. Through the application of established MJO diagnostics, it is shown that the improvements in the representation of the MJO can be directly attributed to the modified convective parametrization. Furthermore, the improvements are attributed to the move from a moisture-convergent- to a relative-humidity-dependent formulation for organized deep entrainment. It is concluded that, in order to understand the physical mechanisms through which a relative-humidity-dependent formulation for entrainment led to an improved simulation of the MJO, a more process-based approach should be taken. The application of process-based diagnostics to the hindcast experiments presented here will be the focus of Part II of this study.
Abstract:
Mesospheric temperature inversions are well established observed phenomena, yet their properties remain the subject of ongoing research. Comparisons between Rayleigh-scatter lidar temperature measurements obtained by the University of Western Ontario's Purple Crow Lidar (42.9°N, 81.4°W) and the Canadian Middle Atmosphere Model are used to quantify the statistics of inversions. In both model and measurements, inversions occur most frequently in the winter and exhibit an average amplitude of ∼10 K. The model exhibits virtually no inversions in the summer, while the measurements show a strongly reduced frequency of occurrence with an amplitude about half that in the winter. A simple theory of mesospheric inversions based on wave saturation is developed, with no adjustable parameters. It predicts that the environmental lapse rate must be less than half the adiabatic lapse rate for an inversion to form, and it predicts the ratio of the inversion amplitude and thickness as a function of environmental lapse rate. Comparison of this prediction to the actual amplitude/thickness ratio using the lidar measurements shows good agreement between theory and measurements.
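The stated half-adiabatic threshold follows from a short saturation argument, sketched here in generic notation (Γ = -dT/dz throughout; the paper's own derivation may be formulated differently):

```latex
% \Gamma_e : environmental lapse rate, \Gamma_a : adiabatic lapse rate,
% \Gamma' : wave-induced lapse-rate perturbation.
% Saturation caps the wave perturbation at the convective-stability limit:
\Gamma' \le \Gamma_a - \Gamma_e
% An inversion layer (dT/dz > 0) requires the perturbation to overturn
% the background temperature gradient:
\Gamma' > \Gamma_e
% Both conditions can hold simultaneously only if
\Gamma_a - \Gamma_e > \Gamma_e
\quad\Longleftrightarrow\quad
\Gamma_e < \tfrac{1}{2}\,\Gamma_a ,
```

i.e. an inversion can form only where the environmental lapse rate is less than half the adiabatic value, consistent with the prediction tested against the lidar measurements above.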