952 results for Numerical example
Abstract:
The high computational cost of calculating the radiative heating rates in numerical weather prediction (NWP) and climate models requires that calculations are made infrequently, leading to poor sampling of the fast-changing cloud field and a poor representation of the feedback that would occur. This paper presents two related schemes for improving the temporal sampling of the cloud field. Firstly, the ‘split time-stepping’ scheme takes advantage of the independent nature of the monochromatic calculations of the ‘correlated-k’ method to split the calculation into gaseous absorption terms that are highly dependent on changes in cloud (the optically thin terms) and those that are not (optically thick). The small number of optically thin terms can then be calculated more often to capture changes in the grey absorption and scattering associated with cloud droplets and ice crystals. Secondly, the ‘incremental time-stepping’ scheme uses a simple radiative transfer calculation using only one or two monochromatic calculations representing the optically thin part of the atmospheric spectrum. These are found to be sufficient to represent the heating rate increments caused by changes in the cloud field, which can then be added to the last full calculation of the radiation code. We test these schemes in an operational forecast model configuration and find a significant improvement is achieved, for a small computational cost, over the current scheme employed at the Met Office. The ‘incremental time-stepping’ scheme is recommended for operational use, along with a new scheme to correct the surface fluxes for the change in solar zenith angle between radiation calculations.
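The core of the 'incremental time-stepping' idea can be sketched in a few lines. The Python fragment below is only an illustration of the update described above (the function and variable names are invented here, and this is not the Met Office implementation): a cheap optically-thin calculation is repeated every time step, and its change since the last full radiation call is added to the stored full heating rate.

```python
import numpy as np

# Illustrative sketch of an incremental time-stepping update (invented names;
# not the operational code described in the paper).
def incremental_heating_rate(full_heating_last, thin_heating_last, thin_heating_now):
    """Approximate the current heating-rate profile by adding the change in a
    cheap optically-thin calculation to the last full radiation call."""
    increment = thin_heating_now - thin_heating_last   # response to cloud changes
    return full_heating_last + increment

# Hypothetical profiles on a three-level column (K/day):
full_last = np.array([-1.2, -0.8, -0.5])   # last full radiation call
thin_last = np.array([-0.4, -0.3, -0.2])   # thin terms at that time
thin_now  = np.array([-0.6, -0.3, -0.2])   # thin terms after the cloud has changed
print(incremental_heating_rate(full_last, thin_last, thin_now))
```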
Abstract:
There has been a significant increase in the skill and resolution of numerical weather prediction models (NWPs) in recent decades, extending the time scales of useful weather predictions. The land-surface models (LSMs) of NWPs are often employed in hydrological applications, which raises the question of how hydrologically representative LSMs really are. In this paper, precipitation (P), evaporation (E) and runoff (R) from the European Centre for Medium-Range Weather Forecasts (ECMWF) global models were evaluated against observational products. The forecasts differ substantially from observed data for key hydrological variables. In addition, imbalanced surface water budgets, mostly caused by data assimilation, were found on both global (P-E) and basin scales (P-E-R), with the latter being more important. Modeled surface fluxes should be used with care in hydrological applications and further improvement in LSMs in terms of process descriptions, resolution and estimation of uncertainties is needed to accurately describe the land-surface water budgets.
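The basin-scale budget check referred to above (P-E-R, optionally with a storage term) amounts to a simple residual calculation. A minimal sketch, with illustrative numbers rather than values from the evaluation:

```python
# Minimal sketch of the surface water-budget residual discussed above.
# The numbers below are illustrative, not results from the paper.
def budget_residual(precip, evap, runoff, storage_change=0.0):
    """Return P - E - R - dS (e.g. in mm/yr); near zero for a closed budget."""
    return precip - evap - runoff - storage_change

# A hypothetical basin with P = 800, E = 500 and R = 250 mm/yr leaves a
# 50 mm/yr imbalance, of the kind attributed here largely to data assimilation.
print(budget_residual(800.0, 500.0, 250.0))
```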
Abstract:
Nutrient enrichment and drought conditions are major threats to lowland rivers, causing ecosystem degradation and compositional changes in plant communities. The controls on primary producer composition in chalk rivers are investigated using a new model and existing data from the River Frome (UK) to explore abiotic and biotic interactions. The growth and interaction of four primary producer functional groups (suspended algae, macrophytes, epiphytes, sediment biofilm) were successfully linked with flow, nutrients (N, P), light and water temperature such that the modelled biomass dynamics of the four groups matched those observed. Simulated growth of suspended algae was limited mainly by the residence time of the river rather than by in-stream phosphorus concentrations. The simulated growth of the fixed vegetation (macrophytes, epiphytes, sediment biofilm) was overwhelmingly controlled by incoming solar radiation and light attenuation in the water column. Nutrients and grazing exerted little control in the simulations when compared with these physical factors. A number of environmental threshold values were identified in the model simulations for the different producer types. The simulation results highlighted the importance of the pelagic–benthic interactions within the River Frome and indicated that the behaviour of the primary producers was defined by interacting processes rather than by a single, dominant driver. The model simulations pose interesting questions to be considered in the next iteration of field- and laboratory-based studies.
Abstract:
The Jülich Observatory for Cloud Evolution (JOYCE), located at Forschungszentrum Jülich in the westernmost part of Germany, is a recently established platform for cloud research. The main objective of JOYCE is to provide observations that improve our understanding of the cloudy boundary layer in a midlatitude environment. Continuous and temporally highly resolved measurements, specifically suited to characterizing the diurnal cycle of water vapor, stability, and turbulence in the lower troposphere, are performed with a special focus on atmosphere–surface interaction. In addition, instruments are set up to measure in detail the micro- and macrophysical properties of clouds and how they interact with different boundary layer processes and the large-scale synoptic situation. For this, JOYCE is equipped with an array of state-of-the-art active and passive remote sensing and in situ instruments, which are briefly described in this scientific overview. As an example, a 24-h time series of the evolution of a typical cumulus cloud-topped boundary layer is analyzed with respect to stability, turbulence, and cloud properties. Additionally, we present longer-term statistics, which can be used to elucidate the diurnal cycle of water vapor, drizzle formation through autoconversion, and warm versus cold rain precipitation formation. Both case studies and long-term observations are important for improving the representation of clouds in climate and numerical weather prediction models.
Abstract:
4-Dimensional Variational Data Assimilation (4DVAR) assimilates observations through the minimisation of a least-squares objective function, which is constrained by the model flow. We refer to 4DVAR as strong-constraint 4DVAR (sc4DVAR) in this thesis as it assumes the model is perfect. Relaxing this assumption gives rise to weak-constraint 4DVAR (wc4DVAR), leading to a different minimisation problem with more degrees of freedom. We consider two wc4DVAR formulations in this thesis, the model error formulation and the state estimation formulation. The 4DVAR objective function is traditionally minimised using gradient-based iterative methods. The principal method used in Numerical Weather Prediction today is the Gauss-Newton approach. This method introduces a linearised `inner-loop' objective function which, upon convergence, updates the solution of the non-linear `outer-loop' objective function. This requires many evaluations of the objective function and its gradient, which emphasises the importance of the Hessian. The eigenvalues and eigenvectors of the Hessian provide insight into the degree of convexity of the objective function, while also indicating the difficulty one may encounter while iteratively solving 4DVAR. The condition number of the Hessian is an appropriate measure of the sensitivity of the problem to input data. The condition number can also indicate the rate of convergence and solution accuracy of the minimisation algorithm. This thesis investigates the sensitivity of the solution process minimising both wc4DVAR objective functions to the internal assimilation parameters composing the problem. We gain insight into these sensitivities by bounding the condition number of the Hessians of both objective functions. We also precondition the model error objective function and show improved convergence. Using these bounds, we show that the sensitivities of both formulations are related to the error variance balance, the assimilation window length and the correlation length-scales. We further demonstrate this through numerical experiments on the condition number and data assimilation experiments using linear and non-linear chaotic toy models.
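For reference, the weak-constraint (state estimation) objective function referred to above has the following standard textbook form, written here in generic notation that is not necessarily that of the thesis; the strong-constraint case is recovered by enforcing the model equations exactly instead of penalising the mismatch:

\[
J(x_0,\dots,x_N) \;=\; \tfrac{1}{2}\,(x_0 - x_b)^{\mathrm T} B^{-1} (x_0 - x_b)
\;+\; \tfrac{1}{2}\sum_{i=0}^{N} \big(y_i - \mathcal{H}_i(x_i)\big)^{\mathrm T} R_i^{-1} \big(y_i - \mathcal{H}_i(x_i)\big)
\;+\; \tfrac{1}{2}\sum_{i=1}^{N} \big(x_i - \mathcal{M}_i(x_{i-1})\big)^{\mathrm T} Q_i^{-1} \big(x_i - \mathcal{M}_i(x_{i-1})\big),
\]

where \(B\), \(R_i\) and \(Q_i\) are the background-, observation- and model-error covariance matrices and \(\mathcal{H}_i\), \(\mathcal{M}_i\) are the observation operator and the model. The sensitivity measure discussed above is the condition number of the Hessian \(S\) of such a function, \(\kappa(S) = \lambda_{\max}(S)/\lambda_{\min}(S)\).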
Abstract:
There is an increasing evidence base for the effectiveness of Behavioural Activation in treating adult depression; however, there has been little investigation of using this approach with adolescents. This article reports on the adaptation of brief Behavioural Activation for Depression (BATD) for adolescents (BATD-A). A case study is reported to illustrate the brief structured approach, treatment response as indicated by routine outcome measures, and the family’s view of the intervention. The adaptations made to the adult BATD manual are discussed including parental input, adapted values and activities, and engagement issues. It is hoped that following further evaluation, BATD-A could be successfully delivered as a low-intensity intervention for depression.
Abstract:
Estimating trajectories and parameters of dynamical systems from observations is a problem frequently encountered in various branches of science; geophysicists, for example, refer to this problem as data assimilation. Unlike in estimation problems with exchangeable observations, in data assimilation the observations cannot easily be divided into separate sets for estimation and validation; this creates serious problems, since simply using the same observations for estimation and validation might result in overly optimistic performance assessments. To circumvent this problem, a result is presented which allows us to estimate this optimism, thus allowing for a more realistic performance assessment in data assimilation. The presented approach becomes particularly simple for data assimilation methods employing a linear error feedback (such as synchronization schemes, nudging, incremental 3DVar and 4DVar, and various Kalman filter approaches). Numerical examples considering a high-gain observer confirm the theory.
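A linear error feedback of the kind listed above (synchronization or nudging) can be written as x_{k+1} = M(x_k) + K(y_k - H x_k). The short Python sketch below only illustrates that recursion; the model, the observation operator H and the gain K are placeholders, not examples from the paper.

```python
import numpy as np

# Illustrative nudging-style assimilation with a fixed linear error feedback:
#     x_{k+1} = model_step(x_k) + K (y_k - H x_k)
# model_step, H and K are placeholders chosen for illustration only.
def assimilate(model_step, H, K, x0, observations):
    x = np.asarray(x0, dtype=float)
    trajectory = [x.copy()]
    for y in observations:
        innovation = np.asarray(y, dtype=float) - H @ x   # observation minus model
        x = model_step(x) + K @ innovation                # step forward and correct
        trajectory.append(x.copy())
    return np.array(trajectory)
```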
Abstract:
An equation of Monge-Ampère type has, for the first time, been solved numerically on the surface of the sphere in order to generate optimally transported (OT) meshes, equidistributed with respect to a monitor function. Optimal transport generates meshes that keep the same connectivity as the original mesh, making them suitable for r-adaptive simulations, in which the equations of motion can be solved in a moving frame of reference in order to avoid mapping the solution between old and new meshes and to avoid load-balancing problems on parallel computers. The semi-implicit solution of the Monge-Ampère type equation involves a new linearisation of the Hessian term, and exponential maps are used to map from old to new meshes on the sphere. The determinant of the Hessian is evaluated as the change in volume between old and new mesh cells, rather than using numerical approximations to the gradients. OT meshes are generated to compare with centroidal Voronoi tessellations on the sphere and are found to have advantages and disadvantages; OT equidistribution is more accurate, the number of iterations to convergence is independent of the mesh size, face skewness is reduced and the connectivity does not change. However, anisotropy is higher and the OT meshes are non-orthogonal. It is shown that optimal transport on the sphere leads to meshes that do not tangle. However, tangling can be introduced by numerical errors in calculating the gradient of the mesh potential. Methods for alleviating this problem are explored. Finally, OT meshes are generated using observed precipitation as a monitor function, in order to demonstrate the potential power of the technique.
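In flat space, an optimally transported mesh equidistributed with respect to a monitor function is commonly obtained from a mesh potential satisfying a Monge-Ampère equation of the following form (a generic statement of the technique, not the paper's spherical formulation, in which the displacement is applied via exponential maps on the sphere):

\[
m\big(\boldsymbol{\xi} + \nabla\phi(\boldsymbol{\xi})\big)\,
\det\!\big(I + \nabla\nabla\phi(\boldsymbol{\xi})\big) \;=\; \theta,
\qquad
\mathbf{x}(\boldsymbol{\xi}) \;=\; \boldsymbol{\xi} + \nabla\phi(\boldsymbol{\xi}),
\]

where \(m\) is the monitor function, \(\theta\) a normalisation constant and \(\mathbf{x}(\boldsymbol{\xi})\) the new position of the old mesh point \(\boldsymbol{\xi}\); the determinant factor is the cell-volume change that the paper evaluates geometrically rather than from numerical approximations to the gradients.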
Abstract:
Adolescence is a unique period in human development encompassing sexual maturation (puberty) and the physical and psychological transition into adulthood. It is a crucial time for healthy development, and any adverse environmental conditions, poor nutrition, or chronic infection can alter the timing of these physical changes, delaying menarche in girls or the age of peak height velocity in boys. This study explores the impact of chronic illness on the tempo of puberty in 607 adolescent skeletons from medieval England (AD 900-1550). A total of 135 (22.2%) adolescents showed some delay in their pubertal development, and this lag increased with age. Of those with a chronic condition, 40.0% (n=24/60) showed delay, compared to only 20.3% (n=111/547) of the non-pathology group. This difference was statistically significant. A binary logistic regression model demonstrated a significant association between increasing delay in pubertal stage attainment and age in the pathology group. This is the first time that chronic conditions have been directly associated with a delay in maturation in the osteological record, using a new method to assess stages of puberty in skeletal remains.
Abstract:
In this work, we prove a weak Noether-type theorem for a class of variational problems that admit broken extremals. We use this result to prove discrete Noether-type conservation laws for a conforming finite element discretisation of a model elliptic problem. In addition, we study how well the finite element scheme satisfies the continuous conservation laws arising from the application of Noether’s first theorem (1918). We summarise extensive numerical tests, illustrating the conservation of the discrete Noether law using the p-Laplacian as an example, and derive a geometry-based adaptive algorithm in which an appropriate Noether quantity is the goal functional.
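As context for the example mentioned above, the p-Laplacian is the Euler-Lagrange equation of the energy functional below (standard form only; the specific Noether quantities and goal functional used in the paper are not reproduced here):

\[
J(u) \;=\; \int_\Omega \frac{1}{p}\,\lvert \nabla u \rvert^{p} \,-\, f\,u \;\mathrm{d}x,
\qquad
-\nabla\cdot\big(\lvert \nabla u \rvert^{p-2}\,\nabla u\big) \;=\; f \quad \text{in } \Omega .
\]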
Abstract:
Bloom filters are a data structure for storing data in a compressed form. They offer excellent space and time efficiency at the cost of some loss of accuracy (so-called lossy compression). This work presents the yes-no Bloom filter, a data structure consisting of two parts: the yes-filter, which is a standard Bloom filter, and the no-filter, which is another Bloom filter whose purpose is to represent those objects that were recognised incorrectly by the yes-filter (that is, to recognise the false positives of the yes-filter). By querying the no-filter after an object has been recognised by the yes-filter, we get a chance of rejecting it, which improves the accuracy of data recognition in comparison with a standard Bloom filter of the same total length. A further increase in accuracy is possible if one chooses the objects to include in the no-filter so that it recognises as many false positives as possible but no true positives, thus producing the most accurate yes-no Bloom filter among all yes-no Bloom filters. This paper studies how optimization techniques can be used to maximize the number of false positives recognised by the no-filter, under the constraint that it recognise no true positives. To achieve this aim, an Integer Linear Program (ILP) is proposed for the optimal selection of false positives. In practice the problem size is normally large, making the optimal solution intractable. Given the similarity of the ILP to the Multidimensional Knapsack Problem, an Approximate Dynamic Programming (ADP) model is developed, making use of a reduced ILP for the value function approximation. Numerical results show that the ADP model performs best in comparison with a number of heuristics as well as the CPLEX built-in solver (B&B), and it is therefore recommended for use in yes-no Bloom filters. In the wider context of the study of lossy compression algorithms, our research is an example of how the arsenal of optimization methods can be applied to improving the accuracy of compressed data.
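The query logic of a yes-no Bloom filter is simple to state in code. The Python sketch below is only an illustration under obvious assumptions (the hashing scheme and filter sizes are invented here); choosing which known false positives to insert into the no-filter is precisely the ILP/ADP selection problem studied in the paper.

```python
import hashlib

# Minimal illustrative Bloom filter (hashing scheme and sizes are assumptions).
class BloomFilter:
    def __init__(self, num_bits, num_hashes):
        self.bits = [False] * num_bits
        self.num_hashes = num_hashes

    def _positions(self, item):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % len(self.bits)

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def contains(self, item):
        return all(self.bits[pos] for pos in self._positions(item))


class YesNoBloomFilter:
    """Accept an item only if the yes-filter matches and the no-filter
    (populated with selected false positives of the yes-filter) does not."""
    def __init__(self, yes_filter, no_filter):
        self.yes = yes_filter
        self.no = no_filter

    def contains(self, item):
        return self.yes.contains(item) and not self.no.contains(item)
```

A query is thus rejected either because the yes-filter does not match or because the item resembles one of the stored false positives; since the no-filter can itself produce false positives, the selection must ensure it recognises no true members of the stored set.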
Abstract:
Floods are the most frequent of natural disasters, affecting millions of people across the globe every year. The anticipation and forecasting of floods at the global scale is crucial to preparing for severe events and providing early awareness where local flood models and warning services may not exist. As numerical weather prediction models continue to improve, operational centres are increasingly using the meteorological output from these to drive hydrological models, creating hydrometeorological systems capable of forecasting river flow and flood events at much longer lead times than has previously been possible. Furthermore, developments in, for example, modelling capabilities, data and resources in recent years have made it possible to produce global scale flood forecasting systems. In this paper, the current state of operational large scale flood forecasting is discussed, including probabilistic forecasting of floods using ensemble prediction systems. Six state-of-the-art operational large scale flood forecasting systems are reviewed, describing similarities and differences in their approaches to forecasting floods at the global and continental scale. Currently, operational systems have the capability to produce coarse-scale discharge forecasts in the medium-range and disseminate forecasts and, in some cases, early warning products, in real time across the globe, in support of national forecasting capabilities. With improvements in seasonal weather forecasting, future advances may include more seamless hydrological forecasting at the global scale, alongside a move towards multi-model forecasts and grand ensemble techniques, responding to the requirement of developing multi-hazard early warning systems for disaster risk reduction.
Abstract:
Previous versions of the Consortium for Small-scale Modelling (COSMO) numerical weather prediction model have used a constant sea-ice surface temperature, but observations show a high degree of variability on sub-daily timescales. To account for this, we have implemented a thermodynamic sea-ice module in COSMO and performed simulations at resolutions of 15 km and 5 km for the Laptev Sea area in April 2008. Temporal and spatial variability of surface and 2-m air temperature are verified against four automatic weather stations deployed along the edge of the western New Siberian polynya during the Transdrift XIII-2 expedition and against surface temperature charts derived from Moderate Resolution Imaging Spectroradiometer (MODIS) satellite data. A remarkable agreement between the new model results and these observations demonstrates that the implemented sea-ice module can be applied to short-range simulations. Prescribing the polynya areas daily, our COSMO simulations provide a high-resolution, high-quality atmospheric data set for the Laptev Sea for the period 14-30 April 2008. Based on this data set, we derive a mean total sea-ice production rate of 0.53 km3/day for all Laptev Sea polynyas under the assumption that the polynyas are ice-free, and a rate of 0.30 km3/day if a 10 cm thin-ice layer is assumed. Our results indicate that ice production in Laptev Sea polynyas has been overestimated in previous studies.
Abstract:
The sea ice export from the Arctic is of global importance because its fresh water influences the oceanic stratification and, thus, the global thermohaline circulation. This study examines the effect of cyclones on sea ice, and on sea ice transport in particular, on the basis of observations from the two field experiments FRAMZY 1999 and FRAMZY 2002, conducted in April 1999 and March 2002, as well as simulations with a numerical sea ice model. The simulations, performed with a dynamic-thermodynamic sea ice model, are forced with 6-hourly atmospheric ECMWF analyses (European Centre for Medium-Range Weather Forecasts) and 6-hourly oceanic data from an MPI-OM simulation (Max-Planck-Institute Ocean Model). A comparison of the observed and simulated variability of the sea ice drift and of the position of the ice edge shows that the chosen model configuration is appropriate for the studies performed. The seven observed cyclones change the position of the ice edge by up to 100 km and cause an extensive decrease in sea ice coverage of between 2 % and more than 10 %. This decrease is only simulated by the model if the ocean current is strongly divergent in the centre of the cyclone. The impact of the ocean current on the divergence and shear deformation of the ice drift is remarkable. Sensitivity studies show that the ocean current at a depth of 6 m, with which the sea ice model is forced, is mainly responsible for the identified differences between simulation and observation. The simulated sea ice transport shows a strong variability on time scales from hours to days. Local minima occur in the time series of the ice transport during periods with Fram Strait cyclones. These minima are not caused by the local effect of the cyclone’s wind field, but mainly by the large-scale pattern of surface pressure. A displacement of the areas of strongest cyclone activity in the Nordic Seas would considerably influence the ice transport.
Abstract:
In the 1980s, in the midst of the AIDS epidemic, many countries introduced lifetime bans on blood donations by men who had sexual relations with men (MSM). These blanket bans have recently begun to be challenged and, as a result, many countries have either relaxed them or completely abolished them. The case under examination (Léger) is another instance of questioning the legality of such a ban. In particular, in this case, the European Court of Justice was called on to rule on whether a measure such as the French lifetime exclusion from blood donation of the MSM population that was at issue before the referring court is contrary to EU law. The Court ruled that although discriminatory on the ground of sexual orientation, such a ban may be justified in certain circumstances, and left it to the national court to make the final decision. This article seeks to analyse the case and to explain why, in the author’s view, the Court can be accused of—once more—not going far enough in the protection of lesbian, gay and bisexual (LGB) rights.