61 results for Space and time


Abstract:

In situ precipitation measurements can differ strongly in space and time. Taking into account the limited spatial–temporal representativity and the uncertainty of a single station is important for validating mesoscale numerical model results as well as for interpreting remote sensing data. In situ precipitation data from a high-resolution network in North-Eastern Germany are analysed to determine their temporal and spatial representativity. For the dry year 2003, precipitation amounts were available at 10 min resolution from 14 rain gauges distributed over a 25 km × 25 km area around the Meteorological Observatory Lindenberg (Richard-Aßmann Observatory). Our analysis reveals that short-term (up to 6 h) precipitation events dominate (94% of all events) and that the distribution is skewed, with a high frequency of very low precipitation amounts. Long-lasting precipitation events are rare (6% of all precipitation events) but account for nearly 50% of the annual precipitation. The spatial representativity of a single-site measurement increases slightly for longer measurement intervals, and the variability decreases. Hourly precipitation amounts are representative for an area of 11 km × 11 km. Daily precipitation amounts appear to be reliable, with an uncertainty factor of 3.3, for an area of 25 km × 25 km, and weekly and monthly precipitation amounts have uncertainty factors of 2 and 1.4, respectively, when compared with the 25 km × 25 km mean values.
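The uncertainty factors quoted above can be illustrated with a short calculation. The sketch below is not the authors' code: it assumes a hypothetical pandas DataFrame of 10-min gauge totals (filled here with random numbers standing in for the 14-gauge network) and estimates how far a single gauge can deviate from the network mean at different aggregation intervals.

```python
# Minimal sketch: single-gauge vs. areal-mean precipitation at several
# aggregation intervals.  The gauge data are synthetic stand-ins.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
times = pd.date_range("2003-01-01", "2003-12-31 23:50", freq="10min")
gauges = pd.DataFrame(rng.gamma(0.05, 1.0, size=(len(times), 14)),
                      index=times, columns=[f"g{i}" for i in range(14)])

def uncertainty_factor(gauges, interval):
    """Multiplicative spread of a single gauge around the network (areal) mean."""
    agg = gauges.resample(interval).sum()        # accumulate to the interval
    areal = agg.mean(axis=1)                     # network mean per interval
    wet = areal > 0.1                            # ignore near-dry intervals
    ratio = agg[wet].div(areal[wet], axis=0)     # gauge / areal mean
    return float(np.exp(np.log(ratio.clip(lower=1e-3)).stack().std()))

for interval in ["1h", "1D", "7D"]:              # hourly, daily, weekly
    print(interval, round(uncertainty_factor(gauges, interval), 2))
```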

Abstract:

Using the record of 30 flank eruptions over the last 110 years at Nyamuragira, we have tested the relationship between the eruption dynamics and the local stress field. There are two groups of eruptions based on their duration (shorter or longer than 80 days) that are also clustered in space and time. We find that the eruptions fed by dykes parallel to the East African Rift Valley have longer durations (and larger volumes) than those eruptions fed by dykes with other orientations. This is compatible with a model for compressible magma transported through an elastic-walled dyke in a differential stress field from an over-pressured reservoir (Woods et al., 2006). The observed pattern of eruptive fissures is consistent with a local stress field modified by a northwest-trending, right-lateral slip fault that is part of the northern transfer zone of the Kivu Basin rift segment. We have also re-tested the stochastic eruption models for Nyamuragira of Burt et al. (1994) with new data. The time-predictable, pressure-threshold model remains the best fit and is consistent with the typically observed decline in the rate of sulphur dioxide emission during the first few days of an eruption, as expected for lava emission from a depressurising, closed crustal reservoir. The 2.4-fold increase in long-term eruption rate that occurred after 1977 is confirmed in the new analysis. Since that change, the record has been dominated by short-duration eruptions fed by dykes perpendicular to the Rift. We suggest that the intrusion of a major dyke during the 1977 volcano-tectonic event at neighbouring Nyiragongo volcano inhibited subsequent dyke formation on the southern flanks of Nyamuragira, and this may also have resulted in more dykes reaching the surface elsewhere. Thus, that sudden change in output was a result of a changed stress field that forced more of the deep magma supply to the surface. Another volcano-tectonic event in 2002 may also have changed the magma output rate at Nyamuragira.
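As a rough illustration of the time-predictable, pressure-threshold idea, the sketch below fits a constant magma supply rate to eruption volumes and repose intervals; in such a model the repose after an eruption scales with the volume that eruption withdrew, because the reservoir must recharge to a fixed pressure threshold before the next dyke forms. The numbers are invented, not the Burt et al. (1994) data.

```python
# Minimal sketch of a time-predictable eruption model with hypothetical data.
import numpy as np

volumes_km3 = np.array([0.04, 0.09, 0.02, 0.12, 0.06])  # erupted volumes (assumed)
repose_yr   = np.array([1.1,  2.4,  0.7,  3.3,  1.8])   # repose after each eruption (assumed)

# Fit repose = volume / supply_rate (least squares through the origin).
supply_rate = (volumes_km3 @ volumes_km3) / (volumes_km3 @ repose_yr)  # km^3 per yr
print(f"implied long-term supply rate ~ {supply_rate:.3f} km^3/yr")

# Expected repose following a new eruption of a given volume.
new_volume_km3 = 0.08
print(f"expected repose after a {new_volume_km3} km^3 eruption ~ {new_volume_km3 / supply_rate:.1f} yr")
```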

Abstract:

The enhanced radar return associated with melting snow, ‘the bright band’, can lead to large overestimates of rain rates. Most correction schemes rely on fitting the radar observations to a vertical profile of reflectivity (VPR) which includes the bright band enhancement. Observations show that the VPR is highly variable in space and time; large enhancements occur for melting snow, but none for the melting graupel in embedded convection. Applying a bright band VPR correction to a region of embedded convection will therefore lead to a severe underestimate of rainfall. We revive an earlier suggestion that high values of the linear depolarisation ratio (LDR) are an excellent means of detecting when bright band contamination is occurring, and that the value of LDR may be used to correct the value of Z in the bright band.
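A minimal sketch of how such an LDR-based correction might look is given below. The -20 dB threshold, the size of the reflectivity reduction and the Marshall-Palmer Z-R relation are illustrative assumptions, not values taken from the abstract.

```python
# Minimal sketch: flag bright-band contamination where LDR is high and reduce
# the reflectivity there before converting to rain rate.
import numpy as np

z_dbz  = np.array([28.0, 35.0, 41.0, 30.0])      # measured reflectivity (dBZ)
ldr_db = np.array([-27.0, -16.0, -14.0, -26.0])  # linear depolarisation ratio (dB)

bright_band = ldr_db > -20.0                     # melting snow gives high LDR (threshold assumed)
# Subtract an LDR-dependent enhancement, capped at 10 dB (assumed form).
correction = np.where(bright_band, np.minimum(ldr_db + 20.0, 10.0) * 1.5, 0.0)
z_corrected = z_dbz - correction

rain_rate = (10.0 ** (z_corrected / 10.0) / 200.0) ** (1.0 / 1.6)  # Marshall-Palmer Z-R
print(np.round(rain_rate, 2))
```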

Abstract:

The very first numerical models, developed more than 20 years ago, were drastic simplifications of the real atmosphere and were mostly restricted to describing adiabatic processes. For predictions of a day or two of the mid-tropospheric flow these models often gave reasonable results, but the results deteriorated quickly when the prediction was extended further in time. The prediction of the surface flow was unsatisfactory even for short predictions. It was evident that both the energy-generating and the dissipative processes have to be included in numerical models in order to predict the weather patterns in the lower part of the atmosphere and to predict the atmosphere in general beyond a day or two. Present-day computers make it possible to attack the weather forecasting problem in a more comprehensive and complete way, and substantial efforts have been made, during the last decade in particular, to incorporate the non-adiabatic processes in numerical prediction models. The physics of radiative transfer, condensation of moisture, turbulent transfer of heat, momentum and moisture, and the dissipation of kinetic energy are the most important processes associated with the formation of energy sources and sinks in the atmosphere, and these have to be incorporated in numerical prediction models extended over more than a few days. The mechanisms of these processes are mainly related to small-scale disturbances in space and time, or even molecular processes. It is therefore one of the basic characteristics of numerical models that these small-scale disturbances cannot be included in an explicit way. One reason for this is the discretization of the model's atmosphere by a finite difference grid or the use of a Galerkin or spectral function representation. The second reason why we cannot explicitly introduce these processes into a numerical model is that some physical processes necessary to describe them (such as local buoyancy) are a priori eliminated by the constraints of hydrostatic adjustment. Even if this physical constraint can be relaxed by making the models non-hydrostatic, the scale problem is virtually impossible to solve, and for the foreseeable future we have to try to incorporate the ensemble or gross effect of these physical processes on the large-scale synoptic flow. The formulation of this ensemble effect in terms of grid-scale variables (the parameters of the large-scale flow) is called 'parameterization'. For short-range prediction of the synoptic flow at middle and high latitudes, very simple parameterization has proven to be rather successful.
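To make the idea of parameterization concrete, here is a minimal sketch of a textbook example (not drawn from the text above): the sub-grid turbulent transfer of sensible heat at the surface expressed through grid-scale variables with a bulk aerodynamic formula, in which the exchange coefficient is an assumed, empirically tuned constant.

```python
# Minimal sketch: a bulk aerodynamic parameterization of the surface sensible
# heat flux, written entirely in terms of grid-scale variables.
RHO = 1.2      # air density (kg m^-3)
CP = 1004.0    # specific heat of air at constant pressure (J kg^-1 K^-1)
C_H = 1.2e-3   # bulk transfer coefficient for heat (dimensionless, assumed)

def sensible_heat_flux(wind_speed, t_surface, t_air):
    """Grid-scale sensible heat flux (W m^-2) from grid-scale wind and temperatures."""
    return RHO * CP * C_H * wind_speed * (t_surface - t_air)

# Example: 8 m/s wind over a surface 3 K warmer than the lowest model level.
print(f"{sensible_heat_flux(8.0, 288.0, 285.0):.1f} W m^-2")
```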

Abstract:

As laid out in its convention, there are eight different objectives for ECMWF. One of the major objectives will be the regular preparation of the data necessary for medium-range weather forecasts. The interpretation of this item is that the Centre will make forecasts once a day for a prediction period of up to 10 days. It is also evident that the Centre should not carry out any real weather forecasting but merely disseminate to the Member Countries the basic forecasting parameters with an appropriate resolution in space and time. It follows from this that the forecasting system at the Centre must, from the operational point of view, be functionally integrated with the Weather Services of the Member Countries. The operational interface between ECMWF and the Member Countries must be properly specified in order to obtain a reasonable flexibility for both systems. The problem of making numerical atmospheric predictions for periods beyond 4-5 days differs substantially from that of 2-3 day forecasting. From the physical point of view we can define a medium-range forecast as a forecast where the initial disturbances have lost their individual structure. However, we are still interested in predicting the atmosphere in a similar way as in short-range forecasting, which means that the model must be able to predict the dissipation and decay of the initial phenomena and the creation of new ones. With this definition, medium-range forecasting is indeed very difficult and generally regarded as more difficult than extended forecasts, where we usually predict only time and space mean values. The predictability of atmospheric flow has been extensively studied in recent years in theoretical investigations and by numerical experiments. As has been discussed elsewhere in this publication (see pp 338 and 431), a 10-day forecast is apparently on the fringe of predictability.

Abstract:

Currently there are few observations of the urban wind field at heights other than rooftop level. Remote sensing instruments such as Doppler lidars provide wind speed data at many heights, which would be useful in determining wind loadings of tall buildings and in predicting local air quality. Studies comparing remote sensing with traditional anemometers carried out in flat, homogeneous terrain often use scan patterns which take several minutes. In an urban context the flow changes quickly in space and time, so faster scans are required to ensure little change in the flow over the scan period. We compare 3993 h of wind speed data collected using a three-beam Doppler lidar wind-profiling method with data from a sonic anemometer at 190 m. Both instruments are located in central London, UK, a highly built-up area. Based on wind profile measurements every 2 min, the uncertainty in the hourly mean wind speed due to the sampling frequency is 0.05–0.11 m s−1. The lidar tended to overestimate the wind speed by ≈0.5 m s−1 for wind speeds below 20 m s−1. Accuracy may be improved by increasing the scanning frequency of the lidar. This method is considered suitable for use in urban areas.
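The quoted sampling uncertainty of the hourly mean can be reproduced in outline as a standard error over the roughly 30 two-minute profiles available per hour. The numbers below are invented values of a plausible magnitude, not the study's data.

```python
# Minimal sketch: sampling uncertainty of an hourly mean built from 2-min profiles.
import numpy as np

rng = np.random.default_rng(1)
two_min_speeds = 8.0 + rng.normal(0.0, 0.4, size=30)   # one hour of 2-min means (m/s, assumed)

hourly_mean = two_min_speeds.mean()
standard_error = two_min_speeds.std(ddof=1) / np.sqrt(len(two_min_speeds))
print(f"hourly mean = {hourly_mean:.2f} m/s, sampling uncertainty ~ {standard_error:.2f} m/s")
```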

Abstract:

The transcriptome of an organism is its set of gene transcripts (mRNAs) at a defined spatial and temporal locus. Because gene expression is affected markedly by environmental and developmental perturbations, it is widely assumed that transcriptome divergence among taxa represents adaptive phenotypic selection. This assumption has been challenged by neutral theories which propose that stochastic processes drive transcriptome evolution. To test for evidence of neutral transcriptome evolution in plants, we quantified 18,494 gene transcripts in nonsenescent leaves of 14 taxa of Brassicaceae using robust cross-species transcriptomics, which includes a two-step physical and in silico normalization procedure based on DNA similarity among taxa. Transcriptome divergence correlates positively with evolutionary distance between taxa and with variation in gene expression among samples. Results are similar for pseudogenes and for chloroplast genes evolving at different rates. Remarkably, variation in transcript abundance among root-cell samples correlates positively with transcriptome divergence among root tissues and among taxa. Because neutral processes affect transcriptome evolution in plants, many differences in gene expression among or within taxa may be nonfunctional, reflecting ancestral plasticity and founder effects. Appropriate null models are required when comparing transcriptomes in space and time.
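The headline correlation between transcriptome divergence and evolutionary distance can be sketched as a comparison of two pairwise distance matrices. The code below uses random stand-in data rather than the Brassicaceae measurements, and a simple rank correlation over the matrix upper triangles in place of a full Mantel test.

```python
# Minimal sketch: pairwise transcriptome divergence (1 - Spearman correlation of
# expression profiles) compared against a pairwise evolutionary-distance matrix.
import numpy as np
from scipy.stats import spearmanr
from scipy.spatial.distance import squareform

rng = np.random.default_rng(2)
n_taxa, n_genes = 14, 500
expression = rng.lognormal(mean=2.0, sigma=1.0, size=(n_taxa, n_genes))         # stand-in data
evo_distance = squareform(rng.uniform(1, 50, size=n_taxa * (n_taxa - 1) // 2))  # stand-in distances

divergence = np.zeros((n_taxa, n_taxa))
for i in range(n_taxa):
    for j in range(i + 1, n_taxa):
        rho, _ = spearmanr(expression[i], expression[j])
        divergence[i, j] = divergence[j, i] = 1.0 - rho

iu = np.triu_indices(n_taxa, k=1)                 # a full Mantel test would add permutations
r, _ = spearmanr(divergence[iu], evo_distance[iu])
print(f"divergence vs. evolutionary distance: rho = {r:.2f}")
```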

Abstract:

Global syntheses of palaeoenvironmental data are required to test climate models under conditions different from the present. Data sets for this purpose contain data from spatially extensive networks of sites. The data are either directly comparable to model output or readily interpretable in terms of modelled climate variables. Data sets must contain sufficient documentation to distinguish between raw (primary) and interpreted (secondary, tertiary) data, to evaluate the assumptions involved in interpretation of the data, to exercise quality control, and to select data appropriate for specific goals. Four data bases for the Late Quaternary, documenting changes in lake levels since 30 kyr BP (the Global Lake Status Data Base), vegetation distribution at 18 kyr and 6 kyr BP (BIOME 6000), aeolian accumulation rates during the last glacial-interglacial cycle (DIRTMAP), and tropical terrestrial climates at the Last Glacial Maximum (the LGM Tropical Terrestrial Data Synthesis) are summarised. Each has been used to evaluate simulations of Last Glacial Maximum (LGM: 21 calendar kyr BP) and/or mid-Holocene (6 cal. kyr BP) environments. Comparisons have demonstrated that changes in radiative forcing and orography due to orbital and ice-sheet variations explain the first-order, broad-scale (in space and time) features of global climate change since the LGM. However, atmospheric models forced by 6 cal. kyr BP orbital changes with unchanged surface conditions fail to capture quantitative aspects of the observed climate, including the greatly increased magnitude and northward shift of the African monsoon during the early to mid-Holocene. Similarly, comparisons with palaeoenvironmental datasets show that atmospheric models have underestimated the magnitude of cooling and drying of much of the land surface at the LGM. The inclusion of feedbacks due to changes in ocean- and land-surface conditions at both times, and atmospheric dust loading at the LGM, appears to be required in order to produce a better simulation of these past climates. The development of Earth system models incorporating the dynamic interactions among ocean, atmosphere, and vegetation is therefore mandated by Quaternary science results as well as climatological principles. For greatest scientific benefit, this development must be paralleled by continued advances in palaeodata analysis and synthesis, which in turn will help to define questions that call for new focused data collection efforts.

Abstract:

In theory, enrichment of the resource in a predator-prey model leads to destabilization of the system, thereby collapsing the trophic interaction, a phenomenon referred to as "the paradox of enrichment". After it was first proposed by Rosenzweig (1971), a number of subsequent studies were carried out on this dilemma over many decades. In this article, we review these theoretical and experimental works and give a brief overview of the proposed solutions to the paradox. The mechanisms that have been discussed are modifications of simple predator-prey models in the presence of prey that is inedible, invulnerable, unpalatable or toxic. Another class of mechanisms includes the incorporation of a ratio-dependent functional response, inducible defence of prey and density-dependent mortality of the predator. Moreover, we find a third set of explanations based on complex population dynamics, including chaos in space and time. We conclude that, although any one of the various mechanisms proposed so far might potentially prevent destabilization of the predator-prey dynamics following enrichment, in nature different mechanisms may combine to cause stability even when a system is enriched. The exact mechanisms, which may differ among systems, need to be disentangled through extensive field studies and laboratory experiments coupled with realistic theoretical models.
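The destabilization that the paradox refers to is easy to reproduce numerically. The sketch below integrates the standard Rosenzweig-MacArthur model with textbook-style parameter values (an illustration, not parameters taken from the review): with a low carrying capacity the populations settle to a steady state, whereas enrichment (a higher K) produces large predator-prey oscillations.

```python
# Minimal sketch: the paradox of enrichment in the Rosenzweig-MacArthur model.
import numpy as np
from scipy.integrate import solve_ivp

def rosenzweig_macarthur(t, y, K):
    r, a, h, e, m = 1.0, 1.0, 1.0, 0.5, 0.2    # growth, attack, handling, efficiency, mortality
    prey, pred = y
    uptake = a * prey / (1.0 + a * h * prey)   # Holling type-II functional response
    return [r * prey * (1.0 - prey / K) - uptake * pred,
            e * uptake * pred - m * pred]

for K in (1.5, 6.0):                           # poor vs. enriched environment
    sol = solve_ivp(rosenzweig_macarthur, (0, 400), [0.2, 0.1], args=(K,), max_step=0.1)
    late_prey = sol.y[0][sol.t > 300]          # behaviour after transients
    print(f"K = {K}: late-time prey range [{late_prey.min():.3f}, {late_prey.max():.3f}]")
```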

Abstract:

In addition to CO2, the climate impact of aviation is strongly influenced by non-CO2 emissions, such as nitrogen oxides, which influence ozone and methane, and water vapour, which can lead to the formation of persistent contrails in ice-supersaturated regions. Because these non-CO2 emission effects are characterised by a short lifetime, their climate impact largely depends on emission location and time; that is to say, emissions in certain locations (or at certain times) can lead to a greater climate impact (even on the global average) than the same emission in other locations (or at other times). Avoiding these climate-sensitive regions might thus be beneficial to climate. Here, we describe a modelling chain for investigating this climate impact mitigation option. The chain is a multi-step modelling approach, starting with the simulation of the fate of emissions released at a certain location and time (time-region grid points). This is performed with the chemistry–climate model EMAC, extended via the two submodels AIRTRAC (V1.0) and CONTRAIL (V1.0), which describe the contribution of emissions to the composition of the atmosphere and to contrail formation, respectively. The impact of emissions from the large number of time-region grid points is calculated efficiently by applying a Lagrangian scheme. EMAC also includes the calculation of radiative impacts, which are, in a second step, the input to climate metric formulas describing the global climate impact of the emission at each time-region grid point. The result of the modelling chain is a four-dimensional data set in space and time, which we call climate cost functions and which describes the global climate impact of an emission at each grid point and each point in time. In a third step, these climate cost functions are used in an air traffic simulator (SAAM) coupled to an emission tool (AEM) to optimise aircraft trajectories for the North Atlantic region. Here, we describe the details of this new modelling approach and show some example results. A number of sensitivity analyses are performed to motivate the settings of individual parameters. A stepwise sanity check of the results of the modelling chain is undertaken to demonstrate the plausibility of the climate cost functions.
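How a four-dimensional climate cost function might be consumed downstream can be sketched as a simple interpolation along a trajectory. Everything below (grid sizes, units, waypoints, emission amounts) is an invented stand-in for the EMAC/AIRTRAC output, intended only to show the lookup-and-sum step.

```python
# Minimal sketch: evaluate a 4-D climate cost function (impact per unit emission
# as a function of latitude, longitude, pressure level and time) along a flight
# trajectory and sum the contributions of its NOx emissions.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

lat  = np.linspace(30, 70, 9)        # deg N
lon  = np.linspace(-80, 10, 10)      # deg E
lev  = np.linspace(200, 300, 5)      # hPa
time = np.linspace(0, 24, 5)         # hours

rng = np.random.default_rng(3)
ccf = rng.uniform(0.5, 2.0, size=(lat.size, lon.size, lev.size, time.size))  # assumed values
ccf_interp = RegularGridInterpolator((lat, lon, lev, time), ccf)

# Hypothetical waypoints (lat, lon, level, time) and NOx emitted on each segment.
waypoints = np.array([[51.5,  -0.5, 240.0, 2.0],
                      [55.0, -20.0, 230.0, 4.0],
                      [58.0, -40.0, 225.0, 6.0]])
nox_kg = np.array([120.0, 150.0, 140.0])

impact = float(np.sum(ccf_interp(waypoints) * nox_kg))
print(f"total climate impact of the trajectory ~ {impact:.1f} (arbitrary metric units)")
```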

Abstract:

While state-of-the-art models of Earth's climate system have improved tremendously over the last 20 years, nontrivial structural flaws still hinder their ability to forecast the decadal dynamics of the Earth system realistically. Contrasting the skill of these models not only with each other but also with empirical models can reveal the space and time scales on which simulation models exploit their physical basis effectively and quantify their ability to add information to operational forecasts. The skill of decadal probabilistic hindcasts for annual global-mean and regional-mean temperatures from the EU Ensemble-Based Predictions of Climate Changes and Their Impacts (ENSEMBLES) project is contrasted with that of several empirical models. Both the ENSEMBLES models and a “dynamic climatology” empirical model show probabilistic skill above that of a static climatology for global-mean temperature. The dynamic climatology model, however, often outperforms the ENSEMBLES models. The fact that empirical models display skill similar to that of today's state-of-the-art simulation models suggests that empirical forecasts can improve decadal forecasts for climate services, just as in weather, medium-range, and seasonal forecasting. It is suggested that the direct comparison of simulation models with empirical models become a regular component of large model forecast evaluations. Doing so would clarify the extent to which state-of-the-art simulation models provide information beyond that available from simpler empirical models and clarify current limitations in using simulation forecasting for decision support. Ultimately, the skill of simulation models based on physical principles is expected to surpass that of empirical models in a changing climate; their direct comparison provides information on progress toward that goal, which is not available in model–model intercomparisons.
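A "dynamic climatology" benchmark of the kind referred to above can be as simple as forecasting next year's value with the mean of the last few years. The sketch below scores such a benchmark against a static climatology on a synthetic warming temperature series; the series, window length and scoring choice are illustrative assumptions, not those of the paper.

```python
# Minimal sketch: dynamic vs. static climatology for one-year-ahead forecasts.
import numpy as np

rng = np.random.default_rng(4)
years = np.arange(1960, 2011)
temps = 14.0 + 0.015 * (years - 1960) + rng.normal(0.0, 0.15, size=years.size)  # assumed series

k = 10                                           # window of the dynamic climatology (years)
static_clim = temps[:30].mean()                  # fixed reference climatology
dynamic_errs, static_errs = [], []
for i in range(30, temps.size):                  # forecast each year after a training period
    dynamic_errs.append(abs(temps[i] - temps[i - k:i].mean()))
    static_errs.append(abs(temps[i] - static_clim))

print(f"dynamic climatology MAE: {np.mean(dynamic_errs):.3f} K")
print(f"static climatology MAE:  {np.mean(static_errs):.3f} K")
```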

Abstract:

Well-resolved air–sea interactions are simulated in a new ocean mixed-layer, coupled configuration of the Met Office Unified Model (MetUM-GOML), comprising the MetUM coupled to the Multi-Column K Profile Parameterization ocean (MC-KPP). This is the first globally coupled system which provides a vertically resolved, high near-surface resolution ocean at a computational cost comparable to running in atmosphere-only mode. As well as being computationally inexpensive, this modelling framework is adaptable (the independent MC-KPP columns can be applied selectively in space and time) and controllable (by using temperature and salinity corrections the model can be constrained to any ocean state). The framework provides a powerful research tool for process-based studies of the impact of air–sea interactions in the global climate system. MetUM simulations have been performed which separate the impact of introducing interannual variability in sea surface temperatures (SSTs) from the impact of having atmosphere–ocean feedbacks. The representation of key aspects of tropical and extratropical variability is used to assess the performance of these simulations. Coupling the MetUM to MC-KPP is shown, for example, to reduce tropical precipitation biases, improve the propagation of, and spectral power associated with, the Madden–Julian Oscillation, and produce closer-to-observed patterns of springtime blocking activity over the Euro-Atlantic region.
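The temperature and salinity corrections mentioned above amount to constraining each column toward a chosen reference state. The sketch below shows the idea for a single temperature column as a relaxation term; the profile, time step and relaxation time-scale are invented, and the MC-KPP implementation details are not reproduced.

```python
# Minimal sketch: relax a model temperature column toward a reference ocean state.
import numpy as np

dt_seconds  = 3600.0                  # model time step (assumed)
tau_seconds = 15 * 86400.0            # relaxation e-folding time-scale (assumed, 15 days)

temp_column    = np.array([291.0, 290.2, 289.0, 287.5])  # model temperatures by depth (K)
temp_reference = np.array([290.0, 289.5, 288.8, 287.6])  # target climatology/analysis (K)

# Correction applied every time step: pull the column toward the reference state.
correction = (temp_reference - temp_column) * dt_seconds / tau_seconds
temp_column = temp_column + correction
print(np.round(correction, 4))
```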

Abstract:

Bloom filters are a data structure for storing data in a compressed form. They offer excellent space and time efficiency at the cost of some loss of accuracy (so-called lossy compression). This work presents a yes-no Bloom filter, which is a data structure consisting of two parts: the yes-filter, which is a standard Bloom filter, and the no-filter, which is another Bloom filter whose purpose is to represent those objects that were recognised incorrectly by the yes-filter (that is, to recognise the false positives of the yes-filter). By querying the no-filter after an object has been recognised by the yes-filter, we get a chance of rejecting it, which improves the accuracy of data recognition in comparison with a standard Bloom filter of the same total length. A further increase in accuracy is possible if one chooses the objects to include in the no-filter so that the no-filter recognises as many false positives as possible but no true positives, thus producing the most accurate yes-no Bloom filter among all yes-no Bloom filters. This paper studies how optimization techniques can be used to maximize the number of false positives recognised by the no-filter, under the constraint that it recognise no true positives. To achieve this aim, an Integer Linear Program (ILP) is proposed for the optimal selection of false positives. In practice the problem size is normally large, making the optimal solution intractable. Exploiting the similarity of the ILP to the Multidimensional Knapsack Problem, an Approximate Dynamic Programming (ADP) model is developed, making use of a reduced ILP for the value function approximation. Numerical results show that the ADP model performs best in comparison with a number of heuristics as well as the CPLEX built-in branch-and-bound solver, and it is therefore recommended for use in yes-no Bloom filters. In the wider context of the study of lossy compression algorithms, our research is an example of how the arsenal of optimization methods can be applied to improving the accuracy of compressed data.
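The query logic of a yes-no Bloom filter is straightforward to show in code. The sketch below is not the paper's implementation and skips the optimization-based selection of false positives entirely; it only demonstrates that an item is reported as present when the yes-filter accepts it and the no-filter (holding known false positives) does not.

```python
# Minimal sketch of a yes-no Bloom filter query.
import hashlib

class BloomFilter:
    def __init__(self, size_bits, num_hashes):
        self.size, self.k, self.bits = size_bits, num_hashes, bytearray(size_bits)

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

class YesNoBloomFilter:
    def __init__(self, yes_filter, no_filter):
        self.yes, self.no = yes_filter, no_filter

    def __contains__(self, item):
        # Reject items the yes-filter accepts but that are listed as false positives.
        return item in self.yes and item not in self.no

# Usage: members go into the yes-filter; selected false positives go into the no-filter.
yes, no = BloomFilter(256, 3), BloomFilter(64, 3)
for member in ["alpha", "beta", "gamma"]:
    yes.add(member)
no.add("delta")   # suppose "delta" had been identified as a false positive of the yes-filter
ynbf = YesNoBloomFilter(yes, no)
print("alpha" in ynbf, "delta" in ynbf)
```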

Abstract:

During the international FRAMZY expedition in March 2002, in-situ observations of Fram Strait cyclones were made by aircraft, ship and automatic buoys in order to study the interaction between cyclones and sea ice. The atmospheric characteristics of the observed cyclones are presented in this paper. The cyclones were generated in the baroclinic zone at the ice edge and moved NNE-ward along the ice edge. This was supported by warm-air advection from the WSW by an upper-level wave. The cyclones were rather small (diameter 200–700 km) and shallow (1–1.5 km e-folding height for the horizontal pressure and temperature difference), with lifetimes between 12 and 36 hours. In spite of the small space and time scales, remarkable extremes were observed within the cyclones. Winds reached maxima above 20 m s−1, lasting for only a few hours. The transition from the cold to the advancing warm air over sea ice occurred within narrow (5–30 km) frontal zones in which vorticity and convergence reached maxima on the order of 10−3 s−1. It is discussed whether the sea ice, in spite of its inertia, is able to react to these strong sub-cyclone-scale processes and, thus, whether these processes have to be taken into account in models in order to simulate the cyclone–sea ice interaction properly.

Abstract:

Remotely sensed rainfall is increasingly being used to manage climate-related risk in gauge-sparse regions. Applications based on such data must make maximal use of the skill of the methodology in order to avoid doing harm by providing misleading information. This is especially challenging in regions, such as Africa, which lack gauge data for validation. In this study, we show how calibrated ensembles of equally likely rainfall fields can be used to infer uncertainty in remotely sensed rainfall estimates, and subsequently in assessments of drought. We illustrate the methodology through a case study of weather index insurance (WII) in Zambia. Unlike traditional insurance, which compensates proven agricultural losses, WII pays out in the event that a weather index is breached. As remotely sensed rainfall is used to extend WII schemes to large numbers of farmers, it is crucial to ensure that the indices being insured are skillful representations of local environmental conditions. In our study we drive a land surface model with rainfall ensembles in order to demonstrate how aggregation of rainfall estimates in space and time results in a clearer link with soil moisture, and hence a truer representation of agricultural drought. Although our study focuses on agricultural insurance, the methodological principles for application design are widely applicable in Africa and elsewhere.
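The effect of aggregating ensemble rainfall in space and time, and the way an ensemble yields a probability of breaching an index trigger, can be sketched as follows. The ensemble, grid, season length and trigger value are synthetic assumptions, not the calibrated product used in the study.

```python
# Minimal sketch: spread of a rainfall ensemble before and after space-time
# aggregation, and the ensemble probability of falling below an index trigger.
import numpy as np

rng = np.random.default_rng(5)
n_members, n_days, ny, nx = 50, 90, 20, 20                 # 90-day season on a 20x20 grid (assumed)
# Each member carries a field-scale factor (a stand-in for calibrated ensemble spread).
member_factor = rng.lognormal(mean=0.0, sigma=0.15, size=(n_members, 1, 1, 1))
rain = member_factor * rng.gamma(0.3, 8.0, size=(n_members, n_days, ny, nx))  # mm/day

point_daily = rain[:, 0, 0, 0]                             # one day, one pixel, all members
season_area = rain.sum(axis=1).mean(axis=(1, 2))           # per member: seasonal total, area mean

trigger_mm = 200.0                                         # hypothetical insurance trigger
print(f"relative spread, daily point value:    {point_daily.std() / point_daily.mean():.2f}")
print(f"relative spread, seasonal areal total: {season_area.std() / season_area.mean():.2f}")
print(f"P(seasonal areal total < {trigger_mm:.0f} mm):     {np.mean(season_area < trigger_mm):.2f}")
```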