945 results for temporal and spatial renderings
Abstract:
Temporal and spatial variability of aerosol optical depth (AOD) is examined using observations of direct solar radiation in the Eurasian Arctic for 1940-1990. AOD is estimated using empirical methods for 14 stations located between 66.2 degrees N and 80.6 degrees N, from the Kara Sea to the Chukchi Sea. While AOD exhibits a well-known springtime maximum and summertime minimum at all stations, atmospheric turbidity is higher in spring in the western (Kara-Laptev) part of the Eurasian Arctic. Between June and August, the eastern (East Siberian-Chukchi) sector experiences higher transparency than the western part. A statistically significant positive trend in AOD was observed in the Kara-Laptev sector between the late 1950s and the early 1980s, predominantly in spring when pollution-derived aerosol dominates the Arctic atmosphere, but not in the eastern sector. Although all stations are remote, those with positive trends are located closer to the anthropogenic sources of air pollution. By contrast, a widespread decline in AOD was observed between 1982 and 1990 in the eastern Arctic in spring but was limited to two sites in the western Arctic. These results suggest that the post-1982 decline in anthropogenic emissions in Europe and the former Soviet Union has had a limited effect on aerosol load in the Arctic. The post-1982 negative trends in AOD in summer, when marine aerosol is present in the atmosphere, were more common in the west. The relationships between AOD and atmospheric circulation are examined using a synoptic climatology approach. In spring, AOD depends primarily on the strength and direction of air flow. Thus, strong westerly and northerly flows result in low AOD values in the East Siberian-Chukchi sector. By contrast, strong southerly flow associated with the passage of depressions results in high AOD in the Kara-Laptev sector, and trajectory analysis points to the contribution of industrial regions of the sub-Arctic.
In summer, low pressure gradient or anticyclonic conditions result in high atmospheric turbidity. The frequency of this weather type has declined significantly since the early 1980s in the Kara-Laptev sector, which partly explains the decline in summer AOD values. (c) 2004 Elsevier B.V. All rights reserved.
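The statistical significance of monotonic AOD trends like those described above is commonly assessed with a Mann-Kendall test. The sketch below is a minimal illustration of that test on a synthetic springtime AOD series; the series, drift, and noise level are invented for the example and are not the station data used in the study.

```python
import numpy as np

def mann_kendall(x):
    """Mann-Kendall test for a monotonic trend in a 1-D series.

    Returns the S statistic and its normal-approximation Z score
    (no tie correction; adequate for continuous AOD values).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S counts concordant minus discordant pairs.
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# Synthetic springtime AOD series, late 1950s to early 1980s, with an
# imposed upward drift (values are illustrative only).
rng = np.random.default_rng(0)
years = np.arange(1958, 1983)
aod = 0.15 + 0.002 * (years - years[0]) + rng.normal(0, 0.01, len(years))
s, z = mann_kendall(aod)
print(s, round(z, 2))  # |z| > 1.96 indicates significance at the 5% level
```

A seasonal variant of the same test (applied month by month) is what distinguishes the spring and summer trends discussed in the abstract.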
Abstract:
A study of the formation and propagation of volume anomalies in North Atlantic Mode Waters is presented, based on 100 yr of monthly mean fields taken from the control run of the Third Hadley Centre Coupled Ocean-Atmosphere GCM (HadCM3). Analysis of the temporal and spatial variability in the thickness between pairs of isothermal surfaces bounding the central temperature of the three main North Atlantic subtropical mode waters shows that large-scale variability in formation occurs over time scales ranging from 5 to 20 yr. The largest formation anomalies are associated with a southward shift in the mixed layer isothermal distribution, possibly due to changes in the gyre dynamics and/or changes in the overlying wind field and air-sea heat fluxes. The persistence of these anomalies is shown to result from their subduction beneath the winter mixed layer base where they recirculate around the subtropical gyre in the background geostrophic flow. Anomalies in the warmest mode (18 degrees C) formed on the western side of the basin persist for up to 5 yr. They are removed by mixing transformation to warmer classes and are returned to the seasonal mixed layer near the Gulf Stream where the stored heat may be released to the atmosphere. Anomalies in the cooler modes (16 degrees and 14 degrees C) formed on the eastern side of the basin persist for up to 10 yr. There is no clear evidence of significant transformation of these cooler mode anomalies to adjacent classes. It has been proposed that the eastern anomalies are removed through a tropical-subtropical water mass exchange mechanism beneath the trade wind belt (south of 20 degrees N). The analysis shows that anomalous mode water formation plays a key role in the long-term storage of heat in the model, and that the release of heat associated with these anomalies suggests a predictable climate feedback mechanism.
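The core diagnostic in the abstract above, thickness between a pair of isothermal surfaces, reduces to interpolating the depths of two isotherms in a temperature profile and differencing them. The sketch below illustrates this on an invented profile; the profile values and isotherm choices (19 and 17 degrees C bounding an 18-degree mode) are assumptions for the example, not model output.

```python
import numpy as np

def layer_thickness(depth, temp, t_warm, t_cold):
    """Thickness (m) of the layer between two isotherms in a profile
    where temperature decreases monotonically with depth."""
    # np.interp needs ascending x, so interpolate depth as a function
    # of temperature with both arrays reversed to ascending order.
    t = temp[::-1]
    z = depth[::-1]
    z_warm = np.interp(t_warm, t, z)  # depth of the warm isotherm
    z_cold = np.interp(t_cold, t, z)  # depth of the cold isotherm
    return z_cold - z_warm

# Invented subtropical profile: depth (m) and temperature (deg C).
depth = np.array([0.0, 100.0, 200.0, 300.0, 400.0, 600.0, 800.0])
temp = np.array([24.0, 20.0, 18.5, 17.5, 16.0, 14.0, 12.0])

# Thickness of the layer bounding the 18 degrees C mode water class.
h18 = layer_thickness(depth, temp, 19.0, 17.0)
print(round(h18, 1))
```

Applied at every grid point and month, the same calculation yields the thickness (and hence volume-anomaly) fields whose variability the study analyses.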
Abstract:
The development of genetically modified (GM) crops has led the European Union (EU) to put forward the concept of 'coexistence' to give farmers the freedom to plant both conventional and GM varieties. Should a premium for non-GM varieties emerge in the market, 'contamination' by GM pollen would generate a negative externality to conventional growers. It is therefore important to assess the effect of different 'policy variables' on the magnitude of the externality to identify suitable policies to manage coexistence. In this paper, taking GM herbicide-tolerant oilseed rape as a model crop, we start from the model developed in Ceddia et al. [Ceddia, M.G., Bartlett, M., Perrings, C., 2007. Landscape gene flow, coexistence and threshold effect: the case of genetically modified herbicide tolerant oilseed rape (Brassica napus). Ecol. Modell. 205, pp. 169-180], use a Monte Carlo experiment to generate data, and then estimate the effect of the number of GM and conventional fields, the width of buffer areas and the degree of spatial aggregation (i.e. the 'policy variables') on the magnitude of the externality at the landscape level. To represent realistic conditions in agricultural production, we assume that detection of GM material in conventional produce might occur at the field level (no grain mixing occurs) or at the silos level (where grain mixing from different fields in the landscape occurs). In the former case, the magnitude of the externality will depend on the number of conventional fields with average transgenic presence above a certain threshold. In the latter case, the magnitude of the externality will depend on whether the average transgenic presence across all conventional fields exceeds the threshold. In order to quantify the effect of the relevant 'policy variables', we compute the marginal effects and the elasticities.
Our results show that when relying on marginal effects to assess the impact of the different 'policy variables', spatial aggregation is far more important when transgenic material is detected at the field level, corroborating previous research. However, when elasticity is used, the effectiveness of spatial aggregation in reducing the externality is almost identical whether detection occurs at the field level or at the silos level. Our results also show that the area planted with GM is the most important 'policy variable' in affecting the externality to conventional growers, and that buffer areas on conventional fields are more effective than those on GM fields. The implications of the results for coexistence policies in the EU are discussed. (C) 2008 Elsevier B.V. All rights reserved.
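The distinction the abstract draws between marginal effects and elasticities can be made concrete with a small numerical sketch. The response surface, coefficients, and variable names below are invented for illustration, not the paper's estimated model; the point is only that on a log-linear surface the elasticity is scale-free while the marginal effect is not.

```python
import numpy as np

# Hypothetical fitted response: externality E as a function of three
# policy variables (GM field count g, buffer width b, aggregation index a).
# Coefficients are illustrative, not estimates from the paper.
beta = {"g": 0.80, "b": -0.30, "a": -0.50}

def externality(g, b, a):
    """Toy log-linear response surface: log E = c + sum(beta_i * log x_i)."""
    return np.exp(0.1) * g ** beta["g"] * b ** beta["b"] * a ** beta["a"]

def marginal_effect(f, x, key, h=1e-6):
    """Central-difference dE/dx_key at the point x (a dict of values)."""
    up, dn = dict(x), dict(x)
    up[key] += h
    dn[key] -= h
    return (f(**up) - f(**dn)) / (2 * h)

def elasticity(f, x, key):
    """Percent change in E per percent change in x_key: (dE/dx)*(x/E)."""
    return marginal_effect(f, x, key) * x[key] / f(**x)

x0 = {"g": 50.0, "b": 10.0, "a": 2.0}
for k in x0:
    print(k, marginal_effect(externality, x0, k), elasticity(externality, x0, k))
```

For this log-linear toy model the elasticities recover the exponents exactly, which is why rankings of policy variables can flip between the two measures: marginal effects depend on the units and level of each variable, elasticities do not.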
Comparison of Temporal and Standard Independent Component Analysis (ICA) Algorithms for EEG Analysis
The importance of the relationship between scale and process in understanding long-term DOC dynamics
Abstract:
Concentrations of dissolved organic carbon have increased in many, but not all, surface waters across acid-impacted areas of Europe and North America over the last two decades. Over the last eight years several hypotheses have been put forward to explain these increases, but none are yet accepted universally. Research in this area appears to have reached a stalemate between those favouring declining atmospheric deposition, climate change or land management as the key driver of long-term DOC trends. While it is clear that many of these factors influence DOC dynamics in soil and stream waters, their effect varies over different temporal and spatial scales. We argue that regional differences in acid deposition loading may account for the apparent discrepancies between studies. DOC has shown strong monotonic increases in areas which have experienced strong downward trends in pollutant sulphur and/or seasalt deposition. Elsewhere, climatic factors that strongly influence seasonality have also dominated inter-annual variability, and here long-term monotonic DOC trends are often difficult to detect. Furthermore, in areas receiving similar acid loadings, different catchment characteristics could have affected the site-specific sensitivity to changes in acidity and therefore the magnitude of DOC release in response to changes in sulphur deposition. We suggest that confusion over these temporal and spatial scales of investigation has contributed unnecessarily to the disagreement over the main regional driver(s) of DOC trends, and that the data behind the majority of these studies are more compatible than is often conveyed.
Abstract:
A method of estimating dissipation rates from a vertically pointing Doppler lidar with high temporal and spatial resolution has been evaluated by comparison with independent measurements derived from a balloon-borne sonic anemometer. This method utilizes the variance of the mean Doppler velocity from a number of sequential samples and requires an estimate of the horizontal wind speed. The noise contribution to the variance can be estimated from the observed signal-to-noise ratio and removed where appropriate. The relative size of the noise variance to the observed variance provides a measure of the confidence in the retrieval. Comparison with in situ dissipation rates derived from the balloon-borne sonic anemometer reveals that this particular Doppler lidar is capable of retrieving dissipation rates over a range of at least three orders of magnitude. This method is most suitable for retrieval of dissipation rates within the convective well-mixed boundary layer where the scales of motion that the Doppler lidar probes remain well within the inertial subrange. Caution must be applied when estimating dissipation rates in more quiescent conditions. For the particular Doppler lidar described here, the selection of suitably short integration times will permit this method to be applicable in such situations but at the expense of accuracy in the Doppler velocity estimates. The two case studies presented here suggest that, with profiles every 4 s, reliable estimates of ϵ can be derived to within at least an order of magnitude throughout almost all of the lowest 2 km and, in the convective boundary layer, to within 50%. Increasing the integration time for individual profiles to 30 s can improve the accuracy substantially but potentially confines retrievals to within the convective boundary layer. Therefore, optimization of certain instrument parameters may be required for specific implementations.
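The retrieval described above (noise-corrected velocity variance, horizontal wind speed, inertial-subrange scaling) can be sketched numerically. The formula below uses the standard inertial-subrange relation under Taylor's frozen-turbulence hypothesis; the Kolmogorov constant, the length-scale treatment, and all input numbers are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

KOLMOGOROV_A = 0.52  # one-dimensional Kolmogorov constant (assumed value)

def dissipation_rate(var_obs, var_noise, wind_speed, n_samples, dt):
    """Estimate the TKE dissipation rate (m^2 s^-3) from Doppler velocity
    variance via inertial-subrange scaling:

        eps = 2*pi * (2 / (3*a))**1.5 * sigma**3 / L,

    where sigma^2 is the noise-corrected velocity variance and
    L = U * N * dt is the horizontal length scale advected past the
    vertically pointing beam during the averaging period (Taylor's
    frozen-turbulence hypothesis). A sketch, not the published retrieval.
    """
    var = max(var_obs - var_noise, 0.0)   # remove the noise contribution
    length = wind_speed * n_samples * dt  # advected length scale (m)
    return 2 * np.pi * (2 / (3 * KOLMOGOROV_A)) ** 1.5 * var ** 1.5 / length

# Example: 10 profiles at 4 s spacing, 8 m/s horizontal wind, with the
# noise variance estimated from the signal-to-noise ratio (values invented).
eps = dissipation_rate(var_obs=0.05, var_noise=0.01, wind_speed=8.0,
                       n_samples=10, dt=4.0)
print(f"{eps:.2e}")  # order 1e-4 m^2 s^-3, typical of a convective layer
```

The noise subtraction step is why the ratio of noise variance to observed variance works as a confidence measure: when the two are comparable, the corrected variance (and hence eps) is dominated by the correction.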
Abstract:
The Earth-directed coronal mass ejection (CME) of 8 April 2010 provided an opportunity for space weather predictions from both established and developmental techniques to be made from near–real time data received from the SOHO and STEREO spacecraft; the STEREO spacecraft provide a unique view of Earth-directed events from outside the Sun-Earth line. Although the near–real time data transmitted by the STEREO Space Weather Beacon are significantly poorer in quality than the subsequently downlinked science data, the use of these data has the advantage that near–real time analysis is possible, allowing actual forecasts to be made. The fact that such forecasts cannot be biased by any prior knowledge of the actual arrival time at Earth provides an opportunity for an unbiased comparison between several established and developmental forecasting techniques. We conclude that for forecasts based on the STEREO coronagraph data, it is important to take account of the subsequent acceleration/deceleration of each CME through interaction with the solar wind, while predictions based on measurements of CMEs made by the STEREO Heliospheric Imagers would benefit from higher temporal and spatial resolution. Space weather forecasting tools must work with near–real time data; such data, when provided by science missions, is usually highly compressed and/or reduced in temporal/spatial resolution and may also have significant gaps in coverage, making such forecasts more challenging.
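The abstract's conclusion that coronagraph-based forecasts must account for subsequent acceleration or deceleration can be illustrated with a purely kinematic transit-time estimate. The sketch below compares a constant-speed (ballistic) arrival time with one under uniform deceleration toward the slower ambient solar wind; the speeds and deceleration value are invented, and this is not a physical drag model.

```python
AU_KM = 1.496e8  # mean Sun-Earth distance in km

def transit_time_hours(v0_kms, accel_kms2=0.0, distance_km=AU_KM):
    """Hours for a CME front to reach Earth under constant acceleration.

    Solves d = v0*t + 0.5*a*t**2 for t; a = 0 recovers the constant-speed
    estimate, while a negative a mimics drag deceleration toward the
    ambient solar wind. Purely kinematic illustration.
    """
    if accel_kms2 == 0.0:
        t = distance_km / v0_kms
    else:
        # positive root of 0.5*a*t**2 + v0*t - d = 0
        disc = v0_kms ** 2 + 2 * accel_kms2 * distance_km
        t = (-v0_kms + disc ** 0.5) / accel_kms2
    return t / 3600.0

# An 800 km/s CME: ballistic estimate vs. one decelerating en route
# (deceleration chosen so the front arrives near ambient wind speed).
print(round(transit_time_hours(800.0), 1))           # ballistic, ~52 h
print(round(transit_time_hours(800.0, -1.5e-3), 1))  # longer with drag
```

The gap between the two estimates (here more than half a day) is exactly the bias the abstract warns about when near-Sun coronagraph speeds are extrapolated to 1 AU without an interplanetary correction.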
Abstract:
The dependence of much of Africa on rain-fed agriculture leads to a high vulnerability to fluctuations in rainfall amount. Hence, accurate monitoring of near-real time rainfall is particularly useful, for example in forewarning possible crop shortfalls in drought-prone areas. Unfortunately, ground-based observations are often inadequate. Rainfall estimates from satellite-based algorithms and numerical model outputs can fill this data gap; however, rigorous assessment of such estimates is required. In this case, three satellite-based products (NOAA-RFE 2.0, GPCP-1DD and TAMSAT) and two numerical model outputs (ERA-40 and ERA-Interim) have been evaluated for Uganda in East Africa using a network of 27 rain gauges. The study focuses on the years 2001 to 2005 and considers the main rainy season (February to June). All data sets were converted to the same temporal and spatial scales. Kriging was used for the spatial interpolation of the gauge data. All three satellite products showed similar characteristics and had a high level of skill that exceeded both model outputs. ERA-Interim had a tendency to overestimate whilst ERA-40 consistently underestimated the Ugandan rainfall.
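A validation like the one summarised above rests on simple skill statistics computed against the gauge analysis. The sketch below shows bias, RMSE, and correlation for two invented products, one overestimating and one underestimating (loosely mimicking the reported ERA-Interim and ERA-40 behaviour); all numbers are illustrative, not the study's data.

```python
import numpy as np

def verification_stats(estimate, gauge):
    """Bias, RMSE and Pearson correlation of an estimate vs. gauge values."""
    e = np.asarray(estimate, dtype=float)
    g = np.asarray(gauge, dtype=float)
    bias = float(np.mean(e - g))                  # mean error (same units)
    rmse = float(np.sqrt(np.mean((e - g) ** 2)))  # typical error magnitude
    corr = float(np.corrcoef(e, g)[0, 1])         # pattern agreement
    return bias, rmse, corr

# Illustrative dekadal rainfall totals (mm): interpolated gauge values,
# an overestimating product and an underestimating one.
gauge = [12.0, 30.0, 45.0, 8.0, 60.0, 25.0]
over  = [15.0, 34.0, 50.0, 12.0, 66.0, 30.0]
under = [ 9.0, 24.0, 38.0, 5.0, 52.0, 20.0]

for name, est in [("over", over), ("under", under)]:
    print(name, verification_stats(est, gauge))
```

Reporting all three statistics matters: both toy products correlate almost perfectly with the gauges, so the systematic over- and underestimation only shows up in the bias.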
Abstract:
The application of automatic segmentation methods in lesion detection is desirable. However, such methods are restricted by intensity similarities between lesioned and healthy brain tissue. Using multi-spectral magnetic resonance imaging (MRI) modalities may overcome this problem but it is not always practicable. In this article, a lesion detection approach requiring a single MRI modality is presented, which is an improved method based on a recent publication. This new method assumes that a low similarity should be found in the regions of lesions when the likeness between an intensity-based fuzzy segmentation and location-based tissue probabilities is measured. The use of a normalized similarity measurement enables the current method to fine-tune the threshold for lesion detection, thus maximizing the possibility of reaching high detection accuracy. Importantly, an extra cleaning step is included in the current approach, which removes enlarged ventricles from detected lesions. The performance investigation using simulated lesions demonstrated that not only were the majority of lesions well detected but also normal tissues were identified effectively. Tests on images acquired in stroke patients further confirmed the strength of the method in lesion detection. When compared with the previous version, the current approach showed a higher sensitivity in detecting small lesions and had fewer false positives around the ventricles and the edge of the brain.
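The central idea, flagging voxels where an intensity-based fuzzy segmentation disagrees with location-based tissue priors, can be sketched in a few lines. The similarity measure (a per-voxel dot product of class memberships), the min-max normalization, the threshold, and the toy data below are all assumptions for illustration, not the published pipeline.

```python
import numpy as np

def detect_lesions(fuzzy_seg, tissue_prior, threshold=0.3):
    """Flag voxels where an intensity-based fuzzy segmentation disagrees
    with location-based tissue probabilities.

    Both inputs are (voxels, classes) membership arrays with rows summing
    to 1. Similarity is the per-voxel dot product (high when the two
    agree); it is min-max normalized so the threshold is scale-free.
    A sketch of the idea only; the published method also cleans the
    result, e.g. removing enlarged ventricles from the detections.
    """
    sim = np.sum(fuzzy_seg * tissue_prior, axis=1)
    sim = (sim - sim.min()) / (sim.max() - sim.min())  # normalize to [0, 1]
    return sim < threshold  # low similarity -> candidate lesion voxel

# Three tissue classes (e.g. GM, WM, CSF), four voxels. The third voxel's
# intensity-based membership contradicts its atlas prior, as a lesion would.
fuzzy = np.array([[0.8, 0.1, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.9, 0.05, 0.05],
                  [0.1, 0.1, 0.8]])
prior = np.array([[0.7, 0.2, 0.1],
                  [0.2, 0.7, 0.1],
                  [0.05, 0.05, 0.9],
                  [0.1, 0.2, 0.7]])
print(detect_lesions(fuzzy, prior))  # only the mismatched voxel is flagged
```

Because the similarity is normalized per image, the same threshold can be reused across scans, which is what makes the threshold tuning described in the abstract practical.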