123 results for High dynamic range


Relevance:

30.00%

Publisher:

Abstract:

Canopy leaf area index (LAI), defined as the single-sided leaf area per unit ground area, is a quantitative measure of canopy foliar area. LAI is a controlling biophysical property of vegetation function, and quantifying LAI is thus vital for understanding energy, carbon and water fluxes between the land surface and the atmosphere. LAI is routinely available from Earth Observation (EO) instruments such as MODIS. However, EO-derived estimates of LAI require validation before they are utilised by the ecosystem modelling community. Previous validation work on the MODIS collection 4 (c4) product suggested considerable error, especially in forested biomes, and as a result the MODIS LAI algorithm was significantly modified for the most recent collection 5 (c5). Because of these changes the current MODIS LAI product has not been widely validated. We present a validation of the MODIS c5 LAI product over a 121 km² area of mixed coniferous forest in Oregon, USA, based on detailed ground measurements which we have upscaled using high-resolution EO data. Our analysis suggests that, for the site we examined, c5 shows a much more realistic temporal LAI dynamic than c4. We find improved spatial consistency between the MODIS c5 LAI product and upscaled in situ measurements. However, the results also suggest that the c5 LAI product underestimates the upper range of upscaled in situ LAI measurements.
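For reference, the definition quoted above can be written as a simple ratio of areas:

```latex
% LAI: single-sided leaf area per unit horizontal ground area (dimensionless)
\mathrm{LAI} = \frac{A_{\text{leaf (one-sided)}}}{A_{\text{ground}}}
```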

Relevance:

30.00%

Publisher:

Abstract:

Large-scale bottom-up estimates of terrestrial carbon fluxes, whether based on models or inventory, are highly dependent on the assumed land cover. Most current land cover and land cover change maps are based on satellite data and are likely to be so for the foreseeable future. However, these maps show large differences, both at the class level and when transformed into Plant Functional Types (PFTs), and these can lead to large differences in terrestrial CO2 fluxes estimated by Dynamic Vegetation Models. In this study the Sheffield Dynamic Global Vegetation Model is used. We compare PFT maps and the resulting fluxes arising from the use of widely available moderate (1 km) resolution satellite-derived land cover maps (the Global Land Cover 2000 and several MODIS classification schemes), with fluxes calculated using a reference high (25 m) resolution land cover map specific to Great Britain (the Land Cover Map 2000). We demonstrate that uncertainty is introduced into carbon flux calculations by (1) incorrect or uncertain assignment of land cover classes to PFTs; (2) information loss at coarser resolutions; (3) difficulty in discriminating some vegetation types from satellite data. When averaged over Great Britain, modeled CO2 fluxes derived using the different 1 km resolution maps differ from estimates made using the reference map. The ranges of these differences are 254 gC m⁻² a⁻¹ in Gross Primary Production (GPP); 133 gC m⁻² a⁻¹ in Net Primary Production (NPP); and 43 gC m⁻² a⁻¹ in Net Ecosystem Production (NEP). In GPP this accounts for differences of −15.8% to 8.8%. Results for living biomass exhibit a range of 1109 gC m⁻². The types of uncertainties due to land cover confusion are likely to be representative of many parts of the world, especially heterogeneous landscapes such as those found in western Europe.

Relevance:

30.00%

Publisher:

Abstract:

An isolate of L. monocytogenes Scott A that is tolerant to high hydrostatic pressure (HHP), named AK01, was isolated upon a single pressurization treatment of 400 MPa for 20 min and was further characterized. The survival of exponential- and stationary-phase cells of AK01 in ACES [N-(2-acetamido)-2-aminoethanesulfonic acid] buffer was at least 2 log units higher than that of the wild type over a broad range of pressures (150 to 500 MPa), while both strains showed higher HHP tolerance (piezotolerance) in the stationary than in the exponential phase of growth. In semiskim milk, exponential-phase cells of both strains showed lower reductions upon pressurization than in buffer, but again, AK01 was more piezotolerant than the wild type. The piezotolerance of AK01 was retained for at least 40 generations in rich medium, suggesting a stable phenotype. Interestingly, cells of AK01 lacked flagella, were elongated, and showed slightly lower maximum specific growth rates than the wild type at 8, 22, and 30°C. Moreover, the piezotolerant strain AK01 showed increased resistance to heat, acid, and H2O2 compared with the wild type. The difference in HHP tolerance between the piezotolerant strain and the wild-type strain could not be attributed to differences in membrane fluidity, since strain AK01 and the wild type had identical in situ lipid melting curves as determined by Fourier transform infrared spectroscopy. The demonstrated occurrence of a piezotolerant isolate of L. monocytogenes underscores the need to further investigate the mechanisms underlying HHP resistance of food-borne microorganisms, which in turn will contribute to the appropriate design of safe, accurate, and feasible HHP treatments.
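As a concrete reading of the "log units" figures, reductions are base-10 logarithms of count ratios; a minimal sketch with hypothetical plate counts:

```python
import math

def log10_reduction(n0, n):
    """Log-unit reduction: log10 of the initial count over the surviving count."""
    return math.log10(n0 / n)

# Hypothetical plate counts (CFU/ml) before and after one 400 MPa treatment
wild_type_kill = log10_reduction(1e8, 1e3)   # 5.0 log units inactivated
ak01_kill = log10_reduction(1e8, 1e5)        # 3.0 log units inactivated
print(wild_type_kill - ak01_kill)            # AK01 survival is 2 log units higher
```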

Relevance:

30.00%

Publisher:

Abstract:

Predictability is considered in the context of the seamless weather-climate prediction problem, and the notion is developed that there can be predictive power on all time-scales. On all scales there are phenomena with longer time-scales, as well as external conditions, which should combine to give some predictability. To what extent this theoretical predictability may actually be realised and, further, to what extent it may be useful is not clear. However, the potential should provide a stimulus to, and a high profile for, our science and its application for many years.

Relevance:

30.00%

Publisher:

Abstract:

With many operational centers moving toward order 1-km-gridlength models for routine weather forecasting, this paper presents a systematic investigation of the properties of high-resolution versions of the Met Office Unified Model for short-range forecasting of convective rainfall events. The authors describe a suite of configurations of the Met Office Unified Model running with grid lengths of 12, 4, and 1 km and analyze results from these models for a number of convective cases from the summers of 2003, 2004, and 2005. The analysis includes subjective evaluation of the rainfall fields and comparisons of rainfall amounts, initiation, cell statistics, and a scale-selective verification technique. It is shown that the 4- and 1-km-gridlength models often give more realistic-looking precipitation fields because convection is represented explicitly rather than parameterized. However, the 4-km representation suffers from large convective cells and delayed initiation because the grid length is too long to reproduce the convection correctly in explicit form. These problems are less evident in the 1-km model, although in some situations it produces too many small cells. Both the 4- and 1-km models suffer from poor representation at the start of the forecast, during the period when the high-resolution detail is spinning up from the lower-resolution (12 km) starting data. A scale-selective precipitation verification technique implies that at later times in the forecasts (after the spinup period) the 1-km model performs better than the 12- and 4-km models for lower rainfall thresholds. For higher thresholds the 4-km model scores almost as well as the 1-km model, and both do better than the 12-km model.
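The "scale-selective verification technique" is, in studies of this kind, typically a neighbourhood method such as the fractions skill score (FSS); the sketch below assumes FSS and uses synthetic fields, so the threshold, neighbourhood size and data are illustrative rather than the paper's.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fss(forecast, observed, threshold, scale):
    """Fractions skill score for one rain-rate threshold and one
    neighbourhood scale (in grid points). 1 = perfect, 0 = no skill."""
    f = uniform_filter((forecast >= threshold).astype(float), size=scale)
    o = uniform_filter((observed >= threshold).astype(float), size=scale)
    mse = np.mean((f - o) ** 2)
    mse_ref = np.mean(f ** 2) + np.mean(o ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

rng = np.random.default_rng(0)
fcst, obs = rng.gamma(0.5, 2.0, (2, 256, 256))   # stand-in rain-rate fields
print(fss(fcst, obs, threshold=1.0, scale=25))
```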

Relevance:

30.00%

Publisher:

Abstract:

At the end of the 20th century, we can look back on a spectacular development of numerical weather prediction, which has continued practically uninterrupted since the middle of the century. High-resolution predictions for more than a week ahead for any part of the globe are now routinely produced, and anyone with an Internet connection can access many of these forecasts for anywhere in the world. Extended predictions for several seasons ahead are also being made; the latest El Niño event in 1997/1998 is an example of such a successful prediction. This great achievement is due to a number of factors, including the progress in computational technology and the establishment of global observing systems, combined with a systematic research program with an overall strategy towards building comprehensive prediction systems for climate and weather. In this article, I will discuss the different evolutionary steps in this development and the way new scientific ideas have contributed to the efficient exploitation of computing power and of observations from new types of observing systems. Weather prediction is not an exact science, owing to unavoidable errors in initial data and in the models. Quantifying the reliability of a forecast is therefore essential, probably more so the longer the forecast range. Ensemble prediction is thus a new and important concept in weather and climate prediction, which I believe will become a routine aspect of weather prediction in the future. The boundary between weather and climate prediction is becoming more and more diffuse, and in the final part of this article I will outline the way I think development may proceed in the future.

Relevance:

30.00%

Publisher:

Abstract:

Long-range global climate forecasts have been made by using a model for predicting tropical Pacific sea surface temperature (SST) in tandem with an atmospheric general circulation model. The SST is predicted first, at long lead times into the future. These ocean forecasts are then used to force the atmospheric model and so produce climate forecasts at the lead times of the SST forecasts. Predictions of the wintertime 500 mb height, surface air temperature and precipitation for seven large climatic events of the 1970s–1990s by this two-tiered technique agree well in general with observations over many regions of the globe. The levels of agreement are high enough in some regions to have practical utility.
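As a sketch of the two-tiered protocol (predict SST first, then force the atmospheric model with those SSTs), here is a minimal Python outline; both model drivers are toy stand-ins and every function name is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_sst(initial_sst, lead_months):
    """Tier 1 (stand-in): damped-persistence SST forecast, one field per month."""
    return [initial_sst * 0.9 ** m for m in range(1, lead_months + 1)]

def run_atmosphere(sst):
    """Tier 2 (stand-in): 'atmospheric response' to one predicted SST field."""
    return {"z500": 5500 + 10 * sst.mean(), "precip": np.maximum(sst, 0).mean()}

initial_sst = rng.normal(0.0, 1.0, (10, 30))   # toy tropical Pacific SST anomaly
# Forcing the atmosphere with *predicted* SSTs makes climate forecasts
# available at the same lead times as the SST forecasts themselves.
forecasts = [run_atmosphere(s) for s in predict_sst(initial_sst, lead_months=6)]
```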

Relevance:

30.00%

Publisher:

Abstract:

Total ozone trends are typically studied using linear regression models that assume a first-order autoregression of the residuals [so-called AR(1) models]. We consider total ozone time series over 60°S–60°N from 1979 to 2005 and show that most latitude bands exhibit long-range correlated (LRC) behavior, meaning that ozone autocorrelation functions decay by a power law rather than exponentially as in AR(1). At such latitudes the uncertainties of total ozone trends are greater than those obtained from AR(1) models, and the expected time required to detect ozone recovery is correspondingly longer. We find no evidence of LRC behavior in southern middle and high subpolar latitudes (45°–60°S), where the long-term ozone decline attributable to anthropogenic chlorine is greatest. We thus confirm an earlier prediction based on an AR(1) analysis that this region (especially the highest latitudes, and especially the South Atlantic) is the optimal location for the detection of ozone recovery, with a statistically significant ozone increase attributable to chlorine likely to be detectable by the end of the next decade. In northern middle and high latitudes, on the other hand, there is clear evidence of LRC behavior. This increases the uncertainties on the long-term trend attributable to anthropogenic chlorine by about a factor of 1.5 and lengthens the expected time to detect ozone recovery by a similar amount (from ∼2030 to ∼2045). If the long-term changes in ozone are instead fit by a piecewise-linear trend rather than by stratospheric chlorine loading, then the strong decrease of northern middle- and high-latitude ozone during the first half of the 1990s and its subsequent increase in the second half of the 1990s project more strongly onto the trend and make a smaller contribution to the noise. This both increases the trend and weakens the LRC behavior at these latitudes, to the extent that ozone recovery (according to this model, and in the sense of a statistically significant ozone increase) is already on the verge of being detected. The implications of this rather controversial interpretation are discussed.
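The AR(1)-versus-LRC distinction can be made concrete: AR(1) residuals have autocorrelation ρ(k) = φᵏ, which decays exponentially, whereas LRC residuals decay as a power law, ρ(k) ∝ k^(−γ). A small numpy comparison with illustrative parameters:

```python
import numpy as np

lags = np.arange(1, 25)

# AR(1) residuals: autocorrelation decays exponentially, rho(k) = phi**k.
phi = 0.6
acf_ar1 = phi ** lags

# Long-range correlated residuals: rho(k) ~ k**(-gamma), 0 < gamma < 1,
# so correlations persist far longer at large lags.
gamma = 0.4
acf_lrc = lags ** -gamma

for k in (1, 6, 24):
    print(f"lag {k:2d}:  AR(1) {acf_ar1[k-1]:.3f}   LRC {acf_lrc[k-1]:.3f}")
# The slow ACF decay inflates trend-estimate variance, which is why LRC
# latitudes need a longer record to detect ozone recovery.
```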

Relevance:

30.00%

Publisher:

Abstract:

Neurovascular coupling in response to stimulation of the rat barrel cortex was investigated using concurrent multichannel electrophysiology and laser Doppler flowmetry. The data were used to build a linear dynamic model relating neural activity to blood flow. Local field potential time series were subjected to current source density analysis, and the time series of a layer IV sink of the barrel cortex was used as the input to the model. The model output was the time series of the changes in regional cerebral blood flow (CBF). We show that this model provides an excellent fit to the CBF responses for stimulus durations of up to 16 s. The structure of the model consisted of two coupled components representing vascular dilation and constriction. The complex temporal characteristics of the CBF time series were reproduced by the relatively simple balance of these two components. We show that the impulse response obtained under the 16-s stimulation condition generalised to provide a good prediction of the data from the shorter-duration stimulation conditions. Furthermore, by optimising three of the nine model parameters, the variability in the data could be well accounted for over a wide range of stimulus conditions. By establishing linearity, classic system analysis methods can be used to generate and explore a range of equivalent model structures (e.g., feed-forward or feedback) to guide the experimental investigation of the control of vascular dilation and constriction following stimulation.
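The model class described above amounts to convolving the neural input with an impulse response built from two opposing components. The sketch below uses made-up gamma-shaped kernels and weights, not the paper's fitted parameters:

```python
import numpy as np

dt = 0.1                                   # time step, seconds
t = np.arange(0, 25, dt)

def gamma_kernel(t, peak, width):
    """Gamma-shaped response peaking at `peak` seconds (sum-normalised)."""
    h = (t / peak) ** (peak / width) * np.exp(-(t - peak) / width)
    return h / h.sum()

# Two coupled components: a fast dilation minus a slower, weaker constriction.
impulse_response = (gamma_kernel(t, peak=2.0, width=1.0)
                    - 0.4 * gamma_kernel(t, peak=8.0, width=3.0))

stimulus = ((t >= 1.0) & (t < 17.0)).astype(float)        # 16-s stimulation
cbf = np.convolve(stimulus, impulse_response)[: t.size]   # predicted CBF change
```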

Relevance:

30.00%

Publisher:

Abstract:

Background: Affymetrix GeneChip arrays are widely used for transcriptomic studies in a diverse range of species. Each gene is represented on a GeneChip array by a probe-set, consisting of up to 16 probe-pairs. Signal intensities across probe-pairs within a probe-set vary in part due to different physical hybridisation characteristics of individual probes with their target labelled transcripts. We have previously developed a technique to study the transcriptomes of heterologous species based on hybridising genomic DNA (gDNA) to a GeneChip array designed for a different species, and subsequently using only those probes with good homology. Results: Here we have investigated the effects of hybridising homologous-species gDNA to study the transcriptomes of species for which the arrays have been designed. Genomic DNA from Arabidopsis thaliana and rice (Oryza sativa) was hybridised to the Affymetrix Arabidopsis ATH1 and Rice Genome GeneChip arrays respectively. Probe selection based on gDNA hybridisation intensity increased the number of genes identified as significantly differentially expressed in two published studies of Arabidopsis development, and optimised the analysis of technical replicates obtained from pooled samples of RNA from rice. Conclusion: This mixed physical and bioinformatics approach can be used to optimise estimates of gene expression when using GeneChip arrays.
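In code, the probe-selection idea reduces to thresholding each probe's gDNA hybridisation signal and carrying only the surviving probes into the RNA analysis; all numbers below are hypothetical:

```python
import numpy as np

def select_probes(gdna_signal, threshold):
    """Boolean mask of probes whose gDNA hybridisation intensity indicates
    a good match to the genome of the species under study."""
    return gdna_signal >= threshold

# Hypothetical PM signals for one probe-set (up to 16 probe-pairs per set)
gdna = np.array([35.0, 900.0, 20.0, 410.0, 1500.0, 75.0, 640.0, 15.0])
mask = select_probes(gdna, threshold=400)
print(f"{mask.sum()} of {mask.size} probe-pairs retained")
# Downstream expression estimates for this probe-set would then be
# summarised from the RNA signal of the retained probe-pairs only.
```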

Relevance:

30.00%

Publisher:

Abstract:

High-density oligonucleotide (oligo) arrays are a powerful tool for transcript profiling. Arrays based on GeneChip® technology are amongst the most widely used, although GeneChip® arrays are currently available for only a small number of plant and animal species. Thus, we have developed a method to improve the sensitivity of high-density oligonucleotide arrays when applied to heterologous species, and tested the method by analysing the transcriptome of Brassica oleracea L., a species for which no GeneChip® array is available, using a GeneChip® array designed for Arabidopsis thaliana (L.) Heynh. Genomic DNA from B. oleracea was labelled and hybridised to the ATH1-121501 GeneChip® array. Arabidopsis thaliana probe-pairs that hybridised to the B. oleracea genomic DNA on the basis of the perfect-match (PM) probe signal were then selected for subsequent B. oleracea transcriptome analysis, using a .cel file parser script to generate probe mask files. The transcriptional response of B. oleracea to a mineral nutrient (phosphorus; P) stress was quantified using probe mask files generated for a wide range of gDNA hybridisation intensity thresholds. An example probe mask file generated with a gDNA hybridisation intensity threshold of 400 removed >68% of the available PM probes from the analysis but retained >96% of the available A. thaliana probe-sets. Ninety-nine of these genes were then identified as significantly regulated under P stress in B. oleracea, including homologues of P stress responsive genes in A. thaliana. Increasing the gDNA hybridisation intensity threshold for probe selection up to 500 increased the sensitivity of the GeneChip® array to detect regulation of gene expression in B. oleracea under P stress by up to 13-fold. Our open-source software to create probe mask files is freely available at http://affymetrix.arabidopsis.info/xspecies/ and may be used to facilitate transcriptomic analyses of a wide range of plant and animal species in the absence of custom arrays.
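The reported trade-off (a higher threshold discards most individual probes while losing very few whole probe-sets) can be sketched as a threshold sweep over synthetic signals; the distribution parameters are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sets, pairs_per_set = 1000, 11
gdna = rng.lognormal(5.5, 1.2, (n_sets, pairs_per_set))   # synthetic PM signals

for threshold in (100, 200, 400, 500):
    keep = gdna >= threshold
    probes_kept = keep.mean()              # fraction of all PM probes retained
    sets_kept = keep.any(axis=1).mean()    # probe-sets keeping >= 1 probe
    print(f"threshold {threshold}: {probes_kept:.0%} probes, "
          f"{sets_kept:.0%} probe-sets retained")
```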

Relevance:

30.00%

Publisher:

Abstract:

What are the microfoundations of dynamic capabilities that sustain competitive advantage in a highly volatile environment, such as a transition economy? We explore the detailed nature of these dynamic capabilities, along with their antecedents, by tracing the sequence of their development in a longitudinal case study of an organization subject to an external context of radical transition: the Russian oil company Yukos. First, our rich qualitative data indicate two distinct types of dynamic capabilities that are pivotal for organizational transformation. Adaptation dynamic capabilities relate to routines of resource exploitation and deployment, which are supported by acquisition, internalization and dissemination of extant knowledge, as well as resource reconfiguration, divestment and integration. Innovation dynamic capabilities relate to the creation of completely new capabilities via exploration and path-creation processes, which are supported by search, experimentation and risk taking, as well as project selection, funding and implementation. Second, we find that sequencing the two types of dynamic capabilities helped the organization both to secure short-term competitive advantage and to create the basis for long-term competitive advantage. These dynamic capability constructs advance theoretical understanding of what dynamic capabilities are, whilst their sequencing explains how firms create, leverage and enhance them over time.

Relevance:

30.00%

Publisher:

Abstract:

Exascale systems are the next frontier in high-performance computing and are expected to deliver a performance of the order of 10^18 operations per second using massive multicore processors. Very large- and extreme-scale parallel systems pose critical algorithmic challenges, especially related to concurrency, locality and the need to avoid global communication patterns. This work investigates a novel protocol for dynamic group communication that can be used to remove the global communication requirement and to reduce the communication cost in parallel formulations of iterative data mining algorithms. The protocol is used to provide a communication-efficient parallel formulation of the k-means algorithm for cluster analysis. The approach is based on a collective communication operation for dynamic groups of processes and exploits non-uniform data distributions. Non-uniform data distributions can be either found in real-world distributed applications or induced by means of multidimensional binary search trees. The analysis of the proposed dynamic group communication protocol has shown that it does not introduce significant communication overhead. The parallel clustering algorithm has also been extended to accommodate an approximation error, which allows a further reduction of the communication costs. The effectiveness of the exact and approximate methods has been tested in a parallel computing system with 64 processors and in simulations with 1024 processing elements.
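The paper's protocol is not reproduced here, but its core idea (replacing a global reduction over all k centroids with reductions over dynamic groups containing only the processes that hold points assigned to each centroid) can be sketched with mpi4py subcommunicators. The final small global sum that keeps centroid replicas consistent is a simplification of the protocol:

```python
# One k-means iteration with dynamic group communication (run under mpirun).
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
rng = np.random.default_rng(rank)
k, dim = 4, 2
X = rng.normal(rank % k, 0.5, (100, dim))       # non-uniform local data block
C = comm.bcast(rng.normal(0, 2, (k, dim)) if rank == 0 else None)

assign = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1).argmin(axis=1)
update = np.zeros((k, dim + 1))                 # per-centroid sums + counts

for j in range(k):
    mine = assign == j
    # Only ranks that own points assigned to centroid j join its dynamic group.
    group = comm.Split(0 if mine.any() else MPI.UNDEFINED, rank)
    if group != MPI.COMM_NULL:
        local = np.append(X[mine].sum(axis=0), mine.sum())
        total = np.empty_like(local)
        group.Allreduce(local, total)           # reduction within the group only
        if group.Get_rank() == 0:
            update[j] = total                   # group root records the result
        group.Free()

# One small global sum makes every rank's centroid replica consistent
# (a simplification; the paper's protocol avoids global patterns entirely).
update = comm.allreduce(update)
for j in range(k):
    if update[j, -1] > 0:
        C[j] = update[j, :dim] / update[j, -1]
```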

Relevance:

30.00%

Publisher:

Abstract:

We perform simulations of several convective events over the southern UK with the Met Office Unified Model (UM) at horizontal grid lengths ranging from 1.5 km to 200 m. Comparing the simulated storms for these events with the Met Office rainfall radar network allows us to apply a statistical approach to evaluate the properties and evolution of the simulated storms over a range of conditions. Here we present results comparing the storm morphology in the model and in reality, which show that the simulated storms become smaller as grid length decreases, and that the grid length that best fits the observations changes with the size of the observed cells. We investigate the sensitivity of storm morphology in the model to the mixing length used in the subgrid turbulence scheme. As the subgrid mixing length is decreased, the number of small storms with high area-averaged rain rates increases. We show that by changing the mixing length we can produce a lower-resolution simulation with morphologies similar to those of a higher-resolution simulation.
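Storm-morphology statistics of this kind are commonly extracted by thresholding a rain-rate field and labelling connected regions as cells; below is a minimal sketch with a synthetic field and an illustrative threshold, not necessarily the authors' exact procedure:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

rng = np.random.default_rng(0)
rain = gaussian_filter(rng.gamma(0.3, 4.0, (400, 400)), sigma=3)  # stand-in field

threshold = rain.mean() + 2 * rain.std()     # illustrative rain-rate threshold
cells, n_cells = label(rain >= threshold)    # connected rainy regions = "storms"
sizes = np.bincount(cells.ravel())[1:]       # pixel area of each labelled cell

print(f"{n_cells} cells; median area {np.median(sizes):.0f} px, "
      f"max {sizes.max()} px")
# Repeating this for each grid length / mixing length yields the cell-size
# distributions that are compared against the radar observations above.
```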

Relevance:

30.00%

Publisher:

Abstract:

Autism Spectrum Disorder (ASD) is diagnosed on the basis of behavioral symptoms, but cognitive abilities may also be useful in characterizing individuals with ASD. One hundred seventy-eight high-functioning male adults, half with ASD and half without, completed tasks assessing IQ, a broad range of cognitive skills, and autistic and comorbid symptomatology. The aims of the study were, first, to determine whether significant differences existed between cases and controls on cognitive tasks, and whether cognitive profiles, derived using a multivariate classification method with data from multiple cognitive tasks, could distinguish between the two groups; second, to establish whether cognitive skill level was correlated with degree of autistic symptom severity; third, to establish whether cognitive skill level was correlated with degree of comorbid psychopathology; and fourth, to compare the cognitive characteristics of individuals with Asperger Syndrome (AS) and high-functioning autism (HFA). After controlling for IQ, the ASD and control groups scored significantly differently on tasks of social cognition, motor performance, and executive function (Ps < 0.05). To investigate cognitive profiles, 12 variables were entered into a support vector machine (SVM), which achieved good classification accuracy (81%) at a level significantly better than chance (P < 0.0001). After correcting for multiple correlations, there were no significant associations between cognitive performance and severity of either autistic or comorbid symptomatology. There were no significant differences between the AS and HFA groups on the cognitive tasks. Cognitive classification models could be a useful aid to the diagnostic process when used in conjunction with other data sources, including clinical history.
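The classification step maps onto a standard SVM workflow; the sketch below substitutes synthetic data for the 12 cognitive variables and shows the kind of cross-validated accuracy estimate the 81% figure refers to (the kernel choice and number of CV folds are assumptions):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, n_features = 178, 12                    # participants x cognitive variables
X = rng.normal(size=(n, n_features))
y = np.repeat([0, 1], n // 2)              # 0 = control, 1 = ASD
X[y == 1] += 0.5                           # synthetic group difference

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=10)  # cross-validated accuracy
print(f"mean accuracy: {scores.mean():.0%}")
```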