961 results for transitive calibration


Relevance: 10.00%

Abstract:

B. cereus is one of the most frequently occurring bacteria in foods. It produces several heat-labile enterotoxins and one stable non-protein toxin, cereulide (emetic toxin), which may be pre-formed in food. Cereulide is a heat-stable peptide whose structure and mechanism of action were elucidated in the past decade. Until this work, the detection of cereulide relied on biological assays. With my mentors, I developed the first quantitative chemical assay for cereulide. The assay is based on liquid chromatography (HPLC) combined with ion trap mass spectrometry, and the calibration is done with valinomycin and purified cereulide. To detect and quantitate valinomycin and cereulide, their ammonium [NH4+] adducts, m/z 1128.9 and m/z 1171 respectively, were used. This was a breakthrough in cereulide research and became a very powerful investigative tool. It made it possible to prove for the first time that the toxin produced by B. cereus in heat-treated food caused human illness. Until this thesis work (Paper II), cereulide-producing B. cereus strains were believed to represent a homogeneous group of clonal strains. The cereulide-producing strains investigated in those studies originated mostly from food poisoning incidents. We used strains of many origins and analyzed them using a polyphasic approach. We found that cereulide-producing B. cereus strains are genetically and biologically more diverse than assumed in earlier studies. The strains diverge in the adenylate kinase (adk) gene (two sequence types), in ribopatterns obtained with EcoRI and PvuII (three patterns), and in tyrosine decomposition, haemolysis and lecithin hydrolysis (two phenotypes). Our study was the first demonstration of diversity within the cereulide-producing strains of B. cereus. To manage the risk of cereulide production in food, understanding is needed of the factors that may upregulate cereulide production in a given food matrix and of the environmental factors affecting it. As a contribution in this direction, we adjusted the growth environment and measured cereulide production by strains selected for diversity. The temperature range where cereulide is produced was narrower than that for growth for most of the producer strains. Most strains produced the most cereulide at room temperature (20-23 °C). Exceptions were two faecal isolates, which produced the same amount of cereulide from 23 °C up to 39 °C. We also found that at 37 °C the growth medium best supporting cereulide production differed from that at room temperature. Food composition and temperature may thus be key to understanding cereulide production in foods as well as in the gut. We investigated the contents of [K+], [Na+] and amino acids in six growth media. Statistical evaluation indicated a significant positive correlation between the [K+]:[Na+] ratio and the production of cereulide, but only when the concentrations of glycine and [Na+] were constant. Of the amino acids, only glycine correlated positively with high cereulide production. Glycine is used worldwide as a food additive (E 640), flavor modifier, humectant and acidity regulator, and is permitted in European Union countries, with no regulatory quantitative limitation, in most types of foods. B. subtilis group members are endospore-forming bacteria ubiquitous in the environment, similar to B. cereus in this respect. Bacillus species other than B. cereus have only sporadically been identified as causative agents of food-borne illnesses.
We found (Paper IV) that food-borne isolates of B. subtilis and B. mojavensis produced amylosin. It is possible that amylosin was the agent responsible for the food-borne illness, since no other toxic substance was found in the strains. This is the first report of amylosin production by strains isolated from food. We found that the temperature requirement for amylosin production was higher for the B. subtilis strain F 2564/96, a mesophilic producer, than for the B. mojavensis strains eela 2293 and B 31, psychrotolerant producers. We also found that an atmosphere low in oxygen did not prevent the production of amylosin. Ready-to-eat foods packaged in a micro-aerophilic atmosphere and/or stored at temperatures above 10 °C may thus pose a risk when toxigenic strains of B. subtilis or B. mojavensis are present.
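As an illustration of the quantitation step in such an assay, the sketch below fits a linear external calibration curve from standard solutions and converts the extracted-ion peak area of an unknown into a cereulide amount. The standard concentrations, peak areas and function names are hypothetical placeholders, not values from the thesis.

```python
import numpy as np

# Hypothetical external calibration for cereulide quantitation by HPLC-MS.
# Peak areas would be extracted-ion signals of the [M+NH4]+ adduct
# (m/z 1171 for cereulide); all numbers below are illustrative only.
std_conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0])              # ng injected (assumed)
std_area = np.array([1.1e4, 2.3e4, 4.4e4, 11.2e4, 22.5e4])   # peak areas (assumed)

# Ordinary least-squares line: area = slope * conc + intercept
slope, intercept = np.polyfit(std_conc, std_area, 1)

def quantify(area):
    """Convert a measured peak area into an amount using the calibration line."""
    return (area - intercept) / slope

sample_area = 6.8e4                                           # unknown sample (assumed)
print(f"Estimated cereulide: {quantify(sample_area):.2f} ng")
```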

Relevance: 10.00%

Abstract:

Hydrogen cyanide (HCN) is a toxic chemical that can cause mild to severe reactions in animals grazing forage sorghum. Developing technologies to monitor the level of HCN in the growing crop would benefit graziers, so that they can move cattle into paddocks with acceptable levels of HCN. In this study, we developed near-infrared spectroscopy (NIRS) calibrations to estimate HCN in forage sorghum and hay. The full NIRS spectral range (400-2498 nm) was used, as well as specific sub-ranges, i.e., visible (400-750 nm), shortwave (800-1100 nm) and near-infrared (NIR) (1100-2498 nm). Using the full-spectrum approach and partial least squares (PLS), the calibration produced a coefficient of determination (R²) = 0.838 and standard error of cross-validation (SECV) = 0.040%, while the validation set had an R² = 0.824 with a low standard error of prediction (SEP = 0.047%). When using a multiple linear regression (MLR) approach, the best model (NIR spectra) produced an R² = 0.847 and standard error of calibration (SEC) = 0.050%, and an R² = 0.829 and SEP = 0.057% for the validation set. The MLR models built from these spectral regions all used nine wavelengths. Two specific wavelengths, 2034 and 2458 nm, were of interest, with the former associated with the C=O carbonyl stretch and the latter with C-N-C stretching. The most accurate PLS and MLR models produced ratios of standard error of prediction to standard deviation of 3.4 and 3.0, respectively, suggesting that the calibrations could be used for screening breeding material. The results indicated that it should be feasible to develop calibrations using PLS or MLR models for a number of users, including breeding programs screening for genotypes with low HCN, as well as graziers monitoring crop status to improve grazing efficiency.
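A minimal sketch of the full-spectrum PLS approach described above is given below: it fits a partial least-squares model to NIR spectra and reports R² and SEP on a held-out validation set. The arrays, number of latent variables and split are placeholders; the study's actual spectral preprocessing is not reproduced.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# X: absorbance spectra (samples x wavelengths), y: reference HCN values (%).
# Placeholder data stand in for real spectra and laboratory reference values.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 700))       # placeholder spectra
y = rng.uniform(0.0, 0.3, size=120)   # placeholder HCN concentrations (%)

X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=10)  # number of latent variables is an assumption
pls.fit(X_cal, y_cal)
y_pred = pls.predict(X_val).ravel()

print("R2  =", round(r2_score(y_val, y_pred), 3))
print("SEP =", round(mean_squared_error(y_val, y_pred) ** 0.5, 4), "%")
```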

Relevance: 10.00%

Abstract:

This thesis consists of two parts. In the first part, we performed a single-molecule force-extension measurement with 10 kb long DNA molecules from phage λ to validate the calibration and single-molecule capability of our optical tweezers instrument. Fitting the worm-like chain interpolation formula to the data revealed that ca. 71% of the DNA tethers featured a contour length within ±15% of the expected value (3.38 µm). Only 25% of the tethers had a persistence length between 30 and 60 nm, whereas the accepted value lies between 40 and 60 nm. In the second part, we designed and built a precise temperature controller to remove the thermal fluctuations that cause drifting of the optical trap. The controller uses feed-forward and PID (proportional-integral-derivative) feedback to achieve 1.58 mK precision and 0.3 K absolute accuracy. During a 5 min test run it reduced drifting of the trap from 1.4 nm/min in open loop to 0.6 nm/min in closed loop.
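For reference, a minimal sketch of fitting the Marko-Siggia worm-like-chain interpolation formula to force-extension data with SciPy is shown below; the data arrays, noise level and starting values are hypothetical, and the instrument's actual analysis pipeline may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

kBT = 4.11  # thermal energy at roughly room temperature, in pN*nm

def wlc_force(x, Lp, Lc):
    """Marko-Siggia interpolation: force (pN) vs extension x (nm)
    for persistence length Lp (nm) and contour length Lc (nm)."""
    r = x / Lc
    return (kBT / Lp) * (0.25 / (1.0 - r) ** 2 - 0.25 + r)

# Placeholder force-extension data (extension in nm, force in pN).
ext = np.linspace(500, 3200, 50)
force = wlc_force(ext, 45.0, 3380.0) + np.random.default_rng(1).normal(0, 0.2, ext.size)

# Initial guesses: Lp ~ 50 nm, Lc ~ 3.4 um for 10 kb dsDNA.
popt, pcov = curve_fit(wlc_force, ext, force, p0=[50.0, 3400.0])
print(f"Lp = {popt[0]:.1f} nm, Lc = {popt[1] / 1000:.2f} um")
```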

Relevance: 10.00%

Abstract:

Aims: We combine measurements of weak gravitational lensing from the CFHTLS-Wide survey, type Ia supernovae from CFHT SNLS and CMB anisotropies from WMAP5 to obtain joint constraints on cosmological parameters, in particular the dark-energy equation-of-state parameter w. We assess the influence of systematics in the data on the results and look for possible correlations with cosmological parameters. Methods: We implemented an MCMC algorithm to sample the parameter space of a flat CDM model with a dark-energy component of constant w. Systematics in the data are parametrised and included in the analysis. We determine the influence of photometric calibration of the SNIa data on cosmological results by calculating the response of the distance modulus to photometric zero-point variations. The weak-lensing data set is tested for anomalous field-to-field variations and for a systematic shape-measurement bias for high-redshift galaxies. Results: Ignoring photometric uncertainties for SNLS biases cosmological parameters by at most 20% of the statistical errors, using supernovae alone; the parameter uncertainties are underestimated by 10%. The weak-lensing field-to-field variance between 1 deg² MegaCam pointings is 5-15% higher than predicted from N-body simulations. We find no bias in the lensing signal at high redshift, within the framework of a simple model and marginalising over cosmological parameters. Assuming a systematic underestimation of the lensing signal, the normalisation increases by up to 8%. Combining all three probes we obtain -0.10 < 1 + w < 0.06 at 68% confidence (-0.18 < 1 + w < 0.12 at 95%), including systematic errors. Our results are therefore consistent with a cosmological constant. Systematics in the data increase the error bars by up to 35%; the best-fit values change by less than 0.15.
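The parameter sampling described above can be illustrated with a toy Metropolis-Hastings sampler over (Omega_m, w) and a stand-in Gaussian likelihood; this is not the authors' MCMC code, data or likelihoods, only a sketch of the sampling scheme with assumed fiducial values and prior ranges.

```python
import numpy as np

rng = np.random.default_rng(42)

def log_likelihood(theta):
    """Stand-in Gaussian likelihood around assumed fiducial (Omega_m, w) values.
    A real analysis would evaluate lensing, SNIa and CMB likelihoods here."""
    om, w = theta
    return -0.5 * (((om - 0.27) / 0.03) ** 2 + ((w + 1.0) / 0.08) ** 2)

def in_prior(theta):
    om, w = theta
    return 0.1 < om < 0.6 and -2.0 < w < 0.0   # flat priors (assumed ranges)

theta = np.array([0.3, -0.9])
chain = []
for _ in range(20000):
    proposal = theta + rng.normal(0, [0.01, 0.03])   # random-walk proposal
    if in_prior(proposal) and np.log(rng.uniform()) < (
        log_likelihood(proposal) - log_likelihood(theta)
    ):
        theta = proposal
    chain.append(theta.copy())

chain = np.array(chain[5000:])                       # discard burn-in
lo, hi = np.percentile(chain[:, 1], [16, 84])
print(f"w = {chain[:, 1].mean():.3f} (68% interval {lo:.3f} .. {hi:.3f})")
```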

Relevance: 10.00%

Abstract:

BACKGROUND: In order to rapidly and efficiently screen potential biofuel feedstock candidates for essential traits, robust high-throughput analytical techniques must be developed and honed. The traditional methods of measuring the lignin syringyl/guaiacyl (S/G) ratio can be laborious, involve hazardous reagents, and/or be destructive. Vibrational spectroscopy can furnish high-throughput instrumentation without the limitations of the traditional techniques. Spectral data from mid-infrared, near-infrared and Raman spectroscopies were combined with S/G ratios, obtained using pyrolysis molecular beam mass spectrometry, from 245 different eucalypt and Acacia trees across 17 species. Iterations of spectral processing allowed the assembly of robust predictive models using partial least squares (PLS). RESULTS: The PLS models were rigorously evaluated using three different randomly generated calibration and validation sets for each spectral processing approach. Root-mean-square errors of prediction for the validation sets were lowest for models built from Raman (0.13 to 0.16) and mid-infrared (0.13 to 0.15) spectral data, while near-infrared spectroscopy led to more erroneous predictions (0.18 to 0.21). Correlation coefficients (r) for the validation sets followed a similar pattern: Raman (0.89 to 0.91), mid-infrared (0.87 to 0.91) and near-infrared (0.79 to 0.82). These statistics indicate that Raman and mid-infrared spectroscopy led to the most accurate predictions of S/G ratio in a diverse set of feedstocks. CONCLUSION: Eucalypts present an attractive option for biofuel and biochemical production. Given the assortment of over 900 different species of Eucalyptus and Corymbia, in addition to various species of Acacia, it is necessary to isolate those possessing ideal biofuel traits. This research has demonstrated the validity of vibrational spectroscopy to efficiently partition different potential biofuel feedstocks according to lignin S/G ratio, significantly reducing experiment and analysis time and expense while providing non-destructive, accurate, global predictive models encompassing a diverse array of feedstocks.
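The validation statistics quoted above (root-mean-square error of prediction and correlation coefficient r for a held-out set) can be computed as in this short sketch; the observed and predicted S/G ratios are invented placeholders, not study data.

```python
import numpy as np

# Placeholder observed vs. model-predicted S/G ratios for one validation set.
observed = np.array([1.9, 2.3, 2.8, 3.1, 2.0, 2.6, 3.4, 2.2])
predicted = np.array([2.0, 2.2, 2.9, 3.0, 2.1, 2.5, 3.2, 2.4])

rmsep = np.sqrt(np.mean((predicted - observed) ** 2))   # error of prediction
r = np.corrcoef(observed, predicted)[0, 1]              # correlation coefficient

print(f"RMSEP = {rmsep:.3f}, r = {r:.3f}")
```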

Relevance: 10.00%

Abstract:

Recent epidemiological studies have shown a consistent association of the mass concentration of urban air thoracic (PM10) and fine (PM2.5) particles with mortality and morbidity among cardiorespiratory patients. However, the chemical characteristics of different particulate size ranges and the biological mechanisms responsible for these adverse health effects are not well known. The principal aims of this thesis were to validate a high volume cascade impactor (HVCI) for the collection of particulate matter for physicochemical and toxicological studies, and to perform an in-depth chemical and source characterisation of samples collected during different pollution situations. The particulate samples were collected with the HVCI, virtual impactors and a Berner low-pressure impactor in six European cities: Helsinki, Duisburg, Prague, Amsterdam, Barcelona and Athens. The samples were analysed for particle mass, common ions, total and water-soluble elements, as well as elemental and organic carbon. Laboratory calibration and field comparisons indicated that the HVCI can provide unique large-capacity, high-efficiency sampling of size-segregated aerosol particles. The cutoff sizes of the recommended HVCI configuration were 2.4, 0.9 and 0.2 μm. The HVCI mass concentrations were in good agreement with the reference methods, but the chemical composition of especially the fine particulate samples showed some differences. This implies that the chemical characterisation of the exposure variable in toxicological studies needs to be done on the same HVCI samples as are used in cell and animal studies. The data from parallel, low-volume reference samplers provide valuable additional information for chemical mass closure and source assessment. The major components of PM2.5 in the virtual impactor samples were carbonaceous compounds, secondary inorganic ions and sea salt, whereas those of coarse particles (PM2.5-10) were soil-derived compounds, carbonaceous compounds, sea salt and nitrate. The major and minor components together accounted for 77-106% and 77-96% of the gravimetrically measured masses of fine and coarse particles, respectively. Relatively large differences between sampling campaigns were observed in the organic carbon content of the PM2.5 samples as well as in the mineral composition of the PM2.5-10 samples. A source assessment based on chemical tracers suggested clear differences in the dominant sources (e.g. traffic, residential heating with solid fuels, metal industry plants, regional or long-range transport) between the sampling campaigns. In summary, the field campaigns exhibited different profiles with regard to particulate sources, size distribution and chemical composition, thus providing a highly useful setup for toxicological studies on the size-segregated HVCI samples.
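As a sketch of the chemical mass closure calculation mentioned above, the snippet below sums the analysed components of one PM2.5 sample and expresses them as a percentage of the gravimetric mass; the component names, conversion choices and values are invented for illustration.

```python
# Hypothetical PM2.5 component concentrations (ug/m3) for one sample.
components = {
    "organic matter": 4.2,             # OC times a conversion factor (assumed applied)
    "elemental carbon": 0.9,
    "secondary inorganic ions": 3.5,   # sulphate + nitrate + ammonium
    "sea salt": 0.6,
    "soil-derived material": 0.4,
    "trace elements": 0.2,
}
gravimetric_mass = 10.1                # ug/m3, weighed mass concentration (assumed)

identified = sum(components.values())
closure = 100.0 * identified / gravimetric_mass
print(f"Identified mass: {identified:.1f} ug/m3 ({closure:.0f}% closure)")
```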

Relevance: 10.00%

Abstract:

Rarely is it possible to obtain absolute numbers in free-ranging populations, and although various direct and indirect methods are used to estimate abundance, few are validated against populations of known size. In this paper, we apply grounding, calibration and verification methods, used to validate mathematical models, to methods of estimating relative abundance. To illustrate how this might be done, we consider and evaluate the widely applied passive tracking index (PTI) methodology. Using published data, we examine the rationale of PTI methodology, how animal activity and abundance are conceptually related, and whether alternative methods are subject to similar biases or produce similar abundance estimates and trends. We then tune the method against populations representing the range of densities likely to be encountered in the field. Finally, we compare PTI trends against the prediction that adjacent populations of the same species will have similar abundance values and trends in activity. We show that while PTI abundance estimates are subject to environmental and behavioural stochasticity peculiar to each species, the PTI method and its associated variance estimate showed a high probability of detection, high precision of abundance values and, generally, low variability between surveys. We therefore suggest that the PTI method, applied using this procedure and for these species, provides a sensitive and credible index of abundance. This same or a similar validation approach can and should be applied to alternative relative-abundance methods in order to demonstrate their credibility and justify their use.
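One common formulation of a passive tracking index is the mean number of track intrusions per plot per night, with uncertainty taken from the between-night variation; the sketch below follows that formulation with invented counts and is not necessarily the exact index or variance estimator used in the paper.

```python
import numpy as np

# counts[i, j]: track intrusions recorded on sand plot i during night j (invented data).
counts = np.array([
    [0, 1, 2, 0],
    [1, 0, 1, 1],
    [3, 2, 1, 2],
    [0, 0, 1, 0],
])

nightly_means = counts.mean(axis=0)        # mean intrusions per plot, for each night
pti = nightly_means.mean()                 # index: mean of the nightly means
se = nightly_means.std(ddof=1) / np.sqrt(nightly_means.size)  # between-night SE

print(f"PTI = {pti:.2f} intrusions per plot-night (SE {se:.2f})")
```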

Relevance: 10.00%

Abstract:

Masonry under compression is affected by the properties of its constituents and their interfaces. In spite of extensive investigations of the behaviour of masonry under compression, the information in the literature cannot be regarded as comprehensive because of the ongoing introduction of new-generation products – for example, polymer-modified thin-layer mortared masonry and drystack masonry. As comprehensive experimental studies are very expensive, an analytical model inspired by damage mechanics is developed in this paper and applied to the prediction of the compressive behaviour of masonry. The model incorporates a parabolic, progressively softening stress-strain curve for the units and a progressively stiffening stress-strain curve, up to a threshold strain, for the combined mortar and unit-mortar interfaces. The model simulates the mutual constraints imposed by each of these constituents through their respective tensile and compressive behaviour and volumetric changes. The advantage of the model is that it requires only the properties of the constituents, treats masonry as a continuum and computes the average properties of composite masonry prisms/wallettes; it does not require discretisation of the prism or wallette as finite element methods do. The capability of the model to capture the phenomenological behaviour of masonry, with an appropriate elastic response, stiffness degradation and post-peak softening, is presented through numerical examples. The fitting of the experimental data to the model parameters is demonstrated through calibration against selected test data on units and mortar from the literature; the calibrated model is shown to predict quite well the experimentally determined responses of masonry built using the corresponding units and mortar. Through a series of sensitivity studies, the model is also shown to predict the masonry strength appropriately for changes to the properties of the units and mortar, the mortar joint thickness and the ratio of unit height to mortar joint thickness. The unit strength is shown to affect the masonry strength significantly. Although the mortar strength has only a marginal effect, reduction in mortar joint thickness is shown to have a profound effect on the masonry strength. The results obtained from the model are compared with the various provisions of the Australian Masonry Structures Standard AS3700 (2011) and Eurocode 6.
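The "parabolic progressively softening" unit law can be illustrated with a generic Hognestad-type parabola; this is only an illustrative shape with assumed peak stress and peak strain, not the specific constitutive law calibrated in the paper.

```python
import numpy as np

def parabolic_unit_stress(eps, f_c=20.0, eps_0=0.002):
    """Generic parabolic ascending/softening law for the unit (MPa):
    sigma = f_c * [2(eps/eps_0) - (eps/eps_0)^2]; f_c and eps_0 are assumed values."""
    ratio = eps / eps_0
    return f_c * (2.0 * ratio - ratio ** 2)

# Stress rises to f_c at eps_0 and then softens beyond the peak strain.
for e in np.linspace(0.0, 0.0035, 8):
    print(f"strain {e:.4f} -> stress {parabolic_unit_stress(e):6.2f} MPa")
```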

Relevance: 10.00%

Abstract:

Comprehensive two-dimensional gas chromatography (GC×GC) offers enhanced separation efficiency, reliability in qualitative and quantitative analysis, the capability to detect low quantities, and information on the whole sample and its components. These features are essential in the analysis of complex samples, in which the number of compounds may be large or the analytes of interest are present at trace level. This study involved the development of instrumentation, data analysis programs and methodologies for GC×GC and their application in studies on qualitative and quantitative aspects of GC×GC analysis. Environmental samples were used as model samples. Instrumental development comprised the construction of three versions of a semi-rotating cryogenic modulator, in which modulation was based on two-step cryogenic trapping with continuously flowing carbon dioxide as coolant. Two-step trapping was achieved by rotating the nozzle spraying the carbon dioxide with a motor. The fastest rotation and highest modulation frequency were achieved with a permanent-magnet motor, and modulation was most accurate when the motor was controlled with a microcontroller containing a quartz crystal. Heated wire resistors were unnecessary for the desorption step when liquid carbon dioxide was used as coolant. With the modulators developed in this study, the narrowest peaks were 75 ms at base. Three data analysis programs were developed, allowing basic, comparison and identification operations. The basic operations enabled the visualisation of two-dimensional plots and the determination of retention times, peak heights and volumes. The overlaying feature in the comparison program allowed easy comparison of 2D plots. An automated identification procedure based on mass spectra and retention parameters allowed the qualitative analysis of data obtained by GC×GC and time-of-flight mass spectrometry. In the methodological development, sample preparation (extraction and clean-up) and GC×GC methods were developed for the analysis of atmospheric aerosol and sediment samples. Dynamic sonication-assisted extraction was well suited for atmospheric aerosols collected on a filter. A clean-up procedure utilising normal-phase liquid chromatography with ultraviolet detection worked well in removing aliphatic hydrocarbons from a sediment extract. GC×GC with flame ionisation detection or quadrupole mass spectrometry provided good reliability in the qualitative analysis of target analytes. However, GC×GC with time-of-flight mass spectrometry was needed for the analysis of unknowns. The automated identification procedure that was developed was efficient in the analysis of large data files, but manual searching and analyst knowledge remain invaluable as well. Quantitative analysis was examined in terms of calibration procedures and the effect of matrix compounds on GC×GC separation. In addition to calibration in GC×GC with summed peak areas or peak volumes, simplified area calibration based on the normal GC signal can be used to quantify compounds in samples analysed by GC×GC, so long as certain qualitative and quantitative prerequisites are met. In a study of the effect of matrix compounds on GC×GC separation, it was shown that the quality of the separation of PAHs is not significantly disturbed by the amount of matrix, and that quantitativeness suffers only slightly in the presence of matrix when the amount of target compounds is low. The benefits of GC×GC in the analysis of complex samples easily outweigh the minor drawbacks of the technique.
The developed instrumentation and methodologies performed well for environmental samples, but they could also be applied to other complex samples.
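The summed-peak-area calibration mentioned above can be sketched as follows: the modulated peak slices belonging to one analyte are summed and compared against a linear calibration built the same way from standards; all concentrations, areas and slice counts are placeholders.

```python
import numpy as np

def summed_area(modulated_areas):
    """Total GCxGC response of one analyte = sum of its modulated peak slices."""
    return float(np.sum(modulated_areas))

# Placeholder calibration: summed areas of a standard at known concentrations.
std_conc = np.array([0.1, 0.5, 1.0, 2.0])             # ug/mL (assumed)
std_sum = np.array([2.0e3, 9.8e3, 2.05e4, 4.1e4])     # summed slice areas (assumed)
slope, intercept = np.polyfit(std_conc, std_sum, 1)

sample_slices = [1.2e3, 5.6e3, 4.9e3, 1.1e3]          # slices of one analyte peak
conc = (summed_area(sample_slices) - intercept) / slope
print(f"Estimated concentration: {conc:.2f} ug/mL")
```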

Relevance: 10.00%

Abstract:

Multi-lake and intralake datasets of fossil midge assemblages in the surface sediments of small shallow lakes in Finland were studied to determine the most important environmental factors explaining trends in midge distribution and abundance. The aim was to develop palaeoenvironmental calibration models for the most important environmental variables for the purpose of reconstructing past environmental conditions. The developed models were applied to three high-resolution fossil midge stratigraphies from southern and eastern Finland to interpret environmental variability over the past 2000 years, with special focus on the Medieval Climate Anomaly (MCA), the Little Ice Age (LIA) and recent anthropogenic changes. The midge-based results were compared with the physical properties of the sediment, historical evidence and environmental reconstructions based on diatoms (Bacillariophyta), cladocerans (Crustacea: Cladocera) and tree rings. The results showed that the most important environmental factor controlling midge distribution and abundance along a latitudinal gradient in Finland was the mean July air temperature (TJul). However, when the dataset was environmentally screened to include only pristine lakes, water depth at the sampling site became more important. Furthermore, when the dataset was geographically restricted to southern Finland, hypolimnetic oxygen conditions became the dominant environmental factor. The results from an intralake dataset from eastern Finland showed that the most important environmental factors controlling midge distribution within a lake basin were river contribution, water depth and submerged vegetation patterns. In addition, the results of the intralake dataset showed that the fossil midge assemblages represent fauna that lived in close proximity to the sampling sites, thus enabling the exploration of within-lake gradients in midge assemblages. Importantly, this within-lake heterogeneity in midge assemblages may affect midge-based temperature estimations, because samples taken from the deepest point of a lake basin may yield considerably colder inferred temperatures than expected, as shown by the present test results. Therefore, it is suggested here that samples in fossil midge studies involving shallow boreal lakes should be taken from the sublittoral, where the assemblages are most representative of the whole-lake fauna. Transfer functions between midge assemblages and the environmental forcing factors that were significantly related to the assemblages, including mean air TJul, water depth, hypolimnetic oxygen, stream flow and distance to littoral vegetation, were developed using weighted averaging (WA) and weighted averaging-partial least squares (WA-PLS) techniques, which outperformed all the other numerical approaches tested. Application of the models in downcore studies showed mostly consistent trends. Based on the present results, which agree with previous studies and historical evidence, the Medieval Climate Anomaly between ca. 800 and 1300 AD in eastern Finland was characterized by warm temperature conditions and dry summers, but probably humid winters. The Little Ice Age (LIA) prevailed in southern Finland from ca. 1550 to 1850 AD, with the coldest conditions occurring at ca. 1700 AD, whereas in eastern Finland the cold conditions prevailed over a longer period, from ca. 1300 until 1900 AD. The recent climatic warming was clearly represented in all of the temperature reconstructions.
In terms of long-term climatology, the present results support the concept that the North Atlantic Oscillation (NAO) index correlates positively with winter precipitation and annual temperature and negatively with summer precipitation in eastern Finland. In general, the results indicate a relatively warm climate with dry summers but snowy winters during the MCA, and a cool climate with rainy summers and dry winters during the LIA. The results of the present reconstructions, and the forthcoming applications of the models, can be used in assessments of long-term environmental dynamics to refine the understanding of past environmental reference conditions and natural variability required by environmental scientists, ecologists and policy makers when making decisions concerning presently occurring global, regional and local changes. The developed midge-based models for temperature, hypolimnetic oxygen, water depth, littoral vegetation shift and stream flow presented in this thesis are available for scientific use on request.
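A minimal sketch of the weighted-averaging (WA) step is shown below: taxon optima are abundance-weighted means of the environmental variable in the calibration set, and a reconstruction is the abundance-weighted mean of those optima; deshrinking and the WA-PLS refinement are omitted, and all arrays are invented.

```python
import numpy as np

# Calibration set: relative abundances (lakes x taxa) and observed mean July
# air temperature per lake; all values are invented for illustration.
abundance = np.array([
    [0.50, 0.30, 0.20],
    [0.10, 0.60, 0.30],
    [0.05, 0.25, 0.70],
])
temperature = np.array([15.0, 13.0, 11.0])

# WA regression: each taxon optimum is its abundance-weighted mean temperature.
optima = (abundance * temperature[:, None]).sum(axis=0) / abundance.sum(axis=0)

# WA calibration: inferred temperature for a fossil sample is the weighted mean of optima.
fossil_sample = np.array([0.20, 0.50, 0.30])
inferred = (fossil_sample * optima).sum() / fossil_sample.sum()
print("Taxon optima:", np.round(optima, 2), "-> inferred TJul:", round(inferred, 2), "°C")
```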

Relevance: 10.00%

Abstract:

In this work, both documentary and natural proxy data have been used to improve the accuracy of palaeoclimatic knowledge in Finland since the 18th century. Early meteorological observations from Turku (1748-1800) were first analyzed as a potential source of information on climate variability. The reliability of the calculated mean temperatures was evaluated by comparing them with contemporary temperature records from Stockholm, St. Petersburg and Uppsala. The resulting monthly, seasonal and yearly mean temperatures from 1748 to 1800 were compared with present-day mean values (1961-1990): the comparison suggests that the winters of the period 1749-1800 were 0.8 °C colder than today, while the summers were 0.4 °C warmer. Over the same period, springs were 0.9 °C and autumns 0.1 °C colder than today. Despite their uncertainties when compared with modern meteorological data, early temperature measurements offer direct, daily information about the weather for all months of the year, in contrast with other proxies. Secondly, early meteorological observations from Tornio (1737-1749) and Ylitornio (1792-1838) were used to study the temporal behaviour of the climate-tree growth relationship during the past three centuries in northern Finland. The analyses showed that the correlations between ring widths and mid-summer (July) temperatures did not vary significantly as a function of time. Early summer (June) and late summer (August) mean temperatures were secondary to mid-summer temperatures in controlling radial growth. According to the dataset used, there was no clear signature of temporally reduced sensitivity of Scots pine ring widths to mid-summer temperatures over the periods of early and modern meteorological observations. Thirdly, plant phenological data together with tree rings from south-west Finland since 1750 were examined as a palaeoclimate indicator. The information from the fragmentary, partly overlapping and partly nonsystematically biased plant phenological records of 14 different phenomena was combined into one continuous time series of phenological indices. The indices were found to be reliable indicators of February to June temperature variations. In contrast, there was no correlation between the phenological indices and the precipitation data. Moreover, the correlations between the studied tree rings and spring temperatures varied as a function of time, and hence their use in palaeoclimate reconstruction is questionable. The use of the present tree-ring datasets for palaeoclimate purposes may become possible after the application of more sophisticated calibration methods. Climate variability since the 18th century is perhaps best seen in the fourth paper, a multiproxy spring temperature reconstruction for south-west Finland. With the help of transfer functions, an attempt was made to utilize both documentary and natural proxies. The reconstruction was verified with statistics showing a high degree of agreement between the reconstructed and observed temperatures. According to the proxies and modern meteorological observations from Turku, springs have become warmer, featuring a warming trend since around the 1850s. Over the period from 1750 to around 1850, springs featured larger multidecadal low-frequency variability, as well as a smaller range of annual temperature variations. The coldest springtimes occurred around the 1840s and 1850s and in the first decade of the 19th century.
Particularly warm periods occurred in the 1760s, 1790s, 1820s, 1930s, 1970s and from 1987 onwards, although this last period also included cold springs, such as those of 1994 and 1996. On the basis of the available material, long-term temperature changes have been related to changes in atmospheric circulation, such as the North Atlantic Oscillation (February-June).

Relevance: 10.00%

Abstract:

Remote sensing provides methods to infer land cover information over large geographical areas at a variety of spatial and temporal resolutions. Land cover is input data for a range of environmental models, and information on land cover dynamics is required for monitoring the implications of global change. Such data are also essential in support of environmental management and policymaking. Boreal forests are a key component of the global climate and a major sink of carbon. The northern latitudes are expected to experience disproportionate and rapid warming, which can have a major impact on vegetation at forest limits. This thesis examines the use of optical remote sensing for estimating aboveground biomass, leaf area index (LAI), tree cover and tree height in the boreal forests and the tundra-taiga transition zone in Finland. Continuous fields of forest attributes are required, for example, to improve the mapping of forest extent. The thesis focuses on studying the feasibility of satellite data at multiple spatial resolutions, assessing the potential of multispectral, multiangular and multitemporal information, and providing a regional evaluation of global land cover data. Preprocessed ASTER, MISR and MODIS products are the principal satellite data. The reference data consist of field measurements, forest inventory data and fine-resolution land cover maps. Fine-resolution studies demonstrate that statistical relationships between biomass and satellite data are relatively strong in single-species, low-biomass mountain birch forests in comparison with higher-biomass coniferous stands. The combination of forest stand data and fine-resolution ASTER images provides a method for biomass estimation using medium-resolution MODIS data. Multiangular data improve the accuracy of land cover mapping in the sparsely forested tundra-taiga transition zone, particularly in mires. Similarly, multitemporal data improve the accuracy of coarse-resolution tree cover estimates in comparison with single-date data. Furthermore, the peak of the growing season is not necessarily the optimal time for land cover mapping in the northern boreal regions. The evaluated coarse-resolution land cover data sets have considerable shortcomings in northernmost Finland and should be used with caution in similar regions. Quantitative reference data and upscaling methods for integrating multiresolution data are required for the calibration of statistical models and the evaluation of land cover data sets. The preprocessed image products have potential for wider use, as they can considerably reduce the time and effort spent on data processing.

Relevance: 10.00%

Abstract:

Assessing the impacts of climate variability on agricultural productivity at regional, national or global scale is essential for defining adaptation and mitigation strategies. In this study we explore potential changes in spring wheat yields at Swift Current and Melfort, Canada, for different sowing windows under projected climate scenarios (the representative concentration pathways RCP4.5 and RCP8.5). First, the APSIM model was calibrated and evaluated at the study sites using data from long-term experimental field plots. Then, the impacts of changes in sowing date on final yield were assessed over the 2030-2099 period against a 1990-2009 baseline period of observed yield data, assuming that other crop management practices remained unchanged. Results showed that the performance of APSIM was quite satisfactory, with an index of agreement of 0.80, R² of 0.54, and mean absolute error (MAE) and root mean square error (RMSE) of 529 kg/ha and 1023 kg/ha, respectively (MAE = 476 kg/ha and RMSE = 684 kg/ha in the calibration phase). Under the projected climate conditions, a general trend of yield loss was observed regardless of the sowing window, ranging from -24% to -94% depending on the site and the RCP, with noticeable losses during the 2060s and beyond (increasing CO2 effects being excluded). The smallest yield losses, obtained with the earliest possible sowing date (i.e., mid-April) under the projected future climate, suggest that this option might be explored for mitigating possible adverse impacts of climate variability. Our findings could therefore serve as a basis for using APSIM as a decision support tool for adaptation/mitigation options under potential climate variability in Western Canada.
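The evaluation statistics quoted above can be computed as in the sketch below, which uses Willmott's index of agreement together with MAE and RMSE; the observed and simulated yields are placeholders, not the study data.

```python
import numpy as np

# Placeholder observed vs. APSIM-simulated spring wheat yields (kg/ha).
obs = np.array([2100.0, 3400.0, 1800.0, 2900.0, 2500.0, 3100.0])
sim = np.array([2300.0, 3100.0, 2000.0, 2700.0, 2600.0, 3500.0])

mae = np.mean(np.abs(sim - obs))
rmse = np.sqrt(np.mean((sim - obs) ** 2))

# Willmott's index of agreement:
# d = 1 - sum((sim - obs)^2) / sum((|sim - mean(obs)| + |obs - mean(obs)|)^2)
obs_mean = obs.mean()
d = 1.0 - np.sum((sim - obs) ** 2) / np.sum(
    (np.abs(sim - obs_mean) + np.abs(obs - obs_mean)) ** 2
)

print(f"MAE = {mae:.0f} kg/ha, RMSE = {rmse:.0f} kg/ha, d = {d:.2f}")
```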

Relevance: 10.00%

Abstract:

The recently established moderate-size free-piston-driven hypersonic shock tunnel HST3, along with its calibration, is described here. The extreme thermodynamic conditions prevalent behind the reflected shock wave have been utilized to study the catalytic and non-catalytic reactions of shock-heated test gases such as Ar, N2 or O2 with different materials, namely C60 carbon, zirconia and ceria-substituted zirconia. The exposed test samples were investigated using different experimental methods. These studies show the formation of carbon nitride due to the non-catalytic interaction of shock-heated nitrogen gas with a C60 carbon film. On the other hand, ZrO2 undergoes only a phase transformation from cubic to monoclinic structure, and Ce0.5Zr0.5O2 in the fluorite cubic phase changes to the pyrochlore (Ce2Zr2O7±δ) phase by releasing oxygen from the lattice through a heterogeneous catalytic surface reaction.

Relevance: 10.00%

Abstract:

The unconfined aquifer of the Continental Terminal in Niger was investigated by magnetic resonance sounding (MRS) and by 14 pumping tests in order to improve the calibration of MRS outputs at field scale. The reliability of the standard relationship used for estimating aquifer transmissivity by MRS was checked; it was found that the parametric factor can be estimated with an uncertainty ≤ 150% from a single point of calibration. The MRS water content (θMRS) was shown to be positively correlated with the specific yield (Sy), and θMRS always displayed higher values than Sy. A conceptual model was subsequently developed, based on estimated changes of the total porosity, Sy and the specific retention Sr as functions of the median grain size. The resulting relationship between θMRS and Sy showed a reasonably good fit with the experimental dataset, considering the inherent heterogeneity of the aquifer matrix (residual error ~60%). Interpreted in terms of aquifer parameters, the MRS data suggest a log-normal distribution of the permeability and a one-sided Gaussian distribution of Sy. These results demonstrate the efficiency of the MRS method for fast and low-cost prospecting of hydraulic parameters in large unconfined aquifers.
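The single-point calibration of the parametric factor can be sketched as follows, using a commonly cited MRS transmissivity estimator of the form T ≈ Cp·θMRS·Δz·T1² (an assumption here, not necessarily the exact relationship tested in the paper); all numerical values are invented.

```python
# Hypothetical single-point calibration of the MRS parametric factor Cp,
# assuming an estimator of the form T_mrs = Cp * theta_mrs * dz * T1**2.
theta_mrs = 0.18        # MRS water content (-), assumed
dz = 25.0               # saturated thickness (m), assumed
t1 = 0.25               # decay time T1 (s), assumed
t_pumping = 2.5e-2      # transmissivity from one pumping test (m2/s), assumed

cp = t_pumping / (theta_mrs * dz * t1 ** 2)   # calibrated parametric factor

# Apply the calibrated Cp to a second sounding (values also assumed).
t_estimate = cp * 0.22 * 30.0 * 0.30 ** 2
print(f"Cp = {cp:.3e}; estimated T at second site = {t_estimate:.2e} m2/s")
```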