994 results for Analysis, Aerosols, Atmosphere, Amines
Thermal decomposition of solid state compounds of lanthanide and yttrium benzoates in CO2 atmosphere
Abstract:
Solid-state Ln-Bz compounds, where Ln stands for trivalent lanthanides and Bz for benzoate, have been synthesized. Simultaneous thermogravimetric and differential thermal analysis (TG-DTA) in a CO2 atmosphere was used to study the thermal decomposition of these compounds.
Abstract:
PIXE (Particle-Induced X-ray Emission spectrometry) was used to analyse stem bark and stem wood of Scots pine, Norway spruce and Silver birch. Thick samples were irradiated, in laboratory atmosphere, with 3 MeV protons, and the beam current was measured indirectly using a photomultiplier (PM) tube. Both point scans and bulk analyses were performed with the 1 mm diameter proton beam. For bulk analyses, whole bark and sectors of discs of the stem wood were dry-ashed at 550 °C. The ashes were homogenised by shaking and prepared as target pellets for PIXE analysis. This procedure generated representative samples, and the enrichment also enabled quantification of some additional trace elements. The ash contents obtained as a by-product of the sample preparation also proved to be of great importance in the evaluation of results in environmental studies. Spot scans from the pith of pine wood outwards clearly showed the highest concentrations of manganese, calcium and zinc in the first spot irradiated, 2-3 times higher than in the surrounding wood. For stem wood from the crown part of a pine, this higher concentration level was found in the first four spots/mm, comprising the pith and the two following growth rings. Zinc showed increasing concentrations outwards in the sapwood of the pine stem, with the overall lowest concentrations in the inner half of the sapwood. This could indicate migration of this element out of sapwood undergoing transformation to heartwood. Point scans across sapwood of pine and spruce showed more distinct variations in concentrations relative to heartwood. Higher concentrations of e.g. zinc, calcium and manganese were found in earlywood than in the denser latewood. Very high concentrations of iron and copper were also seen for some earlywood increments. The ash content of stem bark is up to an order of magnitude higher than that of the stem wood. However, when the elemental concentrations in the ashes of bark and wood of the same disc were compared, they were very similar, at least for trees growing at sites with no anthropogenic contamination from the atmosphere. The largest difference was obtained for calcium, which appeared at twice the concentration in bark ash compared with wood ash (ratio of 2). Pine bark is often used in monitoring of atmospheric pollution, where concentrations in bark samples are compared. Here an alternative approach is suggested: bark and the underlying stem wood of a pine tree are dry-ashed and analysed, and the elemental concentration in the bark ash is compared to the concentration of the same element in the wood ash. Comparing bark to wood provides a normalisation for the varying availability of an element from the soil at different sites. When this comparison is done on the ashes of the materials, a normalisation is also obtained for the general, and locally different, enrichment of inorganic elements from wood to bark. Already a ratio >2 between the concentration in the bark ash and the concentration in the wood ash could indicate atmospheric pollution. For monitoring with bark, this "inwards" comparison is suggested instead of comparison with analyses of bark from other trees (reference areas) growing at sites with different soil and, locally, different climate conditions. This approach also enables evaluation of atmospheric pollution from sampling of only relatively few individual trees, preferably during forest felling.
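A minimal sketch of the suggested "inwards" bark-to-wood comparison, assuming hypothetical ash concentrations (function name and values are illustrative, not data from the study); the >2 decision ratio is taken from the abstract:

```python
# Sketch of the "inwards" bark-to-wood comparison described above.
# Concentrations (e.g. mg/kg in ash) are hypothetical illustration values.

def pollution_indicator(bark_ash_conc, wood_ash_conc, threshold=2.0):
    """Return the bark-ash/wood-ash concentration ratio and a flag.

    A ratio above `threshold` (>2 per the abstract) may indicate
    atmospheric pollution; calcium reaches about 2 naturally, so it
    sets the baseline for the threshold.
    """
    ratio = bark_ash_conc / wood_ash_conc
    return ratio, ratio > threshold

# Hypothetical example: zinc in bark ash vs. wood ash of the same disc
ratio, polluted = pollution_indicator(bark_ash_conc=310.0, wood_ash_conc=95.0)
print(f"Zn bark/wood ash ratio = {ratio:.1f}, possible pollution: {polluted}")
```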
Abstract:
The sensory quality of 'Douradão' peaches cold-stored under three different controlled atmosphere conditions (CA1, CA2, CA3) and a Control was studied. After 14, 21 and 28 days of cold storage, samples were withdrawn from CA and kept for 4 days in ambient air for ripening. The sensory profile of the peaches and the descriptive terminology were developed using a methodology based on Quantitative Descriptive Analysis (QDA). The panelists consensually defined the sensory descriptors, their respective reference materials and the descriptive evaluation ballot. Fourteen panelists were selected based on their discrimination capacity and reproducibility. Seven descriptors were generated, showing similarities and differences between samples. The data were analyzed by ANOVA, Tukey test and Principal Component Analysis (PCA). Results showed significant differences in the sensory profiles of the peaches. The PCA showed that the CA2 and CA3 treatments were characterized more by fresh peach flavor, fresh peach appearance, juiciness and flesh firmness, and were effective in keeping the good quality of the 'Douradão' peaches during the 28 days of cold storage. The Control and CA1 treatments were characterized by mealiness and were ineffective for quality maintenance of the fruits during cold storage.
Abstract:
In this study, the quality of yellow passion fruits stored under refrigeration and controlled atmospheres of different compositions was evaluated, with the aim of extending the postharvest life of the fruits. Skin color, appearance, mass loss, and the chemical quality of the juice were determined for fruits stored under: 21% O2 plus 0.03% CO2; 1% O2 plus 0.03% CO2; 5% O2 plus 0.03% CO2; 12% O2 plus 5% CO2; and 5% O2 plus 15% CO2, with one control treatment (refrigeration at 13 °C and 90% RH). The analyses were performed before storage, after 30 days of storage, and after removal from the controlled atmospheres followed by 9 days under refrigeration in ambient atmosphere. The data were interpreted by simple statistical analysis using confidence intervals at 95% probability. It was concluded that atmospheres with low oxygen concentration and high carbon dioxide level minimized quality losses. Under the atmosphere with 5% O2 and 15% CO2, the lowest color change indexes and mass loss were observed, as well as the smallest decreases in acidity, soluble solids content, vitamin C, reducing sugars, and total soluble sugars.
Abstract:
The stability of minimally processed radicchio (Cichorium intybus L.) was evaluated under modified atmosphere (2% O2, 5% CO2, and 93% N2) on days 3, 5, 7 and 10 of storage at 5 °C. The samples were hygienized in sodium hypochlorite or hydrogen peroxide solutions to identify the most effective sanitizing solution for removing microorganisms. Microbiological analysis was conducted to identify the presence of coliforms at 35 °C and 45 °C, mesophilic microorganisms, and yeasts and molds. Physicochemical analyses of mass loss, pH, soluble solids, and total acidity were conducted. Color measurements were performed using a portable colorimeter (model CR-400). The antioxidant activity was determined by the 2,2-diphenyl-1-picrylhydrazyl (DPPH) and 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) methods. The sensory evaluation was carried out using a hedonic scale to test overall acceptance of the samples during storage. The sodium hypochlorite solution (150 mg.L-1) provided greater safety to the final product. The pH values ranged from 6.17 to 6.25, total acidity from 0.405 to 0.435%, soluble solids from 0.5 to 0.6 °Brix, mass loss from 1.7 to 7.2%, and chlorophyll from 1.068 to 0.854 mg/100 g. The antioxidant activity of radicchio did not show significant changes during the first 3 days of storage. The overall acceptance of the sample stored in the sealed package without modified atmosphere was 70%, while the fresh sample obtained 77% approval. Although the samples packaged under modified atmosphere had a higher acceptance score, the samples in sealed packages had satisfactory results during the nine days of storage. The use of modified atmosphere, combined with cooling and good manufacturing practices, was sufficient to prolong the life of minimally processed radicchio, Folha Larga cultivar, for up to ten days of storage.
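For reference, results of radical-scavenging assays such as DPPH are commonly expressed as percent inhibition computed from control and sample absorbances. A minimal sketch with hypothetical readings (the abstract does not report its raw absorbance values):

```python
# Percent radical-scavenging activity as conventionally computed for
# the DPPH assay; absorbance values below are hypothetical.

def percent_inhibition(a_control: float, a_sample: float) -> float:
    """%AA = (A_control - A_sample) / A_control * 100."""
    return (a_control - a_sample) / a_control * 100.0

print(f"{percent_inhibition(0.820, 0.455):.1f} % DPPH inhibition")
```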
Abstract:
A simple, low-cost concentric capillary nebulizer (CCN) was developed and evaluated for ICP spectrometry. The CCN could be operated at sample uptake rates of 0.050-1.00 ml min^-1 and under oscillating and non-oscillating conditions. Aerosol characteristics of the CCN were studied using a laser Fraunhofer diffraction analyzer. Solvent transport efficiencies and transport rates, detection limits, and short- and long-term stabilities were evaluated for the CCN with a modified cyclonic spray chamber at different sample uptake rates. The Mg II (280.2 nm)/Mg I (285.2 nm) ratio was used for matrix effect studies. Results were compared with those for conventional nebulizers: a cross-flow nebulizer with a Scott-type spray chamber, a GemCone nebulizer with a cyclonic spray chamber, and a Meinhard TR-30-K3 concentric nebulizer with a cyclonic spray chamber. Transport efficiencies of up to 57% were obtained for the CCN. For the elements tested, the short- and long-term precisions and detection limits obtained with the CCN at 0.050-0.500 ml min^-1 are similar to, or better than, those obtained on the same instrument using the conventional nebulizers (at 1.0 ml min^-1). The depressive and enhancement effects of the easily ionizable element Na, sulfuric acid, and dodecylamine surfactant on analyte signals with the CCN are similar to, or smaller than, those obtained with the conventional nebulizers. However, capillary clogging was observed when sample solutions with high dissolved solids were nebulized for more than 40 min. The effects of data acquisition and data processing on detection limits were studied using inductively coupled plasma-atomic emission spectrometry. The study examined the effects of different detection limit approaches, data integration modes, regression modes, the standard concentration range and the number of standards, sample uptake rate, and integration time. All the experiments followed the same protocols. Three detection limit approaches were examined: the IUPAC method, the residual standard deviation (RSD) method, and the signal-to-background ratio and relative standard deviation of the background (SBR-RSDB) method. The study demonstrated that the choice of approach, the integration mode, the regression method, and the sample uptake rate can all affect detection limits. It also showed that the different approaches give different detection limits and that some methods (for example, RSD) are susceptible to the quality of the calibration curves. Among the three integration modes (peak height, peak area, and multicomponent spectral fitting, MSF), MSF gave the best results. The weighted least squares method yielded better-quality calibration curves. Although an effect of the number of standards on detection limits was not observed, multiple standards are recommended because they provide more reliable calibration curves. Increasing the sample uptake rate and integration time could improve detection limits; however, an improvement with increased integration time was not observed here because the auto integration mode was used.
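The three detection-limit approaches can be summarized by their conventional formulas: an IUPAC-style limit (k times the blank standard deviation over the calibration slope), a limit from the residual standard deviation of the calibration line, and Boumans' SBR-RSDB expression. A minimal sketch under those assumed formulations, with hypothetical calibration data:

```python
import numpy as np

def dl_iupac(blank_sd, slope, k=3.0):
    """IUPAC-style detection limit: k * SD of the blank / calibration slope."""
    return k * blank_sd / slope

def dl_rsd(conc, signal, k=3.0):
    """Detection limit from the residual standard deviation of a
    straight-line calibration (the 'RSD' approach in the abstract)."""
    slope, intercept = np.polyfit(conc, signal, 1)
    resid = signal - (slope * conc + intercept)
    s_res = np.sqrt(np.sum(resid**2) / (len(conc) - 2))
    return k * s_res / slope

def dl_sbr_rsdb(c, sbr, rsdb_percent):
    """SBR-RSDB detection limit (Boumans): 0.03 * RSDB% * c / SBR."""
    return 0.03 * rsdb_percent * c / sbr

# Hypothetical calibration data (concentration units, arbitrary signal)
conc = np.array([0.0, 0.5, 1.0, 2.0, 5.0])
signal = np.array([2.1, 52.0, 101.5, 203.2, 500.8])
print(dl_iupac(blank_sd=0.7, slope=100.0))
print(dl_rsd(conc, signal))
print(dl_sbr_rsdb(c=1.0, sbr=48.0, rsdb_percent=1.2))
```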
Abstract:
Multispectral satellite images, particularly those with high spatial resolution (finer than 30 m on the ground), are an invaluable source of information for decision-making in various fields related to natural resource management, environmental protection, or urban planning and management. Study scales can range from local (resolutions finer than 5 m) to regional (resolutions coarser than 5 m). These images characterize the variation of surface reflectance across the spectrum, which is the key information for a great number of applications of these data. However, satellite sensor measurements are also affected by "parasitic" factors related to illumination and viewing conditions, the atmosphere, topography, and sensor properties. Two questions drove this research. What is the best approach for retrieving ground reflectances from the digital numbers recorded by the sensors, taking these parasitic factors into account? And is this retrieval a sine qua non for extracting reliable information from the images, whatever the problem addressed in the various application domains (territory mapping, environmental monitoring, landscape change tracking, resource inventories, etc.)? Research over the last 30 years has produced a series of techniques for correcting the data for the effects of the parasitic factors, some of which can retrieve ground reflectances. Several questions remain open, however, and others require further work, on the one hand to improve the accuracy of the results and, on the other, to make these techniques more versatile by adapting them to a wider range of data acquisition conditions. A few can be mentioned: How can atmospheric characteristics (notably aerosol particles) adapted to local and regional conditions be taken into account, rather than relying on default models that capture long-term spatiotemporal trends but fit poorly to instantaneous, spatially restricted observations? How can the "contamination" of the signal from the target object by signals from surrounding objects (the adjacency effect) be accounted for? This phenomenon becomes very important for images with resolution finer than 5 m. What are the effects of off-nadir viewing angles, which are increasingly common since they offer better temporal resolution and the possibility of obtaining stereoscopic image pairs? How can the efficiency of automatic processing and analysis techniques for multispectral images be increased over rugged and mountainous terrain, given the multiple effects of topographic relief on the remotely sensed signal? Moreover, although researchers have repeatedly demonstrated that the information extracted from satellite images can be degraded by all these parasitic factors, radiometric corrections are still rarely applied on a routine basis, unlike geometric corrections, for which commercial remote sensing software offers versatile, powerful algorithms within the reach of ordinary users.
Radiometric correction algorithms, where they are offered at all, remain inflexible black boxes that usually require expert users. The objectives of this research were: 1) to develop ground reflectance retrieval software that addresses the questions raised above, modular enough to be extended, improved, and adapted to various satellite image applications; and 2) to apply this software in different contexts (urban, agricultural, forest) and analyze the results, in order to assess the gain in accuracy of the information extracted from satellite images converted to ground reflectance images, and hence the necessity of doing so regardless of the application. Through this research we therefore built a ground reflectance retrieval tool (the new version of the REFLECT software). It is based on the formulation (and routines) of the 6S code (Second Simulation of the Satellite Signal in the Solar Spectrum) and on the dark-target method for estimating the aerosol optical depth (AOD), the factor that is the most difficult to correct. Substantial improvements were made to the existing models. They essentially concern the aerosol properties (integration of a more recent model, improved dark-target search for AOD estimation), the treatment of the adjacency effect with a specular reflection model, support for most of the high-resolution multispectral sensors currently in use (Landsat TM and ETM+, all SPOT 1 to 5 HR sensors, EO-1 ALI, and ASTER) and very-high-resolution sensors (QuickBird and Ikonos), and the correction of topographic effects with a model that separates the direct and diffuse components of solar radiation and also adapts to forest canopy. Validation showed that REFLECT retrieves ground reflectance to within about ±0.01 reflectance units (for the visible, NIR, and MIR spectral bands), even over variable topography. Simulations of apparent reflectance with this software showed the extent to which the parasitic factors affecting the digital numbers of the images can alter the useful signal, i.e., the ground reflectance (errors of 10% to more than 50%). REFLECT was also used to examine the importance of using ground reflectances rather than raw digital numbers in various common remote sensing applications: classification, change detection, agriculture, and forestry. In most applications (change detection with multi-date images, use of vegetation indices, estimation of biophysical parameters, ...), correcting the images is a crucial step for obtaining reliable results.
From a software standpoint, REFLECT presents itself as a series of easy-to-use menus corresponding to the successive processing steps: entering the scene inputs, computing the gaseous transmittances, estimating the AOD by the dark-target method and, finally, applying the radiometric corrections to the image, notably via a fast option that processes a 5000 by 5000 pixel image in about 15 minutes. This research opens a series of avenues for further improvement of the models and methods of radiometric correction, in particular the integration of the BRDF (bidirectional reflectance distribution function) into the formulation, the handling of translucent clouds by modeling non-selective scattering, and the automation of the equivalent-slopes method proposed for topographic corrections.
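For orientation, a 6S-type inversion of the kind REFLECT builds on removes the atmospheric path reflectance and normalizes by the gaseous and scattering transmittances and the spherical-albedo coupling term. A minimal sketch of that standard inversion, with hypothetical coefficient values (not REFLECT outputs):

```python
def ground_reflectance(rho_toa, t_gas, rho_path, t_down, t_up, s_alb):
    """Invert a 6S-style atmospheric correction.

    rho_toa  : apparent (TOA) reflectance from calibrated digital numbers
    t_gas    : total gaseous transmittance
    rho_path : atmospheric (path) reflectance
    t_down   : downward scattering transmittance
    t_up     : upward scattering transmittance
    s_alb    : spherical albedo of the atmosphere
    """
    y = (rho_toa / t_gas - rho_path) / (t_down * t_up)
    return y / (1.0 + s_alb * y)

# Hypothetical coefficients for one visible band
print(ground_reflectance(rho_toa=0.12, t_gas=0.95, rho_path=0.04,
                         t_down=0.88, t_up=0.90, s_alb=0.10))
```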
Abstract:
The Earth's atmosphere is very rich in nitrogen (N2). But this diatomic nitrogen is in a very stable form, unusable by the majority of living organisms even though it is indispensable for the synthesis of organic material. Only diazotrophic prokaryotes can live on N2 as their nitrogen source. Nitrogen fixation is the process that produces amino compounds from the nitrogen gas present in the atmosphere (78%). This process is very complex, however, requiring the biosynthesis of some twenty proteins and the consumption of a great deal of energy (16 molecules of ATP per mole of N2 fixed). This is why the phenomenon is rigorously regulated. The purple non-sulfur photosynthetic bacteria are known for their capacity for nitrogen fixation. Studies in the light, in the preferred growth mode of these bacteria (anaerobic photosynthesis), have shown that nitrogenase (the enzyme responsible for dinitrogen fixation) is subject to regulation at three levels: transcriptional regulation of NifA (the protein activating transcription of the nif genes), post-translational regulation of NifA activity towards the activation of transcription of the other nif genes, and post-translational regulation of nitrogenase activity when the cells are subjected to an ammonium shock. The regulatory system described so far essentially involves a membrane protein, AmtB, and the two PII proteins, GlnB and GlnK. It has long been known that nitrogenase is also regulated when a photosynthetic culture is exposed to darkness, but the nature of the systems involved in this regulation is still unknown. Among the questions that arise: which proteins take part in the inactivation of nitrogenase when an anaerobic culture is placed in darkness? Analysis of several mutant strains, amtB-, glnK-, glnB- and amtY-, grown under different nitrogen-limiting conditions, is one way to answer these questions. By monitoring nitrogenase activity and by Western blot, we showed that a darkness shock causes a "switch-off" of nitrogenase activity due to ADP-ribosylation of the Fe protein. We also succeeded in showing that the entire system already implicated in the response to an ammonium shock is equally necessary for the response to a lack of light or energy (the proteins AmtB, GlnK, GlnB, DraG, DraT and AmtY). Rhodobacter capsulatus can fix nitrogen and grow both micro-aerobically in darkness and under anaerobic photosynthetic conditions, but its regulation in darkness has so far been little studied. The study of nitrogen fixation in darkness allowed us to show that the Rnf membrane complex is not necessary for the growth of R. capsulatus under such conditions. In order to develop a way to study the regulation of growth in this mode, we first sought to identify the operating conditions (O2, [NH4+]) allowing R. capsulatus to fix nitrogen micro-aerobically. Optimization of this growth showed that the optimal oxygen concentration is 10%, mixed with nitrogen.
Abstract:
Atmospheric surface boundary layer parameters varied anomalously in response to the annular solar eclipse of 15 January 2010 over Cochin. It was the longest annular solar eclipse to have occurred over South India, and of high intensity. Because it occurred during the noon hours, it is considered especially significant for its effects on all regions of the atmosphere, including the ionosphere. Since insolation is the main driving factor responsible for the anomalous changes in the surface layer, the coincidence of the eclipse with noon time makes it particularly valuable for understanding the dynamics of the atmosphere during the eclipse period. The sonic anemometer provides the zonal, meridional and vertical wind components as well as the air temperature at a temporal resolution of 1 s. Different surface boundary layer parameters and turbulent fluxes were computed by applying the eddy correlation technique to these high-resolution station data. The surface boundary layer parameters computed from the sonic anemometer data are momentum flux, sensible heat flux, turbulent kinetic energy, friction velocity (u*), variance of temperature, and the variances of the u, v and w wind components. For comparison, a control run was made using data from the previous day as well as the next day. Over the period of the annular solar eclipse, all of the above surface boundary layer parameters varied anomalously when compared with the control run. During the eclipse the momentum flux was 0.1 N m-2 instead of the mean value of 0.2 N m-2. The sensible heat flux anomalously decreased to 50 W m-2 from the mean value of 200 W m-2 at the time of the eclipse. The turbulent kinetic energy decreased to 0.2 m2 s-2 from the mean value of 1 m2 s-2. The friction velocity decreased to 0.05 m s-1 from the mean value of 0.2 m s-1. The present study aims at understanding the dynamics of the surface layer of a tropical coastal station in response to an annular solar eclipse occurring during the noon hours. Key words: annular solar eclipse, surface boundary layer, sonic anemometer
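A minimal sketch of the eddy-correlation estimates named above, assuming the standard definitions and a synthetic 1 Hz series in place of the sonic anemometer record (the air density and specific heat constants are assumed values):

```python
import numpy as np

RHO = 1.2    # air density, kg m^-3 (assumed)
CP = 1005.0  # specific heat of air, J kg^-1 K^-1 (assumed)

def surface_layer_params(u, v, w, T):
    """Eddy-correlation estimates from high-rate u, v, w, T series."""
    up, vp, wp, Tp = (x - x.mean() for x in (u, v, w, T))
    uw, vw = (up * wp).mean(), (vp * wp).mean()
    return {
        "momentum_flux": -RHO * uw,                         # N m^-2
        "sensible_heat_flux": RHO * CP * (wp * Tp).mean(),  # W m^-2
        "tke": 0.5 * (up.var() + vp.var() + wp.var()),      # m^2 s^-2
        "u_star": (uw**2 + vw**2) ** 0.25,                  # m s^-1
    }

# Synthetic 1 Hz record standing in for the sonic anemometer data
rng = np.random.default_rng(0)
n = 1800
u = 3.0 + 0.5 * rng.standard_normal(n)
v = 1.0 + 0.4 * rng.standard_normal(n)
w = 0.2 * rng.standard_normal(n)
T = 300.0 + 0.3 * rng.standard_normal(n)
print(surface_layer_params(u, v, w, T))
```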
Abstract:
To study the complex formation of group 5 elements (Nb, Ta, Ha, and the pseudoanalog Pa) in aqueous HCl solutions of medium and high concentration, the electronic structures of the anionic complexes of these elements, [MCl_6]^-, [MOCl_4]^-, [M(OH)_2Cl_4]^-, and [MOCl_5]^2-, have been calculated using the relativistic Dirac-Slater Discrete-Variational Method. The charge density distribution analysis has shown that tantalum occupies a specific position in the group, with the highest tendency to form the pure halide complex [TaCl_6]^-. This fact, along with the high covalency of this complex, explains its good extractability into aliphatic amines. Niobium shows equal tendencies to form the pure halide [NbCl_6]^- and the oxyhalide [NbOCl_5]^2- species at medium and high acid concentrations. Protactinium has a slight preference for the [PaOCl_5]^2- form or for pure halide complexes with coordination number higher than 6 under these conditions. Element 105 (Ha) at high HCl concentrations will prefer to form the oxyhalide anionic complex [HaOCl_5]^2- rather than [HaCl_6]^-. For this same series of anionic oxychloride complexes, their partitioning between the organic and aqueous phases in extraction by aliphatic amines has been estimated, giving the following succession of partition coefficients: P_Nb < P_Ha < P_Pa.
Abstract:
The 21st century has brought new challenges for forest management at a time when globalization in world trade is increasing and global climate change is becoming increasingly apparent. In addition to providing various goods and services like food, feed, timber or biofuels, forest ecosystems are a large store of terrestrial carbon and account for a major part of the carbon exchange between the atmosphere and the land surface. Depending on the stage of the ecosystems and/or the management regime, forests can be either sinks or sources of carbon. At the global scale, rapid economic development and a growing world population have raised much concern over the use of natural resources, especially forest resources. The challenging question is how the global demands for forest commodities can be satisfied in an increasingly globalised economy, and where they could potentially be produced. For this purpose, wood demand estimates need to be integrated in a framework that can adequately handle the competition for land between major land-use options such as residential or agricultural land. This thesis is organised in accordance with the requirements for integrating the simulation of forest changes based on wood extraction into an existing framework for global land-use modelling called LandSHIFT. Accordingly, the following key points for research have been identified: (1) a review of existing global-scale economic forest sector models, (2) simulation of global wood production under selected scenarios, (3) simulation of global vegetation carbon yields, and (4) the implementation of a land-use allocation procedure to simulate the impact of wood extraction on forest land cover. Modelling the spatial dynamics of forests on the global scale requires two important inputs: (1) simulated long-term wood demand data to determine future roundwood harvests in each country and (2) the changes in the spatial distribution of woody biomass stocks to determine how much of the resource is available to satisfy the simulated wood demands. First, three global timber market models are reviewed and compared in order to select a suitable economic model to generate wood demand scenario data for the forest sector in LandSHIFT. The comparison indicates that the 'Global Forest Products Model' (GFPM) is most suitable for obtaining projections of future roundwood harvests for further study with the LandSHIFT forest sector. Accordingly, the GFPM is adapted and applied to simulate wood demands for the global forestry sector under selected scenarios from the Millennium Ecosystem Assessment and the Global Environmental Outlook until 2050. Secondly, the Lund-Potsdam-Jena (LPJ) dynamic global vegetation model is utilized to simulate the change in potential vegetation carbon stocks for the forested locations in LandSHIFT. The LPJ data are used in combination with spatially explicit forest inventory data on aboveground biomass to allocate the demands for raw forest products and to identify locations of deforestation. Using these results as input, a methodology to simulate the spatial dynamics of forests based on wood extraction is developed within the LandSHIFT framework. The land-use allocation procedure specified in the module translates the country-level demands for forest products into woody biomass requirements for forest areas and allocates these on a five arc-minute grid.
In a first version, the model assumes only current conditions throughout the study period and does not explicitly address forest age structure. Although the module is at a very preliminary stage of development, it already captures the effects of important drivers of land-use change such as cropland and urban expansion. As a first plausibility test, the module's performance is examined under three forest management scenarios; it responds to changing inputs in an expected and consistent manner. The entire methodology is applied in an exemplary scenario analysis for India. Several future research priorities remain, particularly the incorporation of plantation establishment, the treatment of age structure dynamics, and the implementation of a new technology change factor in the GFPM to allow the substitution of raw wood products (especially fuelwood) by other non-wood products.
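As a rough illustration of what a land-use allocation step of this kind does, the sketch below greedily satisfies a country-level roundwood demand from grid cells ranked by a suitability index; all names and numbers are hypothetical and this is not the LandSHIFT algorithm itself:

```python
import numpy as np

def allocate_wood_demand(demand_m3, cell_biomass_m3, suitability):
    """Greedy sketch of a grid-cell allocation: satisfy a country's
    roundwood demand from cells in order of suitability, limited by
    each cell's standing biomass. All inputs are hypothetical."""
    harvest = np.zeros_like(cell_biomass_m3)
    for i in np.argsort(-suitability):  # most suitable cells first
        if demand_m3 <= 0:
            break
        take = min(cell_biomass_m3[i], demand_m3)
        harvest[i] = take
        demand_m3 -= take
    return harvest, demand_m3  # per-cell harvest, unmet remainder

biomass = np.array([120.0, 80.0, 200.0, 50.0])  # m^3 per cell (hypothetical)
suit = np.array([0.9, 0.4, 0.7, 0.2])           # e.g. accessibility index
print(allocate_wood_demand(300.0, biomass, suit))
```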
Abstract:
The convective-diffusive transport of sub-micron aerosols in oscillatory laminar flow within a 2-D single bifurcation is studied using order-of-magnitude analysis and numerical simulation with a commercial software package (FEMLAB®). Based on the similarity between the momentum and mass transfer equations, the transient mass transport regimes are classified and scaled according to the Strouhal and beta numbers. Results show that the mass transfer rate is highest at the carinal ridge and that there is a phase shift in the diffusive transport time if the beta number is greater than one. It is also shown that diffusive mass transfer becomes independent of the oscillating outer flow if the Strouhal number is greater than one.
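A minimal sketch of the two dimensionless groups used for the classification, assuming the usual Strouhal definition Sr = ωL/U and taking the beta number as a Womersley-like diffusion parameter β = ωL²/D (an assumption; the paper's exact definition may differ). All values are hypothetical:

```python
import math

def strouhal(omega, L, U):
    """Strouhal number: oscillation rate vs. convection rate."""
    return omega * L / U

def beta_number(omega, L, D):
    """Assumed definition beta = omega * L^2 / D:
    oscillation rate vs. diffusive rate over length L."""
    return omega * L**2 / D

# Hypothetical values for a sub-micron aerosol in a small airway
omega = 2 * math.pi * 0.25  # breathing at 0.25 Hz, rad s^-1
L, U = 2e-3, 0.5            # airway half-width (m), mean velocity (m s^-1)
D = 2.7e-11                 # Brownian diffusivity of ~0.1 um particle, m^2 s^-1

print(f"Sr = {strouhal(omega, L, U):.3g}  (quasi-steady flow if << 1)")
print(f"beta = {beta_number(omega, L, D):.3g}  (phase-shifted diffusion if > 1)")
```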
Abstract:
The long-term stability, high accuracy, all-weather capability, high vertical resolution, and global coverage of Global Navigation Satellite System (GNSS) radio occultation (RO) suggest it as a promising tool for global monitoring of atmospheric temperature change. With the aim of investigating and quantifying how well a GNSS RO observing system can detect climate trends, we are currently performing a (climate) observing system simulation experiment over the 25-year period 2001 to 2025, which involves quasi-realistic modeling of the neutral atmosphere and the ionosphere. We carried out two climate simulations with the general circulation model MAECHAM5 (Middle Atmosphere European Centre/Hamburg Model Version 5) of the MPI-M Hamburg, covering the period 2001-2025: one control run with natural variability only and one run also including anthropogenic forcings due to greenhouse gases, sulfate aerosols, and tropospheric ozone. On this basis, we perform quasi-realistic simulations of RO observables for a small GNSS receiver constellation (six satellites), state-of-the-art data processing for atmospheric profile retrieval, and a statistical analysis of temperature trends in both the "observed" climatology and the "true" climatology. Here we describe the setup of the experiment and results from a test bed study conducted to obtain a basic set of realistic estimates of observational errors (instrument- and retrieval-processing-related errors) and sampling errors (due to spatial-temporal undersampling). The test bed results, obtained for a typical summer season and compared to the climatic 2001-2025 trends from the MAECHAM5 simulation including anthropogenic forcing, were found encouraging for performing the full 25-year experiment. They indicated that observational and sampling errors (both contributing about 0.2 K) are consistent with recent estimates of these errors from real RO data and should be sufficiently small for monitoring expected temperature trends in the global atmosphere over the next 10 to 20 years in most regions of the upper troposphere and lower stratosphere (UTLS). Inspection of the MAECHAM5 trends in different RO-accessible atmospheric parameters (microwave refractivity and pressure/geopotential height in addition to temperature) indicates complementary climate change sensitivity in different regions of the UTLS, so that optimized climate monitoring should combine information from all climatic key variables retrievable from GNSS RO data.
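The sampling-error component can be illustrated with the usual differencing idea: compare a climatology built from the sparse occultation sampling against the "true" mean of the full model field. A minimal sketch on synthetic data (not MAECHAM5 output):

```python
import numpy as np

def sampling_error(true_field, sample_mask):
    """Difference between the mean over sampled points and the full mean.

    true_field  : model 'truth' values over a season (1-D here for brevity)
    sample_mask : boolean mask of times/places hit by occultation events
    """
    return true_field[sample_mask].mean() - true_field.mean()

# Synthetic UTLS temperature series with variability (K)
rng = np.random.default_rng(1)
truth = 220.0 + 2.0 * rng.standard_normal(5000)
mask = np.zeros(5000, dtype=bool)
mask[rng.choice(5000, size=300, replace=False)] = True  # sparse RO sampling
print(f"sampling error: {sampling_error(truth, mask):+.3f} K")
```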
Abstract:
Structure is an important physical feature of the soil that is associated with water movement, the soil atmosphere, microorganism activity and nutrient uptake. A soil without any obvious organisation of its components is known as apedal, and this state can have marked effects on several soil processes. Accurate maps of topsoil and subsoil structure are desirable for a wide range of models that aim to predict erosion, solute transport, or the flow of water through the soil. Such maps would also be useful to precision farmers when deciding how to apply nutrients and pesticides in a site-specific way, and for targeting subsoiling and soil structure stabilization procedures. Typically, soil structure is inferred from bulk density or penetrometer resistance measurements and, more recently, from soil resistivity and conductivity surveys. The former is both time-consuming and costly to measure, whereas observations by the latter methods can be made automatically and swiftly using a vehicle-mounted penetrometer or resistivity and conductivity sensors. The results of each of these methods, however, are affected by other soil properties, in particular moisture content at the time of sampling, texture, and the presence of stones. Traditional methods of observing soil structure identify the type of ped and its degree of development. Methods of ranking such observations from good to poor for different soil textures have been developed. Indicator variograms can be computed for each category or rank of structure, and these can be summed to give the sum of indicator variograms (SIV). Observations of topsoil and subsoil structure were made at four field sites where the soil had developed on different parent materials. The observations were ranked by four methods, and the indicator and sum of indicator variograms were computed and modelled for each ranking method. The individual indicators were then kriged with the parameters of the appropriate indicator variogram model to map the probability of encountering soil with the structure represented by that indicator. The model parameters of the SIVs for each ranking system were used with the data to krige the soil structure classes, and the results are compared with those for the individual indicators. The relations between maps of soil structure and selected wavebands from aerial photographs are examined as a basis for planning surveys of soil structure.
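A minimal sketch of an experimental indicator variogram and the sum of indicator variograms (SIV), assuming a unit-spaced transect and the classical semivariance estimator; the structure ranks are synthetic, not the survey data:

```python
import numpy as np

def indicator_variogram(values, category, max_lag):
    """Experimental semivariogram of the indicator I(x) = 1 if the
    observation falls in `category`, else 0 (unit-spaced transect)."""
    ind = (values == category).astype(float)
    gammas = []
    for h in range(1, max_lag + 1):
        d = ind[h:] - ind[:-h]
        gammas.append(0.5 * np.mean(d**2))
    return np.array(gammas)

def sum_of_indicator_variograms(values, categories, max_lag):
    """SIV: sum the indicator variograms over all structure classes."""
    return sum(indicator_variogram(values, c, max_lag) for c in categories)

# Synthetic transect of structure ranks 1 (good) to 4 (poor/apedal)
rng = np.random.default_rng(2)
ranks = rng.integers(1, 5, size=200)
print(sum_of_indicator_variograms(ranks, categories=[1, 2, 3, 4], max_lag=10))
```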
Abstract:
The performance of the atmospheric component of the new Hadley Centre Global Environmental Model (HadGEM1) is assessed in terms of its ability to represent a selection of key aspects of variability in the Tropics and extratropics. These include midlatitude storm tracks and blocking activity, synoptic variability over Europe, and the North Atlantic Oscillation, together with tropical convection, the Madden-Julian oscillation, and the Asian summer monsoon. Comparisons with the previous model, the Third Hadley Centre Coupled Ocean-Atmosphere GCM (HadCM3), demonstrate that there has been a considerable increase in the transient eddy kinetic energy (EKE), bringing HadGEM1 into closer agreement with current reanalyses. This increase in EKE results from the increased horizontal resolution and, in combination with the improved physical parameterizations, leads to improvements in the representation of Northern Hemisphere storm tracks and blocking. The simulation of synoptic weather regimes over Europe is also greatly improved compared to HadCM3, again due to both increased resolution and other model developments. The variability of convection in the equatorial region is generally stronger and closer to observations than in HadCM3. There is, however, still limited convective variance coincident with several of the observed equatorial wave modes. Simulation of the Madden-Julian oscillation is improved in HadGEM1: both the activity and interannual variability are increased and the eastward propagation, although slower than observed, is much better simulated. While some aspects of the climatology of the Asian summer monsoon are improved in HadGEM1, the upper-level winds are too weak and the simulation of precipitation deteriorates. The dominant modes of monsoon interannual variability are similar in the two models, although in HadCM3 the variability is linked to SST forcing, whereas in HadGEM1 internal variability dominates. Overall, analysis of the phenomena considered here indicates that HadGEM1 performs well and, in many important respects, improves upon HadCM3. Together with the improved representation of the mean climate, this improvement in the simulation of atmospheric variability suggests that HadGEM1 provides a sound basis for future studies of climate and climate change.
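For reference, the transient eddy kinetic energy diagnostic used here is conventionally EKE = ½(u'² + v'²), with primes denoting deviations from a time mean (or a high-pass-filtered field). A minimal sketch on synthetic winds (not HadGEM1 output):

```python
import numpy as np

def transient_eke(u, v):
    """Transient eddy kinetic energy 0.5*(u'^2 + v'^2), where primes are
    deviations from the time mean along axis 0 (time)."""
    up = u - u.mean(axis=0)
    vp = v - v.mean(axis=0)
    return 0.5 * (up**2 + vp**2).mean(axis=0)

# Synthetic daily winds at a few grid points (m s^-1)
rng = np.random.default_rng(3)
u = 10.0 + 5.0 * rng.standard_normal((360, 4))
v = 2.0 + 4.0 * rng.standard_normal((360, 4))
print(transient_eke(u, v))  # m^2 s^-2 at each grid point
```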