923 results for Reliability in refrigeration systems


Abstract:

Soil C-CO2 emissions are sensitive indicators of management system impacts on soil organic matter (SOM). The main soil C-CO2 sources at the soil-plant interface are the decomposition of crop residues, SOM turnover, and respiration of roots and soil biota. The objectives of this study were to evaluate the impacts of tillage and cropping systems on long-term soil C-CO2 emissions and their relationship with carbon (C) mineralization of crop residues. A long-term experiment was conducted on a Red Oxisol in Cruz Alta, RS, Brazil, under a subtropical Cfa climate (Köppen classification) with mean annual precipitation of 1,774 mm and mean annual temperature of 19.2 °C. Treatments consisted of two tillage systems, (a) conventional tillage (CT) and (b) no tillage (NT), in combination with three cropping systems: (a) R0, monoculture (soybean/wheat); (b) R1, winter crop rotation (soybean/wheat/soybean/black oat); and (c) R2, intensive crop rotation (soybean/black oat/soybean/black oat + common vetch/maize/oilseed radish/wheat). The soil C-CO2 efflux was measured every 14 days for two years (48 measurements) by trapping the CO2 in an alkaline solution. The soil gravimetric moisture in the 0-0.05 m layer was determined concomitantly with the C-CO2 efflux measurements. Crop residue C mineralization was evaluated with the mesh-bag method, with sampling 14, 28, 56, 84, 112, and 140 days after the beginning of the evaluation period for C measurements. Four C conservation indexes were used to assess the relation between C-CO2 efflux and the soil C stock and its compartments. Crop residue C mineralization fitted an exponential model in time. For black oat, wheat and maize residues, C mineralization was higher in CT than in NT, while for soybean it was similar in both. Soil moisture was higher in NT than in CT, mainly in the second year of evaluation. There was no difference between tillage systems in annual average C-CO2 emissions, but in some individual evaluations differences in C-CO2 evolution were noticed. Soil C-CO2 effluxes followed a bimodal pattern, with peaks in October/November and February/March; the highest emission was recorded in summer and the lowest in winter. The C-CO2 effluxes were weakly correlated with air temperature and not correlated with soil moisture. Based on the soil C conservation indexes investigated, NT associated with intensive crop rotation was more C-conserving than CT with monoculture.
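
For reference, the exponential model typically fitted to mesh-bag decomposition data of this kind expresses the fraction of residue C remaining at time t as follows (the abstract does not give its exact parameterization, so this form is an assumption):

    C(t) = C_0 \, e^{-kt}

where C_0 is the initial residue C and k the mineralization rate constant; the higher mineralization under CT then corresponds to a larger fitted k than under NT.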

Abstract:

The introduction and intensification of no-tillage systems in Brazilian agriculture in recent decades have created a new scenario, increasing concerns about soil physical properties. The objective of this study was to assess the effects of different tillage systems on some physical properties of an Ultisol previously under native grassland. Five tillage methods were tested: no-tillage (NT), chiseling (Ch), no-tillage with chiseling every two years (NTCh2), chiseling with an implement fitted with a clod-breaking roller (ChR), and chiseling followed by disking (ChD). Bulk density, macroporosity, microporosity, total porosity, mechanical resistance to penetration, water infiltration into the soil, and crop yields were evaluated. Soil bulk density, mechanical resistance to penetration and microporosity increased as macroporosity decreased. Soil bulk density was lower in the systems with greater soil mobilization; the highest values were observed in NT and the lowest in the ChD system. The water infiltration rate was highest in the ChR system, followed by ChD, NT and NTCh2, while crop yields were higher in the systems with less soil mobilization.

Abstract:

In agricultural systems, the N-NH4+ and N-NO3- contents are significantly affected by soil management. This study investigated the dynamics of inorganic nitrogen (N; NH4+ and NO3-) in an experimental evaluation of soil management systems (SMSs) adopted in 1988 at the experimental station of the ABC Foundation in Ponta Grossa, in the Central South region of the State of Paraná. The objective was to evaluate the changes in N-NH4+ and N-NO3- flux in the surface layer of a Red Latosol arising from the SMSs over a 12-month period. The experiment was arranged in a randomized complete block design in a split-plot arrangement, with three replications. The plots consisted of the following SMSs: 1) conventional tillage (CT); 2) minimum tillage (MT); 3) no-tillage with chisel plowing every three years (NT CH); and 4) continuous no-tillage (CNT). To evaluate the dynamics of inorganic N, the subplots corresponded to sampling times (11 samplings, T1-T11). The ammonium N (N-NH4+) and nitric N (N-NO3-) contents were higher in the systems with reduced tillage (MT and NT CH) and without tillage (CNT) than in the CT system. From October 2003 to February 2004, the soil N-NH4+ content was higher than the N-NO3- content; conversely, from May 2004 to July 2004, the N-NO3- content was higher than that of N-NH4+. The greatest fluctuation in the N-NH4+ and N-NO3- contents occurred in the 0-2.5 cm layer, and the highest peaks in N-NH4+ and N-NO3- concentrations occurred after the surface application of N. Both N-NH4+ and N-NO3- were strongly correlated with the soil organic C content, indicating that these properties vary together in the system.

Abstract:

Nitrogen (N) mineralization of soil organic matter is a key consideration for the efficient management of N fertilizers in agricultural systems. Long-term aerobic incubation is the standard technique for calibrating the chemical extraction methods used to estimate potentially mineralizable N in soil; however, the technique is laborious, expensive and time-consuming. In this context, the aims of this study were to determine the amount of mineralizable N in the 0-60 cm soil layer and to evaluate the use of short-term anaerobic incubation instead of long-term aerobic incubation for estimating net N mineralization rates in soils under sugarcane. Five soils from areas without previous N fertilization were sampled in the 0-20, 20-40 and 40-60 cm layers. Soil samples were incubated aerobically at 35 °C for 32 weeks or anaerobically (waterlogged) at 40 °C for seven days to determine net soil N mineralization. The sand, silt and clay contents were highly correlated with the indexes used for predicting mineralizable N. The 0-40 cm layer was the best sampling depth for estimating soil mineralizable N, while in the 40-60 cm layer net N mineralization was low in both incubation procedures. Anaerobic incubation provided reliable estimates of mineralizable N that correlated well with the indexes obtained by aerobic incubation, and including the pre-existing NH4+-N content further improved the reliability of the estimate obtained by anaerobic incubation.
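
For context, long-term aerobic incubation data of this kind are classically described by the first-order model of potentially mineralizable N (the abstract does not state which model was fitted, so this form is an assumption):

    N_t = N_0 \left( 1 - e^{-kt} \right)

where N_t is the cumulative N mineralized after incubation time t, N_0 the potentially mineralizable N pool, and k the first-order rate constant.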

Abstract:

In recent years, numerous studies have demonstrated the toxic effects of organic micropollutants on the species of our lakes and rivers. However, most of these studies focused on the toxicity of individual substances, whereas organisms are exposed every day to thousands of substances in mixture, and the effects of these cocktails are not negligible. This doctoral thesis therefore examined models for predicting the environmental risk of such cocktails for the aquatic environment. The main objective was to assess the ecological risk of the mixtures of chemical substances measured in Lake Geneva, and also to take a critical look at the methodologies used, in order to propose adaptations for a better risk estimate. In the first part of this work, the risk of mixtures of pesticides and pharmaceuticals for the Rhône and for Lake Geneva was established using approaches envisioned, in particular, in European legislation. These are screening approaches, i.e. they provide a general assessment of mixture risk. Such an approach highlights the most problematic substances, i.e. those contributing most to the toxicity of the mixture; in our case, these were essentially four pesticides. The study also shows that all substances, even in trace amounts, contribute to the effect of the mixture. This finding has implications for environmental management, since it means that all sources of pollutants must be reduced, not only the most problematic ones. However, the proposed approach also has an important conceptual bias, which makes its use questionable beyond screening and would require an adaptation of the assessment factors employed. In the second part, the study focused on the use of mixture models in environmental risk calculation. Mixture models were developed and validated species by species, not for assessment of the ecosystem as a whole. Their use should therefore involve species-by-species calculations, which is rarely done owing to the lack of available ecotoxicological data. The goal was therefore to compare, using randomly generated values, the risk calculated by the rigorous species-by-species method with the risk calculated in the classical way, where the models are applied to the whole community without accounting for inter-species variation. The results are similar in the majority of cases, which validates the traditionally used approach, but this work also identified certain cases in which the classical application can lead to under- or overestimation of the risk. Finally, the last part of this thesis examined the influence that cocktails of micropollutants may have had on communities in situ. A two-step approach was adopted. First, the toxicity of fourteen herbicides detected in Lake Geneva was determined; over the period studied, from 2004 to 2009, this herbicide toxicity decreased from 4% of species affected to less than 1%. The question was then whether this decrease in toxicity had an impact on the development of certain species within the algal community. To this end, statistical analysis made it possible to isolate other factors that may influence the flora, such as water temperature or the presence of phosphates, and thus to identify which species turned out to have been influenced, positively or negatively, by the decrease in toxicity in the lake over time. Interestingly, some of these species had already shown similar behaviour in mesocosm studies. In conclusion, this work shows that robust models exist for predicting the risk of micropollutant mixtures to aquatic species, and that they can be used to explain the role of these substances in ecosystem functioning. These models nevertheless have limits and underlying assumptions that it is important to consider when applying them. - For several years now, scientists as well as society at large have been concerned about the risk that organic micropollutants may pose to aquatic environments. Indeed, several studies have shown the toxic effects these substances may induce in organisms living in our lakes and rivers, especially when exposed to acute or chronic concentrations. However, most of these studies focused on the toxicity of single compounds, i.e. compounds considered individually, and the same goes for the environmental risk assessment procedures in current European regulations. Yet aquatic organisms are typically exposed every day to thousands of organic compounds simultaneously, and the toxic effects of these "cocktails" cannot be neglected. The ecological risk assessment of mixtures of such compounds therefore has to be addressed in the most reliable and appropriate way. In the first part of this thesis, the procedures currently envisioned for aquatic mixture risk assessment in European legislation are described. These methodologies are based on the concentration addition mixture model and use either predicted no-effect concentrations (PNEC) or effect concentrations (EC50) combined with assessment factors. These approaches were applied to two case studies, Lake Geneva and the River Rhône in Switzerland, and the outcomes of these applications are discussed. These first-tier assessments showed that the mixture risk in the studied cases rapidly exceeded the critical threshold, generally because of two or three main substances. The proposed procedures therefore allow the identification of the most problematic substances, for which management measures, such as reducing their input to the aquatic environment, should be envisioned. However, the risk levels associated with these mixtures are not negligible even without considering the main substances: it is the sum of all substances that is problematic, which is more challenging in terms of risk management. Moreover, a lack of reliability was highlighted in the procedures, which can lead to contradictory risk results, owing to the inconsistency of the assessment factors applied in the different methods. In the second part of the thesis, the reliability of more advanced procedures for predicting mixture effects on communities in aquatic systems was investigated.
These established methodologies combine the concentration addition (CA) or response addition (RA) models with species sensitivity distribution (SSD) curves. Mixture effect predictions have been shown to be consistent only when the mixture models are applied to a single species, not to several species aggregated simultaneously in an SSD. Hence, a more rigorous procedure for mixture risk assessment would be to apply the CA or RA model to each species separately first and, in a second step, to combine the results to build an SSD for the mixture. Unfortunately, this methodology is not applicable in most cases, because it requires large data sets that are usually not available. Therefore, the differences between the two methodologies were studied with artificially generated data sets, in order to characterize the robustness of the traditional approach of applying the models directly to species sensitivity distributions. The results showed that using CA directly on SSDs may underestimate the mixture concentration affecting 5% or 50% of species, especially when the substances have a large standard deviation in their species sensitivity distribution. Applying RA can lead to over- or underestimates, depending mainly on the slope of the dose-response curves of the individual species. The potential underestimation with RA becomes important when the ratio between the EC50 and the EC10 of the dose-response curves of the species composing the SSD is smaller than 100. However, for common real cases of substance ecotoxicity data, the mixture risk calculated by applying the mixture models directly to SSDs remains consistent and would, if anything, slightly overestimate the risk. These results can be taken as a theoretical validation of the currently applied methodology. Nevertheless, when assessing the risk of mixtures with this classical methodology, one has to keep this source of error in mind, especially when the SSDs have data distributions outside the ranges determined in this study. Finally, in the last part of this thesis, we confronted mixture effect predictions with biological changes observed in the environment. In this study, long-term monitoring of a great European lake, Lake Geneva, provided the opportunity to assess to what extent the predicted toxicity of herbicide mixtures explains changes in the composition of the phytoplankton community, next to other classical limnological parameters such as nutrients. To reach this goal, the mixture toxicity of 14 herbicides regularly detected in the lake was calculated over several years, using the concentration addition and response addition models. A decreasing temporal gradient of toxicity was observed from 2004 to 2009. Redundancy analysis and partial redundancy analysis showed that this gradient explains a significant portion of the variation in phytoplankton community composition, even after removing the effect of all other co-variables. Moreover, some species that were found to be influenced, positively or negatively, by the decrease of toxicity in the lake over time showed similar behaviour in mesocosm studies. It can be concluded that herbicide mixture toxicity is one of the key parameters explaining phytoplankton changes in Lake Geneva. To conclude, different methods exist to predict the risk of mixtures in ecosystems, but their reliability varies depending on the underlying hypotheses. One should therefore carefully consider these hypotheses, as well as the limits of the approaches, before using the results for environmental risk management.
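
For reference, the two mixture models discussed here have standard textbook forms (the notation below is generic, not taken from the thesis). Concentration addition sums toxic units, so the mixture reaches effect level x at concentrations c_i satisfying the first relation, while response addition (independent action) multiplies the probabilities of non-effect:

    \mathrm{CA:} \quad \sum_{i=1}^{n} \frac{c_i}{\mathrm{EC}x_i} = 1

    \mathrm{RA:} \quad E_{\mathrm{mix}} = 1 - \prod_{i=1}^{n} \left( 1 - E_i(c_i) \right)

where c_i is the concentration of substance i, ECx_i the concentration at which it alone produces effect level x, and E_i(c_i) the effect fraction it causes alone.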

Abstract:

We critically discuss relaxation experiments in magnetic systems that can be characterized in terms of an energy barrier distribution, showing that proper normalization of the relaxation data is needed whenever curves corresponding to different temperatures are to be compared. We show how these normalization factors can be obtained from experimental data by using the T ln(t/t0) scaling method, without making any assumptions about the nature of the energy barrier distribution. The validity of the procedure is tested using a ferrofluid of Fe3O4 particles.
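
The scaling method rests on the standard Arrhenius picture of thermally activated relaxation: after a time t at temperature T, all barriers up to

    E_c(T, t) = k_B \, T \ln(t/t_0)

have relaxed, where t_0 is the microscopic attempt time. Relaxation curves taken at different temperatures therefore collapse onto a single master curve when plotted against T ln(t/t_0). (This is the textbook basis of the method; the paper's specific normalization procedure is not reproduced here.)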

Abstract:

The ab initio cluster model approach has been used to study the electronic structure and magnetic coupling of KCuF3 and K2CuF4 in their various ordered polytype crystal forms. Due to a cooperative Jahn-Teller distortion, these systems exhibit strong anisotropies; in particular, their magnetic properties strongly differ from those of isomorphic compounds. Hence, KCuF3 is a quasi-one-dimensional (1D) nearest-neighbor Heisenberg antiferromagnet, whereas K2CuF4 is the only ferromagnet in the K2MF4 series of compounds (M = Mn, Fe, Co, Ni, and Cu), which all behave as quasi-2D nearest-neighbor Heisenberg systems. Different ab initio techniques are used to explore the magnetic coupling in these systems. All methods, including unrestricted Hartree-Fock, are able to explain the magnetic ordering; however, quantitative agreement with experiment is reached only with a state-of-the-art configuration interaction approach. Finally, an analysis of the dependence of the magnetic coupling constant on the distortion parameters is presented.
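
The magnetic coupling constant J referred to here is conventionally defined through the Heisenberg Hamiltonian (sign conventions vary between papers; the one below is an assumption):

    \hat{H} = -J \sum_{\langle i,j \rangle} \hat{S}_i \cdot \hat{S}_j

In a two-center cluster model with one unpaired electron per Cu, J is then the singlet-triplet splitting, J = E_S - E_T, so J < 0 corresponds to antiferromagnetic and J > 0 to ferromagnetic coupling; comparing this splitting across increasingly correlated wavefunctions is what distinguishes the Hartree-Fock and configuration interaction results discussed above.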

Abstract:

Mating systems, that is, whether organisms give rise to progeny by selfing, inbreeding or outcrossing, strongly affect important ecological and evolutionary processes. Large variations in mating systems exist in fungi, allowing the study of their origin and consequences. In fungi, sexual incompatibility is determined by molecular recognition mechanisms, controlled by a single mating-type locus in most unifactorial fungi. In Basidiomycete fungi (which include rusts, smuts and mushrooms), however, a system has evolved in which incompatibility is controlled by two unlinked loci. This bifactorial system probably evolved from a unifactorial system, and multiple independent transitions back to a unifactorial system have occurred. It is still unclear what forces drove the evolution and maintenance of these contrasting inheritance patterns that determine mating compatibility. Here, we give an overview of the evolutionary factors that might have driven the evolution of bifactoriality from a unifactorial system and the transitions back to unifactoriality. Bifactoriality most likely evolved for selfing avoidance. Subsequently, multiallelism at the mating-type loci evolved through negative frequency-dependent selection, by increasing the chance of finding a compatible mate. Unifactoriality then evolved back in some species, possibly either because selfing was favoured or because it increased the chance of finding a compatible mate in species with few alleles. Owing to the existence of closely related unifactorial and bifactorial species, and to the increasing knowledge of the genetics of the different mechanisms, Basidiomycetes provide an excellent model for studying the different forces that shape breeding systems.

Abstract:

The objective of this work was to assess the effects of conventional tillage and of different direct seeding mulch-based cropping systems (DMC) on soil nematofauna characteristics. The long-term field experiment was carried out in the highlands of Madagascar on an andic Dystrustept soil. Soil samples were taken once a year during three successive years (14 to 16 years after installation of the treatments) from the 0-5 cm layer of a conventional tillage system and of three kinds of DMC: direct seeding on dead mulch of residues from a soybean-maize rotation; direct seeding of a maize-maize rotation on a living mulch of silverleaf (Desmodium uncinatum); and direct seeding of a bean (Phaseolus vulgaris)-soybean rotation on a living mulch of kikuyu grass (Pennisetum clandestinum). The samples were compared with samples from natural fallows. The soil nematofauna, characterized by the abundance of the different trophic groups and by faunal indices (MI, maturity index; EI and SI, enrichment and structure indices), allowed discrimination among the different cropping systems. The DMC treatments had a more complex soil food web than the tillage treatment: SI and MI were significantly greater in the DMC systems. Moreover, DMC with dead mulch had a lower density of free-living nematodes than DMC with living mulch, which suggested a lower microbial activity.
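
For orientation, the maturity index used here is, in Bongers' standard formulation, the weighted mean colonizer-persister (c-p) score of the free-living nematode taxa in a sample (given as background; the paper's exact computation is not reproduced):

    \mathrm{MI} = \sum_{i=1}^{n} v_i \, p_i

where v_i is the c-p value (1-5) of taxon i and p_i its proportion in the sample. A higher MI indicates a more structured, less disturbed soil food web, consistent with the larger values found under the DMC systems.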

Abstract:

The chemical and isotopic composition of fumarolic gases emitted from Nisyros Volcano, Greece, and of a single gas sample from Vesuvio, Italy, was investigated in order to determine the origin of methane (CH4) within two subduction-related magmatic-hydrothermal environments. Apparent temperatures derived from carbon isotope partitioning between CH4 and CO2, around 340 °C for Nisyros and 470 °C for Vesuvio, correlate well with aquifer temperatures as measured directly and/or inferred from compositional data using the H2O-H2-CO2-CO-CH4 geothermometer. Thermodynamic modeling reveals chemical equilibrium between CH4, CO2 and H2O, implying that carbon isotope partitioning between CO2 and CH4 in both systems is controlled by aquifer temperature. N2/3He and CH4/3He ratios of the Nisyros fumarolic gases are unusually low for subduction zone gases and correspond to those of mid-oceanic ridge environments. Accordingly, CH4 may have been primarily generated through the reduction of CO2 by H2 in the absence of any organic matter, following a Fischer-Tropsch-type reaction. However, a primary occurrence of minor amounts of thermogenic CH4 and subsequent re-equilibration with co-existing CO2 cannot be ruled out entirely. CO2/3He ratios and δ13C(CO2) values imply that the evolved CO2 either derives from a metasomatized mantle or is a mixture between two components, one outgassing from an unaltered mantle and the other released by thermal breakdown of marine carbonates. The latter may contain traces of organic matter, possibly decomposing to CH4 during thermometamorphism.
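
As background, the apparent temperatures mentioned above follow from the measured carbon isotope fractionation between the two gas species, conventionally expressed as (standard relation, not taken from the paper):

    \Delta^{13}C_{CO_2-CH_4} = \delta^{13}C_{CO_2} - \delta^{13}C_{CH_4} \approx 1000 \ln \alpha_{CO_2-CH_4}(T)

Because the fractionation factor α decreases monotonically with temperature, a smaller measured Δ implies a hotter apparent equilibration temperature; the numerical value is read from a published experimental or theoretical calibration of 1000 ln α versus T.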

Abstract:

Finnish food producers' trade with Russia has experienced profound changes since the collapse of the Soviet Union. Simultaneously, the distribution systems of foodstuffs have changed remarkably. This study sheds some light on these changes and analyses the current situation in the distribution systems of foodstuffs in Russia. In addition, the study discusses the possibilities for Finnish food producers to get more of their products onto the shelves of Russian food retail stores. Before the 1998 financial crisis, the import of foreign foodstuffs was booming in Russia due to the overvalued rouble. As a result of the financial crisis, food imports collapsed. The export of Finnish foodstuffs to Russia has been slowly recovering during the past few years, but in the most important product categories the pre-crisis levels have so far not been reached and maybe will not be reached. In certain product categories the growth has been only marginal. It seems that starting local production will become increasingly important in the future. This is further encouraged by the fact that Russian consumers favour domestic food products. Russian consumers are very price conscious and demand quality in food products. The perceived price-quality ratio is an important criterion in the purchase decision. The majority of foodstuff retail is still conducted via unorganised forms of trade (e.g. kiosks and marketplaces), but modern retail chains are developing at a fast pace in Russia. They are also expected to dominate the retail trade in foodstuffs over the unorganised forms of trade in the future. This will change the distribution systems as well. The retail chains are trying to shorten the distribution chain, similarly to what has been seen in Western countries. This, together with the strengthening of the retail chains, is likely to shrink the role of wholesalers, as the chains increasingly want to work directly with the producers. Many large retail chains are acquiring or have already acquired a distribution centre or centres in order to boost efficiency and control the flow of products. The strengthening of the retail chains also gives them power in negotiations, which the producers and distributors have to adjust to. For example, store entry fees and retail chains' own private-label products pose challenges to the food producers. In the food production sector the competition is fierce, as large Russian and foreign producers want to ensure their piece of the market. The largest producers utilise their size: they invest in big marketing campaigns and are willing to pay high entry fees to retail chains in order to secure a place on the store shelves and to build a strong brand in Russia. This complicates the situation from the viewpoint of small producers. Currently, the most popular type of distribution system among the interviewed Finnish food producers is based on a network of local distributors. There is, however, a strong consensus on the importance of starting local production in order to be a serious actor in Russia in the future. Factors that hinder the start of local production include the lack of local infrastructure and qualified staff, and the low risk tolerance of Finnish firms. Major barriers to entry in Russia are the actions of authorities, fierce competition, a fragmented market and Finnish producers' heavy production costs.
The suggested strategies for increasing market share include focusing geographically or segment-wise, introducing new products, starting local production, and cooperation between Finnish producers. Smallness was one reason why Finnish producers had to cut down their operations in Russia due to the 1998 crisis: smaller producers had fewer resources to tolerate losses during the period of crisis. Smallness is also reflected in trade negotiations with retail chains and distributors: it makes it harder to cope with the store entry fees and to differentiate from the mass of products propped up by expensive advertising. Finally, it makes it harder for Finnish producers to start or expand local production, as it is more difficult for a small producer to obtain financing and to tolerate the increased risks. Compensating for smallness might become the crucial factor determining the future success of Finnish food producers in the Russian market.

Abstract:

In two previous papers [J. Differential Equations, 228 (2006), pp. 530-579; Discrete Contin. Dyn. Syst. Ser. B, 6 (2006), pp. 1261-1300] we have developed fast algorithms for the computation of invariant tori in quasi-periodic systems and developed theorems that assess their accuracy. In this paper, we study the results of implementing these algorithms and study their performance in actual implementations. More importantly, we note that, due to the speed of the algorithms and the theoretical developments about their reliability, we can compute with confidence invariant objects close to the breakdown of their hyperbolicity properties. This allows us to identify a mechanism of loss of hyperbolicity and measure some of its quantitative regularities. We find that some systems lose hyperbolicity because the stable and unstable bundles approach each other but the Lyapunov multipliers remain away from 1. We find empirically that, close to the breakdown, the distances between the invariant bundles and the Lyapunov multipliers, which are natural measures of hyperbolicity, depend on the parameters with power laws with universal exponents. We also observe that, even if the rigorous justifications in [J. Differential Equations, 228 (2006), pp. 530-579] are developed only for hyperbolic tori, the algorithms also work for elliptic tori in Hamiltonian systems. We can continue these tori and also compute some bifurcations at resonance which may lead to the existence of hyperbolic tori with nonorientable bundles. We compute manifolds tangent to nonorientable bundles.
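
Schematically, the observed regularities take the following form (the notation is ours, not the papers'): if ε_c denotes the parameter value at breakdown, then

    \mathrm{dist}\left( E^s_\varepsilon, E^u_\varepsilon \right) \sim C \, (\varepsilon_c - \varepsilon)^{\beta} \quad \text{as } \varepsilon \to \varepsilon_c^{-}

with the distance between the stable and unstable invariant bundles vanishing at breakdown while the Lyapunov multipliers stay bounded away from 1; the exponent β is what is empirically found to be universal.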

Abstract:

Lying at the core of statistical physics is the need to reduce the number of degrees of freedom in a system. Coarse-graining is a frequently used procedure to bridge molecular modeling with experiments. In equilibrium systems, this task can be readily performed; however, in systems outside equilibrium, a possible lack of equilibration of the eliminated degrees of freedom may lead to incomplete or even misleading descriptions. Here, we present some examples showing how an improper coarse-graining procedure may result in linear approaches to nonlinear processes, miscalculations of activation rates and violations of the fluctuation-dissipation theorem.
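
The fluctuation-dissipation theorem mentioned here relates, at equilibrium, the linear response of an observable to its spontaneous fluctuations; in its classical time-domain form,

    R(t) = -\frac{1}{k_B T} \frac{\mathrm{d}C(t)}{\mathrm{d}t}, \qquad t > 0

where R(t) is the response function and C(t) the equilibrium autocorrelation of the observable. A coarse-graining that eliminates non-equilibrated degrees of freedom can produce an effective dynamics in which the measured R and C no longer satisfy this relation, which is the kind of violation the abstract refers to.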

Abstract:

The use of real-time operating systems in embedded systems is growing all the time. Embedded computers are used in more and more applications, such as the control of electric drives. The control of electric drives is nowadays usually handled by a fast digital signal processor (DSP), which makes programming and updating slow and difficult because of the low-level assembly language used. One solution is to use general-purpose processors together with real-time operating systems. Commercial real-time operating systems are expensive, and obtaining the source code for one's own use can even be impossible. Linux is a non-commercial open-source operating system, so it is free to use and can be modified freely. Several extensions are available that turn Linux into a real-time operating system, offering either hard or soft real-time behaviour. Ready-made development environments exist for Linux, but they need improvement before they can be used on a large scale in industry. Real-time Linux is not yet suitable for fast control loops (<100 ms) because its speed is not yet sufficient, but performance will grow as processors develop. Linux is well suited as an interface between the fast control level and the user, and for slower control tasks.
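
As an illustration of the kind of control-loop structure involved, here is a minimal sketch using today's standard POSIX real-time interfaces; this is an assumption on our part, not the environment evaluated in the work above, and control_step() is a hypothetical placeholder for the drive-control code:

    /* Periodic real-time control task on Linux; assumes a kernel with
     * good real-time behaviour (e.g. PREEMPT_RT) for hard deadlines. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <time.h>

    #define PERIOD_NS 1000000L           /* 1 ms control period (assumed) */

    static void control_step(void)
    {
        /* hypothetical: read sensors, compute control law, update outputs */
    }

    int main(void)
    {
        /* Run under the SCHED_FIFO real-time policy with high priority. */
        struct sched_param sp = { .sched_priority = 80 };
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
            perror("sched_setscheduler (needs root/CAP_SYS_NICE)");
            return EXIT_FAILURE;
        }
        /* Lock all pages in RAM so page faults cannot add latency. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
            perror("mlockall");
            return EXIT_FAILURE;
        }
        struct timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);
        for (;;) {
            control_step();
            /* Advance the absolute wake-up time by exactly one period,
             * so timing errors do not accumulate. */
            next.tv_nsec += PERIOD_NS;
            while (next.tv_nsec >= 1000000000L) {
                next.tv_nsec -= 1000000000L;
                next.tv_sec += 1;
            }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        }
    }

The achievable loop rate in such a setup is bounded by the wake-up jitter of clock_nanosleep(), which is precisely what hard real-time extensions of Linux aim to bound.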

Abstract:

The present work is part of a larger project whose purpose is to qualify a Flash memory for automotive applications using a standardized test and measurement flow. High memory reliability and data retention are the most critical parameters in this application. The current work covers the functional tests and the data retention test. The purpose of the data retention test is to obtain the data retention parameters of the designed memory, i.e. the maximum time of information storage under specified conditions without critical charge leakage. For this purpose, the charge leakage from the cells, which results in a decrease of the cell threshold voltage, was measured after long-time high-temperature treatment at several temperatures. The amount of charge lost at each temperature was used to calculate the Arrhenius constant and the activation energy of the discharge process. With these data, the discharge of the cells at different temperatures over long times can be predicted, and the probability of data loss after years of operation can be calculated. The memory chips investigated in this work were 0.035 μm CMOS Flash memory test chips, designed for further use in systems-on-chip for automotive electronics.
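
The extrapolation described here follows the usual Arrhenius treatment of bake data (stated as background in standard form, not taken from the thesis): the charge-loss rate at absolute temperature T is modeled as

    k(T) = A \, e^{-E_a / (k_B T)}

so fitting ln k against 1/T over the bake temperatures yields the activation energy E_a and the prefactor A. The retention time at a use temperature T_use then follows from the stress-test result via the acceleration factor

    AF = \exp\left[ \frac{E_a}{k_B} \left( \frac{1}{T_{\mathrm{use}}} - \frac{1}{T_{\mathrm{stress}}} \right) \right]

with AF > 1 whenever T_use < T_stress, which is what allows a one-week bake to predict retention over years.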