930 results for Efficient Production Scale
Abstract:
Palinspastic reconstructions offer an ideal framework for geological, geographical, oceanographic and climatological studies. As historians of the Earth, "reconstructers" try to decipher its past. Knowing that the continents move, geologists try to retrieve their distribution through the ages.
If Wegener's view of continental motion was revolutionary at the beginning of the 20th century, we have known since the early 1960s that continents do not drift aimlessly in the oceanic realm but belong to a larger ensemble combining oceanic and continental crust: the tectonic plates. Unfortunately, mainly for technical and historical reasons, this idea still does not receive a sufficient echo in the reconstruction community. However, we are convinced that, by applying specific methods and principles, we can escape the traditional "Wegenerian" point of view and, at last, reach real plate tectonics. The main aim of this study is to defend this point of view by exposing, in all necessary detail, our methods and tools. Starting from the paleomagnetic and paleogeographic data classically used in reconstruction studies, we developed a modern methodology that places the plates and their kinematics at the centre of the problem. Using assemblies of continents (referred to as "key assemblies") as anchors distributed across the scope of our study (ranging from Eocene to Cambrian time), we develop geodynamic scenarios leading from one assembly to the next, from the past to the present. In between, lithospheric plates are progressively reconstructed by adding oceanic material (symbolized by synthetic isochrons) to, or removing it from, the major continents. Except during collisions, plates are moved as single rigid entities. The only evolving elements are the plate boundaries, which are preserved, follow a consistent geodynamic evolution through time and always form an interconnected network in space. This "dynamic plate boundaries" approach integrates plate buoyancy factors, ocean spreading rates, subsidence patterns, stratigraphic and paleobiogeographic data, as well as major tectonic and magmatic events. It offers good control on plate kinematics and provides strong constraints for the model.
This multi-source approach requires efficient data management. Prior to this study, the critical mass of necessary data had become a barely surmountable obstacle. GIS (Geographic Information Systems) and geodatabases are software tools specifically devoted to storing, analyzing and managing spatially referenced data and their attributes. By developing the PaleoDyn database in the ArcGIS software, we converted the mass of scattered data offered by the geological record into valuable geodynamic information that is easily accessible for creating reconstructions. At the same time, by programming dedicated tools, we both facilitated the reconstruction work (task automation) and enhanced the model, greatly increasing the kinematic control of plate motions thanks to plate velocity models. Based on 340 newly defined terranes, we developed a revised set of 35 reconstructions, each associated with its own velocity model. Using this unique dataset we are now able to tackle major issues of modern geology, such as global sea-level variations and climate change. We started by studying one of the major unsolved issues of modern plate tectonics: the driving mechanism of plate motions. We observed that, throughout the Earth's history, plate rotation poles (describing plate motions across the Earth's surface) tend to follow a roughly linear distribution along a band running from the northern Pacific through northern South America, the central Atlantic, northern Africa and central Asia up to Japan. In essence, this means that plates tend to escape this median plane. In the absence of an unidentified methodological bias, we interpret this as a potential secular influence of the Moon on plate motions. The oceanic realms are the cornerstone of our model, and we took particular care to reconstruct them in detail. In this model, the oceanic crust is preserved from one reconstruction to the next.
The crustal material is symbolised by synthetic isochrons of known age. We also reconstruct the margins (active or passive), mid-ocean ridges and intra-oceanic subduction zones. Using this detailed oceanic dataset, we developed unique 3-D bathymetric models offering better precision than all previously existing ones.
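Plate motions of the kind described above are conventionally expressed as rotations about Euler poles on the sphere. As a minimal, self-contained sketch (not code from the study), moving a point about a given pole can be done with Rodrigues' rotation formula:

```python
import math

def to_xyz(lat, lon):
    """Convert geographic coordinates (degrees) to a unit vector."""
    la, lo = math.radians(lat), math.radians(lon)
    return (math.cos(la) * math.cos(lo), math.cos(la) * math.sin(lo), math.sin(la))

def to_latlon(v):
    """Convert a unit vector back to geographic coordinates (degrees)."""
    x, y, z = v
    return (math.degrees(math.asin(z)), math.degrees(math.atan2(y, x)))

def rotate(point, pole, angle_deg):
    """Rotate `point` (lat, lon) about the Euler `pole` (lat, lon) by
    `angle_deg`, using Rodrigues' rotation formula on unit vectors."""
    p, k = to_xyz(*point), to_xyz(*pole)
    a = math.radians(angle_deg)
    # cross product k x p and dot product k . p
    kxp = (k[1]*p[2] - k[2]*p[1], k[2]*p[0] - k[0]*p[2], k[0]*p[1] - k[1]*p[0])
    kdp = sum(ki * pi for ki, pi in zip(k, p))
    v = tuple(p[i]*math.cos(a) + kxp[i]*math.sin(a) + k[i]*kdp*(1 - math.cos(a))
              for i in range(3))
    return to_latlon(v)

# Rotating a point on the equator 90 degrees about the north pole
# shifts its longitude by 90 degrees while its latitude stays at 0.
print(rotate((0.0, 0.0), (90.0, 0.0), 90.0))
```

A full reconstruction chains many such finite rotations per plate, but each individual stage reduces to exactly this operation.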
Abstract:
Finland has large forest fuel resources, but the use of forest fuels for energy production has so far been low, except for small-scale use in heating. According to national action plans and programmes for promoting wood energy, the utilization of these resources will be multiplied over the next few years. The most significant part of this growth will be based on forest fuels produced from the logging residues of regeneration fellings and used in industrial and municipal power and heating plants. The availability of logging residues was analyzed by means of resource and demand approaches in order to identify the regions most suitable for increasing forest fuel usage. The analysis included availability and supply-cost comparisons between power plant sites, least-cost resource allocation, and a predefined power plant structure under demand and supply constraints. Spatial analyses of worksite factors and regional geographies were carried out in a GIS model environment using geoprocessing and cartographic modeling tools. According to the results, the cost competitiveness of forest fuel supply must be improved if the stated objectives are to be achieved in the near future. The availability and supply costs of forest fuels varied spatially and were very sensitive to worksite factors and transport distances. According to the site-specific analysis, the supply potential between different locations can vary severalfold. However, owing to the technical and economic constraints of fuel supply and the dense power plant infrastructure, the supply potential is limited at plant level. Potential and supply-cost calculations therefore depend on site-specific conditions, where the regional characteristics of resources and infrastructure should be taken into account, for example by using a GIS modeling approach such as the one constructed in this study.
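The sensitivity of supply cost to transport distance can be illustrated with a toy delivered-cost model. All EUR/MWh figures below are illustrative placeholders, not results from the study:

```python
def supply_cost(distance_km, harvest=4.0, chip=3.5, load=1.0, rate=0.08):
    """Delivered cost of logging-residue fuel in EUR/MWh: fixed roadside
    costs (harvest, chipping, loading) plus a per-km transport rate.
    All default figures are hypothetical placeholders."""
    return harvest + chip + load + rate * distance_km

# Delivered cost rises linearly with distance, so the ranking of candidate
# plant sites depends on how close they sit to the residue resource.
for d in (20, 60, 120):
    print(f"{d:>4} km -> {supply_cost(d):.2f} EUR/MWh")
```

In a GIS setting, the same calculation is run per supply cell and per plant site, and least-cost allocation assigns each cell to its cheapest plant.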
Abstract:
Combustion of wood is increasing because of the need to decrease emissions of carbon dioxide and the amount of waste going to landfills. Wood-based fuels are often scattered over a large area. Transport distances must be kept short enough to avoid excessive costs, so the heating and power plants that use wood fuels are often rather small. The combustion technologies of small units have to be developed to achieve efficient and environmentally friendly energy production. Furnaces using packed-bed combustion or gasification techniques are often the most economical in small-scale energy production. The propagation rate of the ignition front affects the stability, heat release rate and emissions of packed-bed combustion. This work studies ignition front propagation against the airflow in packed beds of wood fuels. The research was carried out mainly experimentally; theoretical aspects were considered in drawing conclusions from the experimental results. The effects of airflow rate, fuel moisture content, particle size, shape and density, and bed porosity on the propagation rate of the ignition front were studied. The experiments were carried out in a pot furnace, mainly with real wood fuels that are commonly burned in energy production: thin wood chips, sawdust, shavings, wood chips and pellets of different sizes. A few mixtures of the above were also tested. An increase in the moisture content of the fuel decreases the propagation rate of the ignition front and narrows the range of possible airflow rates, because of the energy needed to evaporate the water and the dilution of the volatile gases by the evaporated steam. An increase in the airflow rate increases the ignition rate until a maximum propagation rate is reached, after which it decreases. The maximum flame propagation rate is not always reached under stoichiometric combustion conditions.
An increase in particle size and density shifts the optimum airflow rate towards fuel-lean conditions. Mixing small and large particles is often advantageous, because small particles make it possible to reach the maximum ignition rate under fuel-rich conditions, while large particles widen the range of possible airflow rates. A correlation was found for the maximum rate of ignition front propagation in different wood fuels. According to the correlation, the maximum ignition mass flux increases when the sphericity of the particles and the porosity of the bed increase and the moisture content of the fuel decreases. Another fit was found between sphericity and porosity: an increase in sphericity decreases the porosity of the bed. The reasons for the observed results are discussed.
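The two bed quantities named in the correlation have standard definitions, which can be sketched as follows (the numerical densities are illustrative, not data from the experiments):

```python
import math

def sphericity(volume, surface_area):
    """Wadell sphericity: surface area of the sphere of equal volume
    divided by the particle's actual surface area (1.0 for a sphere)."""
    return math.pi ** (1/3) * (6 * volume) ** (2/3) / surface_area

def bed_porosity(bulk_density, particle_density):
    """Void fraction of a packed bed from bulk and particle densities."""
    return 1.0 - bulk_density / particle_density

# A unit cube (V = 1, A = 6) has sphericity of about 0.806.
print(round(sphericity(1.0, 6.0), 3))
# Hypothetical wood-chip bed: bulk 180 kg/m3, particle 450 kg/m3.
print(bed_porosity(180.0, 450.0))
```

The reported correlation then takes quantities like these, together with moisture content, as inputs to the maximum ignition mass flux.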
Abstract:
This thesis surveys the artificial drying methods currently in use for chopped firewood and wood chips. The energy consumption and costs of the methods are assessed, and the factors to be considered in dryer design are reviewed. Alongside the work, an Excel spreadsheet was prepared for evaluating the profitability of heat entrepreneurship with the whole production chain taken into account. Finally, the use of three different types of firewood dryer is examined, and their effects on a firewood entrepreneur's finances are estimated with the spreadsheet. The most common artificial drying method for wood fuels is cold-air drying. Owing to its weather dependence and often uneven drying quality, it is suitable only for small-scale, part-time fuel production. Supplementary heating improves the drying capacity of the air, making drying faster, final moisture contents lower and the annual operating time longer. The choice of heating solution depends on the desired annual operating time and production volume of the dryer. A dryer using high temperatures of 70–90 °C is best suited to professional, year-round firewood production; in a high-temperature dryer it is important to ensure sufficient insulation and controlled ventilation. With large firewood production volumes, transport costs become more significant and the need for marketing grows, and an efficient drying method can be exploited in advertising.
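The evaporation energy underlying these drying costs can be estimated from the moisture contents alone. A minimal sketch, assuming a round latent-heat figure and ignoring heat losses (real dryers consume more):

```python
LATENT_HEAT = 2.44  # MJ per kg of water evaporated (approximation near ambient)

def water_to_remove(mass_wet, mc_initial, mc_final):
    """kg of water to evaporate when drying `mass_wet` kg of wood from
    `mc_initial` to `mc_final` moisture content (wet basis, 0..1)."""
    dry = mass_wet * (1 - mc_initial)      # dry matter is conserved
    final_mass = dry / (1 - mc_final)      # wet mass at the target moisture
    return mass_wet - final_mass

def drying_energy_mj(mass_wet, mc_initial, mc_final):
    """Lower-bound evaporation energy in MJ (no losses, no sensible heat)."""
    return water_to_remove(mass_wet, mc_initial, mc_final) * LATENT_HEAT

# Drying 1000 kg of fresh firewood from 50 % to 20 % moisture:
print(round(water_to_remove(1000, 0.50, 0.20), 1), "kg water")
print(round(drying_energy_mj(1000, 0.50, 0.20), 0), "MJ")
```

This lower bound is what supplementary heating must supply at minimum, which is why the heating solution dominates the operating cost of high-temperature dryers.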
Abstract:
The aim of this thesis was to synchronize the general material flow with production. The goal was to streamline the material flow so that the amount of material, as well as unnecessary work phases, in the production area would decrease. The objective was to reduce the number of material pallets on the floor and to improve the On-Time-Delivery (OTD) percentage of material deliveries. The theoretical part deals with the key elements of the modern supply chain, focusing on material flow and its efficient management. It also presents the principles of the synchronous management model and the synchronization of material flow with production. The empirical part describes the current state of the company's material management, its central problems and a proposal for a new operating model. The thesis presents two pilots whose results confirmed the functionality of the new streamlined material flow and replenishment model. The results show how the new material replenishment model reduces the number of material pallets and the amount of non-value-adding work in the production area.
Abstract:
The purpose of this Master's thesis was to find cost-effective ways to reduce extractives in birch kraft pulp. Extractives can cause problems by forming deposits on process equipment. The deposits cause blockages and measurement disturbances, and when they detach they also degrade pulp quality. If they end up in the final product, they can additionally cause odour and taste defects, which are of particular importance, for example, in the manufacture of food-grade board. This work was carried out at Stora Enso's pulp mill, Enocell Oy, in Uimaharju. The theoretical part discusses the composition of extractives and the problems they cause in pulp and paper mills. In addition, physical and chemical means of reducing birch extractives reported in earlier mill trials were compiled. The areas examined were wood handling, cooking, washing and bleaching. In the experimental part, preliminary tests were performed at laboratory and mill scale in order to obtain practical information for the final mill-scale trial. The laboratory tests examined, among other things, the effects of cooking kappa number, additives and rosin soap on birch extractives; laboratory tests of the acid (A) and peracetic acid (Paa) stages were also carried out. At mill scale, the effects of, among other things, cooking kappa number, washing temperature, the A stage, and the peroxide and Paa stages of bleaching on birch extractives were examined. The extractive-removal efficiency of the different methods was compared both quantitatively and in monetary terms. Measured by extractive-removal efficiency, the reference stage was most effective at the end of washing and at the beginning of bleaching: the extractive-removal reductions were about 30 % at the end of washing and 40 % at the beginning of bleaching. The peroxide stage was most effective when used at the end of bleaching, with a reduction of about 40 %. Measured by cost-effectiveness, the A stage combined with the peroxide stage proved the most effective; the savings compared with the reference period were about 0.3 €/ADt. In addition, this combination proved to be a good way of keeping the extractive level below the maximum limit on fibre line 2 while reinforcement pulp was simultaneously being produced on fibre line 1.
Abstract:
Engineered nanomaterials (ENMs) exhibit special physicochemical properties and are thus finding their way into an increasing number of industries, enabling products with improved properties. Their increased use brings a greater likelihood of exposure to the nanoparticles (NPs) that can be released during the life cycle of nano-enabled products. The field of nanotoxicology emerged as a consequence of the development of these novel materials, and it has gained ever more attention owing to the urgent need to gather information on exposure to them and to understand the potential hazards they pose. However, current studies on nanotoxicity tend to focus on pristine ENMs and use those toxicity results to generalize risk assessments of human exposure to NPs. ENMs released into the environment can interact with their surroundings, change their characteristics and exhibit toxic effects distinct from those of pristine ENMs. Furthermore, NPs' large surface areas provide extra-large potential interfaces, promoting significant interactions between NPs and other co-existing species. In such processes, other species can attach to a NP's surface, modifying its surface functionality and the toxicity it normally exhibits. One particular occupational health scenario involves NPs and low-volatility organic compounds (LVOC), a common type of pollutant present around many potential sources of NPs. LVOC can coat a NP's surface and then dominate its toxicity. One important mechanism in nanotoxicology is the creation of reactive oxygen species (ROS) on a NP's surface, and LVOC can modify the production of these ROS. In summary, nanotoxicity research should not be limited to the toxicity of pristine NPs, nor should their toxicity be used to evaluate the health effects of exposure to environmental NPs. Instead, the interactions that NPs have with other environmental species should also be considered and researched.
The potential health effects of exposure to NPs should be derived from these real-world NPs, with their environmentally modified characteristics and distinct toxicity. Failure to address toxicity results suitably could lead to inappropriate treatment of nano-release, harm the environment and public health, and put a blemish on the development of sustainable nanotechnologies as a whole. The main objective of this thesis is to demonstrate a process for coating NP surfaces with LVOC using a well-controlled laboratory design and to explore the consequences for particle toxicity with regard to these NPs' capacity to generate ROS. The dynamic coating system developed yielded stable and replicable coating performance, simulating an important realistic scenario. Clear changes in the size distribution of airborne NPs were observed with a scanning mobility particle sizer, confirmed by both liquid nanotracking analysis and transmission electron microscopy (TEM) imaging, and verified to arise from the LVOC coating. Coating thicknesses corresponded to the amount of coating material used and were controlled via the parameters of the LVOC generator. The capacity of pristine silver NPs (Ag NPs) to generate ROS was reduced when they were given a passive coating of inert paraffin: the coating blocked the reactive zones on the particle surfaces. In contrast, a coating of active reduced anthraquinone contributed to redox reactions and generated ROS itself, even though ROS generation by oxidation of the Ag NPs themselves was quenched. Further objectives of this thesis included the development of ROS methodology and the analysis of ROS case studies. Since the capacity of NPs to create ROS is an important effect in nanotoxicity, we attempted to refine and standardize the use of 2′,7′-dichlorodihydrofluorescin (DCFH) as a chemical tailored to characterizing NPs' capacity for ROS generation.
Previous studies had reported a wide variety of results, owing to a number of insufficiently controlled factors. We therefore cross-compared chemicals and concentrations, explored ways of dispersing NP samples in liquid solutions, identified sources of contradiction in the literature and investigated ways of reducing artifactual results. The most robust results were obtained by sonicating an optimal amount of NP sample in a DCFH-HRP solution made of 5 µM DCFH and 0.5 unit/ml horseradish peroxidase (HRP). Our findings showed that the major reasons for the previously conflicting results were the different experimental approaches used and the potential artifacts appearing at high sample concentrations. Applying our advanced DCFH protocol together with other physicochemical characterizations and biological analyses, we conducted several case studies characterizing aerosols and NP samples. Exposure to aged brake-wear dust poses a risk of deleterious health effects in occupational scenarios. We performed microscopy and elemental analyses, as well as ROS measurements with acellular and cellular DCFH assays. TEM images revealed the samples to be heterogeneous mixtures with few particles at the nanoscale. Metallic and non-metallic elements were identified, primarily iron, carbon and oxygen. Moderate amounts of ROS were detected in the cell-free fluorescence tests; however, exposed cells were not dramatically activated. Besides their highly aged state due to oxidation, the reason aged brake-wear samples caused less oxidative stress than fresh brake-wear samples may be their larger size and thus smaller relative reactive surface area. Other case studies, involving welding fumes and differently charged NPs, confirmed the performance of our DCFH assay and linked ROS generation to various characteristics, especially the surface functionality of the samples.
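Working solutions at such concentrations are prepared by simple serial dilution. As a sketch of the C1V1 = C2V2 arithmetic (the 1 mM stock concentration and 10 mL final volume are hypothetical, not values from the thesis):

```python
def dilution_volume(c_stock, c_target, v_final):
    """Volume of stock needed so that c_stock * v = c_target * v_final
    (the C1V1 = C2V2 relation); units must simply be consistent."""
    return c_target * v_final / c_stock

# Hypothetical prep: a 5 uM DCFH working solution from a 1 mM (1000 uM)
# stock, 10 mL final volume.
v = dilution_volume(1000.0, 5.0, 10.0)   # concentrations in uM, volumes in mL
print(v, "mL stock +", 10.0 - v, "mL buffer")
```

Controlling this step tightly matters here because, as noted above, sample and reagent concentrations were among the factors driving the conflicting literature results.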
Abstract:
Studies in the field of this work aim to substitute pneumatic conveyors for industrial mechanical conveyors, in order to overcome disadvantages in regulating solids flow and risks posed to production and health. The experimental part of this work examines how granular material properties, fluidizing airflow rate, equipment geometry and the pressures along the pipes affect the mass flow rate through the system. The results are compared with those obtained from previous experiments conducted with alumina. The experiments were carried out with a pilot-scale downer-riser system at Outotec Research Center Frankfurt. The granular materials used in this work are referred to as sand, ilmenite, iron ore 1 and iron ore 2.
Resumo:
Worldwide, about half the adult population is considered overweight, as defined by a body mass index (BMI, calculated as body weight divided by height squared) in excess of 25 kg·m⁻². Of these individuals, half are clinically obese (with a BMI in excess of 30), and these numbers are still increasing, notably in developing countries such as those of the Middle East region. Obesity is a disorder characterised by an increased mass of adipose tissue (excessive fat accumulation) resulting from a systemic imbalance between food intake and energy expenditure. Although factors such as family history, sedentary lifestyle, urbanisation, income and family diet patterns determine obesity prevalence, the main underlying causes are poor knowledge about food choice and lack of physical activity. Current obesity treatments include dietary restriction, pharmacological interventions and, ultimately, bariatric surgery. The beneficial effects of physical activity on weight loss, through increased energy expenditure and appetite modulation, are also firmly established. Another viable option for inducing a negative energy balance is to incorporate hypoxia per se, or combine it with exercise, in an individual's daily schedule. This article presents recent evidence suggesting that combining hypoxic exposure and exercise training might provide a cost-effective strategy for reducing body weight and improving cardio-metabolic health in obese individuals. The efficacy of this approach is further reinforced by epidemiological studies using large-scale databases, which show a negative relationship between altitude of habitation and obesity. In the United States, for instance, obesity prevalence is inversely associated with altitude of residence and urbanisation, after adjusting for temperature, diet, physical activity, smoking and demographic factors.
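The BMI ratio quoted above is straightforward to compute. A minimal sketch, with the classification thresholds (25 and 30 kg·m⁻²) taken from the abstract:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight divided by height squared (kg·m⁻²)."""
    return weight_kg / height_m ** 2

def classify(b: float) -> str:
    """Thresholds as stated above: >=25 overweight, >=30 clinically obese."""
    if b >= 30.0:
        return "obese"
    if b >= 25.0:
        return "overweight"
    return "not overweight"

# e.g. 80 kg at 1.75 m gives a BMI of about 26.1, i.e. overweight.
```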
Resumo:
Traditionally, fossil fuels have been the major sources of modern energy production. However, prices of these energy sources have been constantly increasing. The utilisation of local biomass resources for energy production can substitute a significant part of the required energy demand in different energy sectors. The introduction of biomass use can most easily be started in the forest industry, as it possesses biomass in large volumes. The forest industry energy sector has the highest potential for rapid bioenergy development in North-West Russia. The question of rational and effective use of forest resources is therefore as important today as the utilisation of forestry by-products. This work describes and analyses the opportunities for utilising biomass, mainly in the form of wood by-products, for energy production processes in general, as well as under northwest Russian forest-industry conditions. The study also covers basic forest-industry processes and technologies, so that the reader can become familiar with the specific character of biomass utilisation. The work gives a comprehensive view of the northwest forest industry from the biomass utilisation point of view. By presenting the existing large-scale sawmills and pulp and paper mills, the work provides information for evaluating the future development of CHP investments in the northwest Russian forest industry.
Resumo:
An improved defoamer dosage procedure and a more efficient dosing point in the approach system were studied in this thesis, together with their influence on paper machine wet-end operations. The improved defoamer dosing procedure was examined at UPM-Kymmene Tervasaari PM8. The literature survey covers air content at the paper machine and the methods for controlling it, as well as the influence of dissolved gases and entrained air in the papermaking furnish. Feeding methods – a TrumpJet chemical mixer and traditional feeding devices – were reviewed, and the functioning of defoamers was studied. The influence of defoamer use was estimated on the basis of selected wet-end operations. In the experimental part, defoamer mixing with a traditional feeding method and two improved mixing stages were compared on the basis of the air content profiles in PM8's approach system. The reference dosage procedure was PM8's old dosing system. The first dosage procedure in the comparison involved two TrumpJet chemical mixers installed on the bottom wire trays; the second involved the improvement brought by a third TrumpJet chemical mixer installed on the top wire tray. This second comparison of the efficient defoamer feeding concept was made at a higher production speed of PM8, at which the air content control situation was also studied. In addition, the connection between the defoamer and air content was observed and a mill-scale system was studied. The economic benefits of the new dosing procedure were also reviewed. Air content profiles of the short circulation were measured in the reference situation and at the two comparison points of the study. These air content measurements proved that the main gas load is introduced into PM8's paper furnish from the white water tray. Thick stock air content was not essential when the air volume flow was considered. The improved defoamer dosing procedure made lower dosage amounts possible.
Compared with the traditional feeding system, the new defoamer feeding concept brought only a few direct improvements to the wet-end operations and the produced paper itself. The lower defoamer demand was, however, observed to have a positive influence on hydrophobic sizing and paper defects. The surfaces of the white water tanks and the operation of the pumps were assessed on the basis of the density variations of the suspension. The temperature in the white water silo was found to have a significant influence on the air content measured in the first centrifugal cleaning stage.
Resumo:
The study focuses on the opportunity to improve power generation from black liquor at kraft pulp mills. The first part of the paper gives an overview of a traditional recovery system and its development, and indicates the inherent drawbacks that motivate the search for more efficient methods of black liquor treatment. The second part is devoted to the investigation of black liquor gasification as a technology able to increase electric energy generation at pulp mills. In addition, the two most promising gasification processes are described and compared with each other. The paper is based on a literature review and on interviews with specialists in this field. The findings showed that while the modern recovery system meets the demands of pulp mills, pressurised oxygen-blown black liquor gasification has good potential as an alternative technology for increasing the power output from black liquor.
Resumo:
Coal, natural gas and petroleum-based liquid fuels are still the most widely used energy sources in modern society. The current scenario contrasts with the petroleum shortage foreseen at the beginning of the 21st century, when the concept of "energy security" emerged as an urgent agenda for ensuring a good balance between energy supply and demand. Going well beyond protecting refineries and oil pipelines from terrorist attacks, these issues soon developed into a portfolio of measures related to process sustainability, involving at least three fundamental dimensions: (a) the need for technological breakthroughs to improve energy production worldwide; (b) the improvement of energy efficiency in all sectors of modern society; and (c) raising the social perception that education is key to a better use of our energy resources. Together with these technological, economic and social issues, "energy security" is also strongly influenced by environmental issues involving greenhouse gas emissions, loss of biodiversity in environmentally sensitive areas, pollution and poor solid waste management. For these and other reasons, the implementation of more sustainable practices in currently available industrial facilities, and the search for alternative energy sources that could partly replace fossil fuels, became a major priority throughout the world. Regarding fossil fuels, the main technological bottlenecks are related to the exploitation of less accessible petroleum resources such as those in the pre-salt layer, ranging from the proper characterisation of these deep-water oil reservoirs, the development of lighter and more efficient equipment for both exploration and exploitation, and the optimisation of drilling techniques, to the achievement of further improvements in production yields and the establishment of specialised training programmes for technical staff.
The production of natural gas from shale is also emerging in several countries, but its large-scale production faces several problems, ranging from the unavoidable environmental impact of shale mining to the harmful consequences of its large-scale exploitation in the past. The large-scale use of coal poses similar environmental problems, aggravated by difficulties in its proper characterisation. Moreover, the mitigation of the harmful gases and particulate matter released by combustion still depends on the development of new gas-cleaning technologies, including more efficient catalysts to improve its emission profile. On the other hand, biofuels are still struggling to fulfil their role in reducing our high dependence on fossil fuels. Fatty acid alkyl esters (biodiesel) from vegetable oils, and ethanol from cane sucrose and corn starch, are mature technologies whose market share is partially limited by the availability of their raw materials. For this reason, there has been a great effort to develop "second-generation" technologies to produce methanol, ethanol, butanol, biodiesel, biogas (methane), bio-oils, syngas and synthetic fuels from lower-grade renewable feedstocks such as lignocellulosic materials, whose consumption would not interfere with the rather sensitive issues of food security. Advanced fermentation processes are envisaged as "third-generation" technologies; these are primarily linked to the use of algal feedstocks, as well as other organisms that could produce biofuels or simply provide microbial biomass for the processes listed above. Owing to the complexity and cost of their production chain, "third-generation" technologies usually aim at high-value-added biofuels such as biojet fuel, biohydrogen and hydrocarbons with a fuel performance similar to diesel or gasoline, situations in which the use of genetically modified organisms is usually required.
In general, the main challenges in this field can be summarised as follows: (a) the need to prospect alternative sources of biomass that are not linked to the food chain; (b) the intensive use of green chemistry principles in our current industrial activities; (c) the development of mature technologies for the production of second- and third-generation biofuels; (d) the development of safe bioprocesses based on environmentally benign microorganisms; (e) the scale-up of potential technologies to a suitable demonstration scale; and (f) a full understanding of the technological and environmental implications of the food vs. fuel debate. On this basis, the main objective of this article is to stimulate discussion and support decision-making on "energy security" issues and their challenges for modern society, in such a way as to encourage the participation of the Brazilian chemistry community in designing a road map for a safer, sustainable and prosperous future for our nation.
Resumo:
The effect of different heterogeneous catalysts on the microwave-assisted transesterification of sunflower oil for the production of methyl biodiesel in a monomode microwave reactor is described. The experiments were carried out at 70 ºC with a 16:1 methanol:sunflower oil molar ratio and different heterogeneous basic and acidic catalysts. The results showed that the microwave-heated reactions occur up to four times faster than those carried out with conventional heating. The reactions were performed with 24 catalysts; pure calcium oxide (CaO) and potassium carbonate, either pure or supported on alumina (K2CO3/Al2O3), were the most efficient catalysts.
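As an illustration of the 16:1 methanol-to-oil molar ratio used above, the sketch below converts an oil mass into the corresponding methanol mass. The molar mass of sunflower oil is an assumption (approximated as triolein), not a value from the study:

```python
M_OIL = 885.4   # g/mol, assumed average molar mass of sunflower oil (triolein)
M_MEOH = 32.04  # g/mol, methanol

def methanol_mass(oil_mass_g: float, molar_ratio: float = 16.0) -> float:
    """Mass of methanol (g) needed for a given oil mass at the stated molar ratio."""
    oil_mol = oil_mass_g / M_OIL
    return oil_mol * molar_ratio * M_MEOH

# e.g. 100 g of oil requires roughly 58 g of methanol at a 16:1 ratio.
```

A large methanol excess such as this is common in transesterification to drive the equilibrium towards the ester products.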