999 results for wet test chamber


Relevance:

20.00%

Publisher:

Abstract:

The new EPR reactor concept is designed to cope with accidents in which the reactor core melts and the melt breaches the pressure vessel. An area inside the containment has been designed where the melt is passively collected, retained and cooled. In this area, a so-called core catcher is assembled from cast iron elements and flooded with water. The decay heat produced by the corium is transferred to the water, from which it is removed via the containment residual heat removal system. Most of the heat is removed from the corium to the water above it, but to enhance heat transfer, water-filled cooling channels have also been placed below the core catcher. To verify the operation of the core catcher, the Volley test facility has been built at Lappeenranta University of Technology. The facility consists of two full-scale cooling channels made of cast iron, and the decay heat produced by the corium is simulated with electrical resistance heaters. This thesis describes how the simulations were performed and compares the obtained values with the measured results. The work focuses on the theory and mechanisms of heat transfer from the core catcher to the cooling channels. Three different correlations for pool-boiling heat transfer coefficients are presented; these correlations are particularly suitable for cases where only a few measured parameters are known. The second part of the work is the simulation of the Volley 04 experiments. The simulation approach was first validated by comparing the results with those Volley 04 and 05 experiments that could be run to steady state and in which the coolant behaviour in the cooling channel was also recorded on video. The results of these simulations agree well with the measurements. At higher heating powers, water hammers occurred in the experiments and broke the windows used for video recording; for this reason, in some of the Volley 04 experiments the windows were covered with metal plates. Some experiments had to be interrupted because of large thermal stresses in the facility. Simulations of such tests are not straightforward: there is no visual observation of the water level, and there is no exact information on the steady-state coolant temperatures, although some assumptions can be made on the basis of the Volley 05 experiments run with the same parameters. The measurements from the Volley 04 and 05 experiments that were recorded on video and could be run to steady state gave temperature values very similar to the simulations. Extrapolating the interrupted experiments to steady state did not succeed very well: the experiments had to be interrupted so long before thermal-hydraulic equilibrium that the steady-state boundary conditions could not be predicted, and in the absence of video recording no additional information on the water level was available. From these results, mainly order-of-magnitude estimates can be given for the temperatures at the measurement points. These temperatures are nevertheless clearly below the melting point of the cast iron used in the core catcher. Based on the simulations it can therefore be said that the cooling channel structures will not melt, provided there is even a small coolant flow in them and no more than a few adjacent channels are completely dry.
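The abstract does not reproduce the three correlations themselves, so purely as an illustration, here is a minimal Python sketch of one classical pool-boiling relation of the kind referred to, the Rohsenow nucleate-boiling correlation, evaluated with assumed saturated-water properties at atmospheric pressure. The surface constants and the example wall superheat are assumptions, not values from the thesis.

```python
import math

# Assumed saturated water properties at 1 atm (approximate textbook values)
mu_l   = 2.79e-4      # liquid viscosity, kg/(m*s)
h_fg   = 2.257e6      # latent heat of vaporization, J/kg
rho_l  = 957.9        # liquid density, kg/m^3
rho_v  = 0.596        # vapor density, kg/m^3
sigma  = 0.0589       # surface tension, N/m
c_pl   = 4217.0       # liquid specific heat, J/(kg*K)
Pr_l   = 1.76         # liquid Prandtl number
C_sf, n = 0.013, 1.0  # assumed surface/fluid constants (polished metal-water)
g = 9.81

def rohsenow_heat_flux(delta_T_excess):
    """Nucleate pool-boiling heat flux (W/m^2) for a given wall superheat in K."""
    bubble_term = math.sqrt(g * (rho_l - rho_v) / sigma)
    driving = c_pl * delta_T_excess / (C_sf * h_fg * Pr_l**n)
    return mu_l * h_fg * bubble_term * driving**3

dT = 10.0                               # illustrative wall superheat, K
q = rohsenow_heat_flux(dT)
print(f"q'' = {q / 1e3:.0f} kW/m^2, h = {q / dT / 1e3:.1f} kW/(m^2*K)")
```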

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND: The Richalet hypoxia sensitivity test (RT), which quantifies the cardiorespiratory response to acute hypoxia during exercise at an intensity corresponding to a heart rate of ~130 bpm in normoxia, can predict susceptibility to altitude sickness. Its ability to predict exercise performance in hypoxia is unknown. OBJECTIVES: To investigate: (1) whether cerebral blood flow (CBF) and cerebral tissue oxygenation (O2Hb, oxygenated hemoglobin; HHb, deoxygenated hemoglobin) responses during the RT predict time-trial cycling (TT) performance in severe hypoxia; and (2) whether subjects with blunted cardiorespiratory responses during the RT show greater impairment of TT performance in severe hypoxia. STUDY DESIGN: Thirteen men [27 ± 7 years (mean ± SD), Wmax: 385 ± 30 W] were evaluated with the RT, and the results were related to two 15-km TTs, one in normoxia and one in severe hypoxia (FIO2 = 0.11). RESULTS: During the RT, mean middle cerebral artery blood velocity (MCAv; an index of CBF) was unaltered by hypoxia at rest (p > 0.05), while it increased during normoxic (+22 ± 12 %, p < 0.05) and hypoxic exercise (+33 ± 17 %, p < 0.05). Resting hypoxia lowered cerebral O2Hb by 2.2 ± 1.2 μmol (p < 0.05 vs. resting normoxia); hypoxic exercise lowered it further, to 7.6 ± 3.1 μmol below baseline (p < 0.05). Cerebral HHb increased by 3.5 ± 1.8 μmol in resting hypoxia (p < 0.05) and further to 8.5 ± 2.9 μmol in hypoxic exercise (p < 0.05). Changes in CBF and cerebral tissue oxygenation during the RT did not correlate with TT performance loss (R = 0.4, p > 0.05 and R = 0.5, p > 0.05, respectively), whereas tissue oxygenation and SaO2 changes during the TT did (R = -0.76, p < 0.05). Significant correlations were observed between SaO2, MCAv and HHb during the RT (R = -0.77, -0.76 and 0.84, respectively, p < 0.05 in all cases). CONCLUSIONS: CBF and cerebral tissue oxygenation changes during the RT do not predict performance impairment in hypoxia. Since the changes in SaO2 and brain HHb during the TT correlated with performance impairment, the hypothesis that brain oxygenation plays a limiting role in whole-body exercise under severe hypoxia remains to be tested further.
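As a minimal sketch of the correlation analysis reported above, with entirely hypothetical per-subject numbers (not the study's data), the relation between arterial desaturation during the hypoxic time trial and the performance loss can be quantified with a Pearson coefficient:

```python
from scipy.stats import pearsonr

# Hypothetical values for n = 13 subjects, for illustration only
delta_sao2 = [-18, -22, -15, -25, -20, -17, -23, -19, -21, -16, -24, -18, -22]   # % SaO2 drop during hypoxic TT
perf_loss  = [12.5, 15.0, 10.8, 17.2, 14.1, 11.9, 16.0, 13.3, 14.8, 11.2, 16.8, 12.9, 15.4]  # % slower vs normoxia

r, p = pearsonr(delta_sao2, perf_loss)
print(f"r = {r:.2f}, p = {p:.3f}")   # a larger desaturation pairs with a larger performance loss
```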

Relevance:

20.00%

Publisher:

Abstract:

In this study, we evaluated the repeatability of pupil responses to colored light stimuli in healthy subjects using a prototype chromatic pupillometer. One eye of 10 healthy subjects was tested twice in the same day using monochromatic light exposure at two selected wavelengths (660 and 470 nm, intensity 300 cd/m²) presented continuously for 20 s. Pupil responses were recorded in real time before, during, and after light exposure. Maximal contraction amplitude and sustained contraction amplitude were calculated. In addition, we quantified the summed pupil response during continuous light stimulation as the total area between a reference line representing baseline pupil size and the line representing actual pupil size over the 20 s (area under the curve). There was no significant difference between the repeated measure and the first test for any of the pupil response parameters. In conclusion, we have developed a novel prototype chromatic pupillometer that demonstrates good repeatability in evoking and recording the pupillary response to bright blue and red light stimuli.
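A minimal sketch of the summed-response metric described above, computed on a synthetic pupil trace by trapezoidal integration. The sampling rate and the shape of the trace are assumptions for illustration, not the prototype's actual output.

```python
import numpy as np

fs = 30.0                                  # assumed sampling rate, Hz
t = np.arange(0.0, 20.0, 1.0 / fs)         # 20-s light stimulus

# Synthetic pupil-diameter trace (mm): rapid constriction followed by slow partial escape
baseline = 6.0
pupil = baseline - 2.0 * (1 - np.exp(-t / 0.8)) * np.exp(-t / 60.0)

# Area between baseline and the actual trace over the stimulus (mm*s), trapezoidal rule
diff = baseline - pupil
auc = float(np.sum(0.5 * (diff[1:] + diff[:-1]) * np.diff(t)))

max_constriction = baseline - pupil.min()              # maximal contraction amplitude, mm
sustained = baseline - pupil[t >= 15.0].mean()         # sustained amplitude over the last 5 s, mm
print(f"AUC = {auc:.1f} mm*s, max = {max_constriction:.2f} mm, sustained = {sustained:.2f} mm")
```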

Relevance:

20.00%

Publisher:

Abstract:

Buchheit, M, Al Haddad, H, Millet, GP, Lepretre, PM, Newton, M, and Ahmaidi, S. Cardiorespiratory and cardiac autonomic responses to 30-15 Intermittent Fitness Test in team sport players. J Strength Cond Res 23(1): xxx-xxx, 2009-The 30-15 Intermittent Fitness Test (30-15IFT) is an attractive alternative to classic continuous incremental field tests for defining a reference velocity for interval training prescription in team sport athletes. The aim of the present study was to compare the cardiorespiratory and autonomic responses to the 30-15IFT with those observed during a standard continuous test (CT). In 20 team sport players (20.9 +/- 2.2 years), cardiopulmonary parameters were measured during exercise and for 10 minutes after both tests. Final running velocity, peak lactate ([La]peak), and rating of perceived exertion (RPE) were also measured. Parasympathetic function was assessed during the postexercise recovery phase via the heart rate (HR) recovery time constant (HRRtau) and HR variability (HRV) vagal-related indices. At exhaustion, no difference was observed in peak oxygen uptake (V̇O2peak), respiratory exchange ratio, HR, or RPE between the 30-15IFT and the CT. In contrast, the 30-15IFT led to significantly higher minute ventilation, [La]peak, and final velocity than the CT (p < 0.05 for all parameters). All maximal cardiorespiratory variables observed during both tests were moderately to well correlated (e.g., r = 0.76, p = 0.001 for V̇O2peak). Regarding ventilatory thresholds (VThs), all cardiorespiratory measurements were similar and well correlated between the 2 tests. Parasympathetic function was lower after the 30-15IFT than after the CT, as indicated by a significantly longer HRRtau (81.9 +/- 18.2 vs. 60.5 +/- 19.5 for the 30-15IFT and CT, respectively, p < 0.001) and lower HRV vagal-related indices (i.e., the root mean square of successive R-R interval differences [rMSSD]: 4.1 +/- 2.4 vs. 7.0 +/- 4.9 milliseconds, p < 0.05). In conclusion, the 30-15IFT is accurate for assessing VThs and V̇O2peak, but it alters postexercise parasympathetic function more than a continuous incremental protocol.
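A minimal sketch, on synthetic data, of the two parasympathetic indices used above: HRRtau obtained by fitting a mono-exponential to the post-exercise heart-rate decay, and rMSSD computed from successive R-R intervals. The parameter values are illustrative assumptions, not results from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# --- HRRtau: mono-exponential fit of post-exercise heart-rate recovery ---
def hr_recovery(t, hr_end, amplitude, tau):
    return hr_end + amplitude * np.exp(-t / tau)

t = np.arange(0, 600, 5.0)                                               # 10-min recovery, s
hr = hr_recovery(t, 95.0, 85.0, 75.0) + np.random.normal(0, 2, t.size)   # synthetic HR trace, bpm
(hr_end, amp, tau), _ = curve_fit(hr_recovery, t, hr, p0=(100, 80, 60))
print(f"HRRtau = {tau:.1f} s")

# --- rMSSD: root mean square of successive R-R interval differences ---
rr = np.random.normal(650, 6, 300)                                       # synthetic R-R intervals, ms
rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
print(f"rMSSD = {rmssd:.1f} ms")
```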

Relevance:

20.00%

Publisher:

Abstract:

An efficient high-resolution three-dimensional (3-D) seismic reflection system for small-scale targets in lacustrine settings was developed. In Lake Geneva, near the city of Lausanne, Switzerland, past high-resolution two-dimensional (2-D) investigations revealed a complex fault zone (the Paudèze thrust zone), which was subsequently chosen for testing our system. Observed structures include a thin (<40 m) layer of subhorizontal Quaternary sediments that unconformably overlie southeast-dipping Tertiary Molasse beds and the Paudèze thrust zone, which separates Plateau and Subalpine Molasse units. Two complete 3-D surveys have been conducted over this same test site, covering an area of about 1 km².
In 1999, a pilot survey (Survey I), comprising 80 profiles, was carried out in 8 days with a single-streamer configuration. In 2001, a second survey (Survey II) used a newly developed three-streamer system with optimized design parameters, which provided an exceptionally high-quality data set of 180 common midpoint (CMP) lines in 9 days. The main improvements include a navigation and shot-triggering system with in-house navigation software that automatically fires the gun, combined with real-time control of navigation quality using differential GPS (dGPS) onboard and a reference base near the lake shore. Shots were triggered at 5-m intervals with a maximum non-cumulative error of 25 cm. Whereas the single 48-channel streamer system of Survey I required extrapolation of receiver positions from the boat position, for Survey II they could be accurately calculated (error <20 cm) with the aid of three additional dGPS antennas mounted on rafts attached to the end of each of the 24-channel streamers. Towed at a distance of 75 m behind the vessel, they allow the determination of feathering due to cross-line currents or small course variations. Furthermore, two retractable booms hold the three streamers at a distance of 7.5 m from each other, which is the same distance as the sail line interval of Survey I. With a receiver spacing of 2.5 m, the bin dimension of the 3-D data of Survey II is 1.25 m in the in-line direction and 3.75 m in the cross-line direction. The greater cross-line versus in-line spacing is justified by the known structural trend of the fault zone perpendicular to the in-line direction. The data from Survey I showed some reflection discontinuity as a result of insufficiently accurate navigation and positioning and the resulting binning errors. Aliasing observed in the 3-D migration was due to insufficient lateral sampling combined with the relatively high-frequency (<2000 Hz) content of the water gun source (operated at 140 bars and 0.3 m depth). These results motivated the use of a double-chamber bubble-canceling air gun for Survey II. A 15/15 Mini G.I air gun, operated at 80 bars and 1 m depth, proved to be better adapted for imaging the complexly faulted target area, which has reflectors dipping up to 30°. Although its frequencies do not exceed 650 Hz, this air gun combines penetration of non-aliased signal to depths of 300 m below the water bottom (versus 145 m for the water gun) with a maximum vertical resolution of 1.1 m. While Survey I was shot in patches of alternating directions, the optimized surveying time of the new three-streamer system allowed acquisition in parallel geometry, which is preferable when using an asymmetric configuration (single source and receiver array); otherwise, the resulting stacks differ for opposite shooting directions. However, the shorter streamer configuration of Survey II reduced the nominal fold from 12 to 6. A conventional 3-D processing flow was adapted to the high sampling rates and was complemented by two computer programs that convert the unconventional navigation data to industry standards. Processing included trace editing, geometry assignment, bin harmonization (to compensate for uneven fold due to boat/streamer drift), spherical divergence correction, bandpass filtering, velocity analysis, 3-D DMO correction, stacking, and 3-D time migration. A detailed semblance velocity analysis was performed on the 12-fold data set for every second in-line and every 50th CMP, i.e. on a total of 600 spectra.
According to this velocity analysis, interval velocities range from 1450-1650 m/s for the unconsolidated sediments and from 1650-3000 m/s for the consolidated sediments. Delineation of several horizons and fault surfaces reveals the potential for small-scale geologic and tectonic interpretation in three dimensions. Five major seismic facies and their detailed 3-D geometries can be distinguished in vertical and horizontal sections: lacustrine sediments (Holocene), glaciolacustrine sediments (Pleistocene), Plateau Molasse, Subalpine Molasse and its thrust fault zone. Dips of beds within the Plateau and Subalpine Molasse are ~8° and ~20°, respectively. Within the fault zone, many highly deformed structures with dips around 30° are visible. Preliminary tests with 3-D preserved-amplitude prestack depth migration demonstrate that the excellent data quality of Survey II allows the application of such sophisticated techniques even to high-resolution seismic surveys. In general, the adaptation of the 3-D marine seismic reflection method, which to date has almost exclusively been used by the oil exploration industry, to a smaller geographical as well as financial scale has helped pave the way for applying this technique to environmental and engineering purposes.

Seismic reflection is a subsurface investigation method with very high resolving power. It consists of sending vibrations into the ground and recording the waves that are reflected off geological discontinuities at different depths and then return to the surface, where they are recorded. The signals collected in this way not only provide information on the nature and geometry of the layers present, but also allow a geological interpretation of the subsurface. For example, in the case of sedimentary rocks, seismic reflection profiles make it possible to determine their mode of deposition, any deformation or faulting, and hence their tectonic history. Seismic reflection is the principal method of petroleum exploration. For a long time, seismic reflection data were acquired along individual profiles, which provide a two-dimensional image of the subsurface. Such images are only partially accurate, since they do not account for the three-dimensional nature of geological structures. In recent decades, three-dimensional (3-D) seismics has brought fresh momentum to the study of the subsurface. Although the technique is now fully mastered for imaging large geological structures both onshore and offshore, its adaptation to lake and river settings has so far been the subject of only a few studies. This thesis work consisted of developing a seismic acquisition system similar to that used for offshore petroleum exploration, but adapted to lakes. It is therefore smaller, lighter to deploy and, above all, delivers final images of much higher resolution: whereas the petroleum industry is often limited to a resolution on the order of ten metres, the instrument developed in this work resolves details on the order of one metre. The new system relies on recording seismic reflections simultaneously on three seismic cables (streamers) of 24 channels each. To obtain 3-D data, it is essential to position the instruments on the water (the seismic source and receivers) with great precision. Software was specially developed to control navigation and trigger the seismic source using differential GPS (dGPS) receivers on the boat and at the end of each streamer, which allows the instruments to be positioned with an accuracy of about 20 cm. To test the system, we chose an area of Lake Geneva, near the city of Lausanne, crossed by the La Paudèze fault, which separates the Plateau Molasse and Subalpine Molasse units. Two 3-D seismic surveys were carried out there over an area of about 1 km². The seismic records were then processed to turn them into interpretable images, using a 3-D processing sequence specially adapted to our data, in particular with regard to positioning. After processing, the data reveal the main seismic facies, corresponding notably to the lacustrine sediments (Holocene), the glaciolacustrine sediments (Pleistocene), the Plateau Molasse, the Subalpine Molasse of the fault zone and the Subalpine Molasse south of that zone. The detailed 3-D geometry of the faults is visible on vertical and horizontal seismic sections. The excellent data quality and the interpretation of several horizons and fault surfaces demonstrate the potential of this technique for small-scale three-dimensional investigations, opening the way to applications in environmental and civil engineering.
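A minimal sketch of the acquisition-geometry arithmetic quoted in the abstract: with a 2.5-m receiver interval and a 7.5-m line (streamer) separation, CMP bins are half those spacings, and the standard single-source CMP fold relation reproduces the nominal folds of 12 (Survey I, 48 channels) and 6 (Survey II, 24-channel streamers). The fold formula is the textbook relation assumed to underlie the quoted values, not a formula taken from the thesis.

```python
def cmp_bin_and_fold(receiver_spacing_m, line_spacing_m, shot_interval_m, n_channels):
    """CMP bin dimensions (m) and nominal fold for a single-source marine geometry."""
    bin_inline = receiver_spacing_m / 2.0       # half the receiver interval
    bin_crossline = line_spacing_m / 2.0        # half the sail-line / streamer separation
    fold = n_channels * receiver_spacing_m / (2.0 * shot_interval_m)
    return bin_inline, bin_crossline, fold

# Survey I: single 48-channel streamer;  Survey II: 24-channel streamers
print(cmp_bin_and_fold(2.5, 7.5, 5.0, 48))   # -> (1.25, 3.75, 12.0)
print(cmp_bin_and_fold(2.5, 7.5, 5.0, 24))   # -> (1.25, 3.75, 6.0)
```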

Relevance:

20.00%

Publisher:

Abstract:

The accumulation of aqueous pollutants is becoming a global problem. The search for suitable methods and/or combinations of water treatment processes is a task that can slow down and eventually stop water pollution. In this work, the method of wet oxidation was considered as an appropriate technique for the elimination of the impurities present in paper mill process waters. It has been shown that, when combined with traditional wastewater treatment processes, wet oxidation offers many advantages. The combination of coagulation and wet oxidation offers a new opportunity for improving the quality of wastewater intended for discharge or recycling. First of all, the utilization of coagulated sludge via wet oxidation provides a conditioning process for the sludge, i.e. dewatering, which is rather difficult to carry out with untreated waste. Secondly, Fe2(SO4)3, employed earlier as a coagulant, transforms the conventional wet oxidation process into a catalytic one. The use of coagulation as a post-treatment for wet oxidation makes it possible to reduce the brown hue that usually accompanies partial oxidation. As a result, the supernatant is less colored and also contains a low enough amount of Fe ions to be considered for recycling inside mills. The thickened fraction, which contains the metal ions, is then recycled back to the wet oxidation system. It was also observed that wet oxidation is favorable for the degradation of pitch substances (LWEs) and lignin present in paper mill process waters. Rather low operating temperatures are sufficient for wet oxidation to destroy LWEs. Oxidation in alkaline media not only eliminates pitch and lignin faster but also significantly improves the biodegradability of wastewater containing lignin and pitch substances. In the course of the kinetic studies, a model that can predict the enhancement of wastewater biodegradability was elaborated. The model includes lumped concentrations, such as the chemical oxygen demand and the biochemical oxygen demand, and reflects a generalized reaction network of oxidative transformations. Later developments incorporated a new lump, the immediately available biochemical oxygen demand, which increased the fidelity of the model's predictions. Since changes in biodegradability occur simultaneously with the destruction of LWEs, an attempt was made to combine these two facts for modeling purposes.
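The abstract does not give the model equations, so purely as an illustration, here is a minimal lumped-kinetics sketch in the spirit described: the chemical oxygen demand (COD) is split into a refractory lump and a biodegradable lump (tracked as BOD), with assumed first-order wet-oxidation and mineralization steps, so that the BOD/COD ratio, a common biodegradability indicator, rises during oxidation. The rate constants and initial loads are assumptions, not fitted values from the thesis.

```python
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 0.05, 0.01        # assumed first-order rate constants, 1/min

def lumped_model(t, y):
    A, B = y               # A: refractory COD lump, B: biodegradable lump (~BOD)
    dA = -k1 * A                   # oxidation of refractory matter into biodegradable intermediates
    dB = k1 * A - k2 * B           # formation minus mineralization to CO2 and H2O
    return [dA, dB]

# Initial loads in mg O2/L, assumed for illustration
sol = solve_ivp(lumped_model, (0, 120), [800.0, 200.0], t_eval=np.linspace(0, 120, 7))
cod = sol.y[0] + sol.y[1]
bod = sol.y[1]
for t, c, b in zip(sol.t, cod, bod):
    print(f"t={t:5.0f} min  COD={c:6.1f}  BOD={b:6.1f}  BOD/COD={b / c:.2f}")
```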

Relevance:

20.00%

Publisher:

Abstract:

Environmentally harmful consequences of fossil fuel utilisation and the landfilling of wastes have increased interest among energy producers in the use of alternative fuels such as wood fuels and refuse-derived fuels (RDFs). Fluidised bed technology, which allows the flexible use of a variety of different fuels, is commonly used at small- and medium-sized power plants of municipalities and industry in Finland. Since there is only one mass-burn plant currently in operation in the country and no intention to build new ones, the co-firing of pre-processed wastes in fluidised bed boilers has become the most widely applied waste-to-energy concept in Finland. The recently adopted EU Directive on the Incineration of Waste aims to mitigate the environmentally harmful pollutants of waste incineration and of the co-incineration of wastes with conventional fuels. Apart from gaseous flue gas pollutants and dust, the emissions of toxic trace metals are limited. The implementation of the Directive's restrictions in Finnish legislation is expected to limit the co-firing of waste fuels, owing to the insufficient reduction of the regulated air pollutants in existing flue gas cleaning devices. Trace metal emission formation and reduction in the ESP, the condensing wet scrubber, the fabric filter, and the humidification reactor were studied experimentally in full- and pilot-scale combustors based on bubbling fluidised bed (BFB) technology, and theoretically by means of reactor model calculations. The core of the model is a thermodynamic equilibrium analysis. The experiments were carried out with wood chips, sawdust, and peat, and their RDF blends. In all, ten different fuels or fuel blends were tested. The relatively high concentrations of trace metals in RDFs compared with wood fuels increased the trace metal concentrations in the flue gas after the boiler ten- to hundred-fold when RDF was co-fired with sawdust in a full-scale BFB boiler. In the case of peat, a smaller increase in trace metal concentrations was observed, owing to the higher initial trace metal concentrations of peat compared with sawdust. Despite the high removal rate of most of the trace metals in the ESP, the Directive emission limits for trace metals were exceeded in each of the RDF co-firing tests. The dominant trace metals in the flue gas after the ESP were Cu, Pb and Mn. In the condensing wet scrubber, the flue gas trace metal emissions were reduced below the Directive emission limits when RDF pellets were co-fired with sawdust and peat. The high chlorine content of the RDFs enhanced mercuric chloride formation and hence mercury removal in the ESP and scrubber. Mercury emissions were below the Directive emission limit for total Hg, 0.05 mg/Nm3, in all full-scale co-firing tests already in the flue gas after the ESP. Pilot-scale experiments with a BFB combustor equipped with a fabric filter revealed that the fabric filter alone is able to reduce the trace metal concentrations, including mercury, in the flue gas during RDF co-firing to approximately the same level as during wood chip firing. Trace metal emissions below the Directive limits were easily reached even with a 40% thermal share of RDF co-fired with sawdust. Enrichment of trace metals in the submicron fly ash particle fraction as a result of RDF co-firing was not observed in the test runs where sawdust was used as the main fuel.
The combustion of RDF pellets with peat caused an enrichment of As, Cd, Co, Pb, Sb, and V in the submicron particle mode. The accumulation and release of trace metals in the bed material were examined by means of bed material analyses, mass balance calculations and a reactor model. Lead, zinc and copper were found to have a tendency to accumulate in the bed material, but also to be released from the bed material into the combustion gases if the combustion conditions were changed. The trace metal concentration in the combustion gases of the bubbling fluidised bed boiler was found to be the sum of trace metal fluxes from three main sources: (1) the trace metal flux from the burning fuel particle, (2) the trace metal flux from the ash in the bed, and (3) the trace metal flux from the active alkali metal layer on the sand (and ash) particles in the bed. The amount of chlorine in the system, the combustion temperature, the fuel ash composition and the saturation state of the bed material with regard to trace metals were found to be the key factors affecting the release process. During the co-firing of waste fuels containing variable amounts of, for example, ash and chlorine, it is extremely important to consider the possible ongoing accumulation and/or release of trace metals in the bed when determining the flue gas trace metal emissions. If the state of the combustion process with regard to trace metal accumulation and/or release in the bed material is not known, it may happen that emissions from the bed material, rather than from the combustion of the fuel in question, are measured and reported.
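A minimal sketch, not the study's reactor model, of the three-flux bookkeeping described above: the trace-metal flux to the flue gas sums the release from the burning fuel and the release from the bed inventory (ash plus the alkali layer on bed particles), while the bed inventory integrates whatever is captured. The rate constant, fluxes and fractions below are assumed, illustrative values.

```python
def bed_metal_step(m_bed, flux_fuel, k_release, capture_fraction, dt):
    """One explicit time step of a simple trace-metal balance over a bubbling bed.

    flux_fuel:        metal released by burning fuel particles, mg/s (source 1)
    k_release:        first-order release rate from the bed inventory, 1/s (sources 2 and 3)
    capture_fraction: share of the fuel-borne metal captured by the bed material
    Returns the new bed inventory (mg) and the metal flux carried out with the flue gas (mg/s).
    """
    release_from_bed = k_release * m_bed           # release from bed ash + alkali layer
    captured = capture_fraction * flux_fuel        # part of the fuel-borne flux retained in the bed
    to_flue_gas = (flux_fuel - captured) + release_from_bed
    m_bed_new = m_bed + (captured - release_from_bed) * dt
    return m_bed_new, to_flue_gas

m_bed, emitted = bed_metal_step(m_bed=500.0, flux_fuel=0.8, k_release=2e-4,
                                capture_fraction=0.4, dt=60.0)
print(f"bed inventory = {m_bed:.1f} mg, flux to flue gas = {emitted:.3f} mg/s")
```

Whether the bed acts as a sink or a source in such a balance depends on the sign of (captured - release_from_bed), which is the accumulation-versus-release behaviour the abstract describes.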

Relevance:

20.00%

Publisher:

Abstract:

Position-sensitive particle detectors are needed in high-energy physics research. This thesis describes the development of fabrication processes and characterization techniques for silicon microstrip detectors used in the search for elementary particles at the European center for nuclear research, CERN. The detectors give an electrical signal along a particle's trajectory after a collision in the particle accelerator. The trajectories give information about the nature of the particle in the effort to reveal the structure of matter and the universe. Detectors made of semiconductors have better position resolution than conventional wire chamber detectors. Silicon is the overwhelmingly preferred detector material because of its low cost and its standard use in the integrated circuit industry. After a short spreadsheet analysis of the basic building block of radiation detectors, the pn junction, the operation of a silicon radiation detector is discussed in general. The microstrip detector is then introduced and the detailed structure of a double-sided ac-coupled strip detector presented. The fabrication aspects of strip detectors are discussed, starting from process development and general principles and ending with a description of the double-sided ac-coupled strip detector process. Recombination and generation lifetime measurements in radiation detectors are discussed briefly. The results of electrical tests, i.e. measurements of leakage currents and bias resistors, are presented. The beam test setups and results, the signal-to-noise ratio and the position accuracy, are then described. Earlier research showed that heavy irradiation changes the properties of radiation detectors dramatically. A scanning electron microscope method was developed to measure the electric potential and field inside irradiated detectors in order to see how a high radiation fluence changes them. The method and the most important results are discussed briefly.
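A minimal sketch of the kind of back-of-the-envelope pn-junction analysis mentioned above: the full-depletion voltage and the depletion width of a one-sided abrupt junction in silicon, evaluated for an assumed 300-µm-thick sensor and an assumed effective doping. The numbers are illustrative, not measurements from the thesis.

```python
import math

Q_E    = 1.602e-19          # elementary charge, C
EPS_SI = 11.9 * 8.854e-12   # permittivity of silicon, F/m

def full_depletion_voltage(n_eff_cm3, thickness_um):
    """Full depletion voltage (V) of a one-sided abrupt junction (detector bulk)."""
    n_eff = n_eff_cm3 * 1e6          # cm^-3 -> m^-3
    d = thickness_um * 1e-6          # um   -> m
    return Q_E * n_eff * d**2 / (2.0 * EPS_SI)

def depletion_width_um(n_eff_cm3, bias_v):
    """Depletion width (um) at a given reverse bias below full depletion."""
    n_eff = n_eff_cm3 * 1e6
    return math.sqrt(2.0 * EPS_SI * bias_v / (Q_E * n_eff)) * 1e6

# Assumed values: 300-um-thick sensor, effective doping 1e12 cm^-3
print(f"V_fd  = {full_depletion_voltage(1e12, 300):.0f} V")
print(f"w(40 V) = {depletion_width_um(1e12, 40):.0f} um")
```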

Relevance:

20.00%

Publisher:

Abstract:

Background: Current advances in genomics, proteomics and other areas of molecular biology make the identification and reconstruction of novel pathways an emerging area of great interest. One such class of pathways is involved in the biogenesis of iron-sulfur clusters (ISC). Results: Our goal is the development of a new approach based on the use and combination of mathematical, theoretical and computational methods to identify the topology of a target network. In this approach, mathematical models play a central role in the evaluation of the alternative network structures that arise from literature data-mining, phylogenetic profiling, structural methods, and human curation. As a test case, we reconstruct the topology of the reaction and regulatory network for the mitochondrial ISC biogenesis pathway in S. cerevisiae. Predictions regarding how proteins act in ISC biogenesis are validated by comparison with published experimental results. For example, the predicted roles of Arh1 and Yah1, as well as some of the interactions we predict for Grx5, match experimental evidence. A putative role for frataxin in directly regulating mitochondrial iron import is discarded by our analysis, which also agrees with published experimental results. Additionally, we propose a number of experiments for testing other predictions and further improving the identification of the network structure. Conclusion: We propose and apply an iterative in silico procedure for the predictive reconstruction of the network topology of metabolic pathways. The procedure combines structural bioinformatics tools and mathematical modeling techniques that allow the reconstruction of biochemical networks. Using ISC biogenesis in S. cerevisiae as a test case, we indicate how this procedure can be used to analyze and validate the network model against experimental results. Critical evaluation of the results obtained through this procedure makes it possible to devise new wet-lab experiments to confirm its predictions or to provide alternative explanations, further improving the models.
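A minimal sketch, with hypothetical data, of the model-selection step at the heart of such an iterative procedure: alternative topologies are written as alternative lumped rate models, fitted to the same time course, and ranked by a criterion that balances fit against complexity (here AIC). Nothing below comes from the paper itself; the models and data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical time course of a pathway readout (arbitrary units)
t = np.array([0, 5, 10, 20, 40, 60, 90, 120.0])
y = np.array([0.05, 0.30, 0.52, 0.75, 0.90, 0.95, 0.97, 0.98])

# Two candidate topologies expressed as lumped response models
def direct_activation(t, k):              # topology A: single first-order step
    return 1 - np.exp(-k * t)

def intermediate_step(t, k1, k2):         # topology B: two sequential first-order steps
    return 1 - (k2 * np.exp(-k1 * t) - k1 * np.exp(-k2 * t)) / (k2 - k1)

def aic(residuals, n_params):
    """Akaike information criterion from least-squares residuals."""
    n = residuals.size
    return n * np.log(np.sum(residuals**2) / n) + 2 * n_params

for name, model, p0 in [("A: direct", direct_activation, (0.05,)),
                        ("B: with intermediate", intermediate_step, (0.1, 0.05))]:
    popt, _ = curve_fit(model, t, y, p0=p0)
    print(f"{name:>22}: AIC = {aic(y - model(t, *popt), len(popt)):.1f}")
```

The topology with the lower AIC is retained as the working model, and its remaining uncertain links suggest which wet-lab experiments would be most informative, mirroring the iterative logic described in the abstract.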