Abstract:
NORDI publication 45 examines the development of the rouble. In 1998 the rouble suffered a severe devaluation, in the aftermath of which investment in Russia began to grow. The new, devalued exchange rate of the rouble gave price competitiveness to local industry. In addition, rising Russian oil export prices and natural gas deliveries have contributed to the recent economic growth. During this upswing, inflationary pressure has remained high. Price increases have been higher than in the EU, Russia's main trading partner. Nevertheless, the rouble/euro exchange rate has remained fairly stable in nominal terms during this decade. This means that in real terms the rouble is appreciating against the euro, which weakens Russia's international competitiveness.
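As a back-of-the-envelope illustration of the last point (a standard textbook relation, not taken from the publication itself), write the real rouble/euro exchange rate q in terms of the nominal rate e (roubles per euro) and the two price levels:

q = e\,\frac{P_{EU}}{P_{RU}}, \qquad
\frac{\dot q}{q} = \frac{\dot e}{e} + \pi_{EU} - \pi_{RU} \approx \pi_{EU} - \pi_{RU} < 0
\quad (\dot e \approx 0,\ \pi_{RU} > \pi_{EU}),

where \pi denotes the inflation rates. With a roughly constant nominal rate and Russian inflation above euro-area inflation, q falls, i.e. the rouble appreciates in real terms and Russian goods lose price competitiveness.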
Abstract:
Thanks to the continuous progress made in recent years, medical imaging has become an important tool in the diagnosis of various pathologies. In particular, magnetic resonance imaging (MRI) makes it possible to obtain images with a remarkably high resolution without the use of ionizing radiation and is consequently widely applied for a broad range of conditions in all parts of the body. Contrast agents are used in MRI to improve tissue discrimination. Different categories of contrast agents are clinically available, the most widely used being gadolinium chelates. One can distinguish between extracellular gadolinium chelates, such as Gd-DTPA, and hepatobiliary gadolinium chelates, such as Gd-BOPTA. The latter are able to enter hepatocytes, from where they are partially excreted into the bile to an extent that depends on the contrast agent and the animal species. Owing to this property, hepatobiliary contrast agents are particularly interesting for MRI of the liver: a change in signal intensity can result from a change in transport functions, signaling the presence of impaired hepatocytes, e.g. in focal (such as cancer) or diffuse (such as cirrhosis) liver diseases. Although the excretion mechanism into the bile is well known, the uptake mechanisms of hepatobiliary contrast agents into hepatocytes are still not completely understood, and several hypotheses have been proposed. As a good knowledge of these transport mechanisms is required for an efficient MRI diagnosis of the functional state of the liver, more fundamental research is needed, and an efficient MRI-compatible in vitro model would be an asset. So far, most data concerning these transport mechanisms have been obtained by MRI with in vivo models, or with cellular or sub-cellular models using detection methods other than MRI. Indeed, no in vitro model is currently available for the study and quantification of contrast agents by MRI, notably because high cellular densities are needed to allow detection and because no metallic devices can be used inside the magnet room, which is incompatible with most tissue or cell cultures that require controlled temperature and oxygenation. The aim of this thesis is thus to develop an MRI-compatible in vitro cellular model to study the transport of hepatobiliary contrast agents, in particular Gd-BOPTA, into hepatocytes directly by MRI. A better understanding of this transport, and especially of its modification in case of hepatic disorders, could in a second step permit extrapolation of this knowledge to humans and the use of the kinetics of hepatobiliary contrast agents as a tool for the diagnosis of hepatic diseases.
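Not stated in the abstract, but useful context for how a gadolinium chelate produces the signal-intensity change exploited here: the standard relaxivity relation links the observed longitudinal relaxation rate to the local contrast-agent concentration,

\frac{1}{T_{1,\mathrm{obs}}} = \frac{1}{T_{1,0}} + r_1\,[\mathrm{CA}],

where T_{1,0} is the tissue's intrinsic longitudinal relaxation time, [CA] the local concentration of the gadolinium chelate and r_1 its longitudinal relaxivity. A shortened T_1 translates into higher signal on T_1-weighted images, which is why intracellular accumulation of Gd-BOPTA in hepatocytes can be read out as a change in signal intensity.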
Abstract:
The LHCb experiment is being built at the future LHC accelerator at CERN. It is a forward single-arm spectrometer dedicated to precision measurements of CP violation and rare decays in the b-quark sector. It is presently finishing its R&D and final design stage; construction has already started for the magnet and the calorimeters. In the Standard Model, CP violation arises via the complex phase of the 3 x 3 CKM (Cabibbo-Kobayashi-Maskawa) quark mixing matrix. The LHCb experiment will test the unitarity of this matrix by measuring, in several theoretically unrelated ways, all angles and sides of the so-called "unitarity triangle". This will make it possible to over-constrain the model and, hopefully, to exhibit inconsistencies that would signal physics beyond the Standard Model. Vertex reconstruction is a fundamental requirement for the LHCb experiment. Displaced secondary vertices are a distinctive feature of b-hadron decays, and this signature is used in the LHCb topological trigger. The Vertex Locator (VeLo) has to provide precise measurements of track coordinates close to the interaction region. These are used to reconstruct production and decay vertices of beauty hadrons and to provide accurate measurements of their decay lifetimes. The Vertex Locator electronics is an essential part of the data acquisition system and must conform to the overall LHCb electronics specification. The design of the electronics must maximise the signal-to-noise ratio in order to achieve the best tracking reconstruction performance in the detector. The electronics was designed in parallel with the silicon detector development and went through several prototyping phases, which are described in this thesis.
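For reference, the unitarity triangle mentioned above follows from one of the off-diagonal unitarity conditions of the CKM matrix V (standard textbook material, not specific to this thesis):

V_{ud}V_{ub}^{*} + V_{cd}V_{cb}^{*} + V_{td}V_{tb}^{*} = 0 .

Dividing by V_{cd}V_{cb}^{*} normalises one side to unit length, and the angles are conventionally defined as

\alpha = \arg\!\left(-\frac{V_{td}V_{tb}^{*}}{V_{ud}V_{ub}^{*}}\right), \quad
\beta  = \arg\!\left(-\frac{V_{cd}V_{cb}^{*}}{V_{td}V_{tb}^{*}}\right), \quad
\gamma = \arg\!\left(-\frac{V_{ud}V_{ub}^{*}}{V_{cd}V_{cb}^{*}}\right).

Over-constraining the model means measuring these angles and the side lengths in enough independent B-decay channels that any inconsistency (a triangle that does not close) would point to physics beyond the Standard Model.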
Abstract:
An efficient high-resolution three-dimensional (3-D) seismic reflection system for small-scale targets in lacustrine settings was developed. In Lake Geneva, near the city of Lausanne, Switzerland, past high-resolution two-dimensional (2-D) investigations revealed a complex fault zone (the Paudèze thrust zone), which was subsequently chosen for testing our system. Observed structures include a thin (<40 m) layer of subhorizontal Quaternary sediments that unconformably overlie southeast-dipping Tertiary Molasse beds and the Paudèze thrust zone, which separates Plateau and Subalpine Molasse units. Two complete 3-D surveys have been conducted over this same test site, covering an area of about 1 km2.
In 1999, a pilot survey (Survey I), comprising 80 profiles, was carried out in 8 days with a single-streamer configuration. In 2001, a second survey (Survey II) used a newly developed three-streamer system with optimized design parameters, which provided an exceptionally high-quality data set of 180 common midpoint (CMP) lines in 9 days. The main improvements include a navigation and shot-triggering system with in-house navigation software that automatically fires the gun, combined with real-time control of navigation quality using differential GPS (dGPS) onboard and a reference base near the lake shore. Shots were triggered at 5-m intervals with a maximum non-cumulative error of 25 cm. Whereas the single 48-channel streamer system of Survey I required extrapolation of receiver positions from the boat position, for Survey II they could be accurately calculated (error <20 cm) with the aid of three additional dGPS antennas mounted on rafts attached to the end of each of the 24-channel streamers. Towed at a distance of 75 m behind the vessel, they allow the determination of feathering due to cross-line currents or small course variations. Furthermore, two retractable booms hold the three streamers at a distance of 7.5 m from each other, which is the same distance as the sail-line interval for Survey I. With a receiver spacing of 2.5 m, the bin dimension of the 3-D data of Survey II is 1.25 m in the in-line direction and 3.75 m in the cross-line direction. The greater cross-line versus in-line spacing is justified by the known structural trend of the fault zone perpendicular to the in-line direction. The data from Survey I showed some reflection discontinuity as a result of insufficiently accurate navigation and positioning and subsequent binning errors. Observed aliasing in the 3-D migration was due to insufficient lateral sampling combined with the relatively high-frequency (<2000 Hz) content of the water gun source (operated at 140 bars and 0.3 m depth). These results motivated the use of a double-chamber bubble-canceling air gun for Survey II. A 15/15 Mini G.I air gun, operated at 80 bars and 1 m depth, proved to be better adapted for imaging the complexly faulted target area, which has reflectors dipping up to 30°. Although its frequencies do not exceed 650 Hz, this air gun combines penetration of non-aliased signal to depths of 300 m below the water bottom (versus 145 m for the water gun) with a maximum vertical resolution of 1.1 m. While Survey I was shot in patches of alternating directions, the optimized surveying time of the new three-streamer system allowed acquisition in parallel geometry, which is preferable when using an asymmetric configuration (single source and receiver array); otherwise, the resulting stacks are different for the opposite directions. However, the shorter streamer configuration of Survey II reduced the nominal fold from 12 to 6. A conventional 3-D processing flow was adapted to the high sampling rates and was complemented by two computer programs that format the unconventional navigation data to industry standards. Processing included trace editing, geometry assignment, bin harmonization (to compensate for uneven fold due to boat/streamer drift), spherical divergence correction, bandpass filtering, velocity analysis, 3-D DMO correction, stack and 3-D time migration. A detailed semblance velocity analysis was performed on the 12-fold data set for every second in-line and every 50th CMP, i.e. on a total of 600 spectra.
According to this velocity analysis, interval velocities range from 1450-1650 m/s for the unconsolidated sediments and from 1650-3000 m/s for the consolidated sediments. Delineation of several horizons and fault surfaces reveals the potential for small-scale geologic and tectonic interpretation in three dimensions. Five major seismic facies and their detailed 3-D geometries can be distinguished in vertical and horizontal sections: lacustrine sediments (Holocene), glaciolacustrine sediments (Pleistocene), Plateau Molasse, Subalpine Molasse and its thrust fault zone. Dips of beds within the Plateau and Subalpine Molasse are ~8° and ~20°, respectively. Within the fault zone, many highly deformed structures with dips around 30° are visible. Preliminary tests with 3-D preserved-amplitude prestack depth migration demonstrate that the excellent data quality of Survey II allows application of such sophisticated techniques even to high-resolution seismic surveys. In general, the adaptation of the 3-D marine seismic reflection method, which to date has almost exclusively been used by the oil exploration industry, to a smaller geographical as well as financial scale has helped pave the way for applying this technique to environmental and engineering purposes.
Seismic reflection is a subsurface investigation method with very high resolving power. It consists of sending vibrations into the ground and recording the waves that are reflected at geological discontinuities at different depths and then travel back up to the surface, where they are recorded. The signals collected in this way give information not only about the nature and geometry of the layers present, but also allow a geological interpretation of the subsurface. For example, in the case of sedimentary rocks, seismic reflection profiles make it possible to determine their mode of deposition, their possible deformations or fractures, and thus their tectonic history. Seismic reflection is the principal method of petroleum exploration. For a long time, seismic reflection was carried out along profiles that provide a two-dimensional image of the subsurface. The images obtained in this way are only partially accurate, since they do not take into account the three-dimensional nature of geological structures. Over the last few decades, three-dimensional (3-D) seismics has brought fresh impetus to the study of the subsurface. Although it is now fully mastered for imaging large geological structures both onshore and offshore, its adaptation to the lacustrine or fluvial scale has so far been the subject of only a few studies. This thesis work consisted in developing a seismic acquisition system similar to that used for offshore petroleum prospecting, but adapted to lakes. It is therefore smaller, lighter to deploy and, above all, yields final images of much higher resolution. Whereas the petroleum industry is often limited to a resolution of the order of ten metres, the instrument developed in this work makes it possible to see details of the order of one metre. The new system rests on the ability to record seismic reflections simultaneously on three seismic cables (or streamers) of 24 channels each. To obtain 3-D data, it is essential to position the instruments on the water (the source and the receivers of the seismic waves) with great precision. Software was specially developed to control navigation and trigger the shots of the seismic source, using differential GPS (dGPS) receivers on the boat and at the end of each streamer. This makes it possible to position the instruments with a precision of about 20 cm. To test our system, we chose an area of Lake Geneva, near the city of Lausanne, crossed by the Paudèze fault, which separates the Plateau Molasse and Subalpine Molasse units. Two 3-D seismic surveys were carried out there over an area of about 1 km2. The seismic recordings were then processed to turn them into interpretable images. We applied a 3-D processing sequence specially adapted to our data, in particular with regard to positioning. After processing, the data reveal several main seismic facies corresponding notably to the lacustrine sediments (Holocene), the glaciolacustrine sediments (Pleistocene), the Plateau Molasse, the Subalpine Molasse of the fault zone and the Subalpine Molasse south of this zone. The detailed 3-D geometry of the faults is visible on the vertical and horizontal seismic sections. The excellent data quality and the interpretation of several horizons and fault surfaces show the potential of this technique for small-scale three-dimensional investigations, which opens the way to its application in environmental and civil engineering studies.
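A minimal numerical sketch (my own, using only figures quoted in the abstract above) of how the Survey II acquisition geometry translates into bin size, and how the source bandwidth bounds vertical resolution. The half-trace-spacing bin rule, the half-wavelength resolution criterion and the ~1500 m/s near-surface velocity are assumptions made for illustration, not values taken from the thesis.

# Rough acquisition-geometry arithmetic for the Survey II setup described above.
receiver_spacing = 2.5      # m, along each streamer
streamer_separation = 7.5   # m, held by the two retractable booms
f_max = 650.0               # Hz, upper frequency of the 15/15 Mini G.I air gun
v_sediment = 1500.0         # m/s, assumed velocity in water / unconsolidated sediment

inline_bin = receiver_spacing / 2         # -> 1.25 m, as quoted in the abstract
crossline_bin = streamer_separation / 2   # -> 3.75 m, as quoted in the abstract
vertical_resolution = v_sediment / (2 * f_max)  # half wavelength, ~1.15 m (abstract: 1.1 m)

print(f"bin: {inline_bin} m (in-line) x {crossline_bin} m (cross-line), "
      f"vertical resolution ~ {vertical_resolution:.2f} m")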
Abstract:
This Ph.D. thesis was carried out in parallel with a European project funded by the fourth framework programme of the European Commission (DG XII, Standards, Measurement and Testing). This project, named SMT-CT98-2277, was funded for the Swiss part by the Federal Office of Education and Science (OFES, Bern, Switzerland). The aim of the project was to develop a harmonised, collaboratively tested method for the impurity profiling of illicit amphetamine by capillary gas chromatography. The work was divided into seven main tasks, which dealt with the synthesis of amphetamine, the identification of impurities, the optimization of sample preparation and of the chromatographic system, the variability of the results, the investigation of numerical methods for the classification and comparison of profiles and, finally, the application of the methodology to real illicit samples. The resulting method has not only been shown to produce interchangeable data between different laboratories, but was also found to be superior in many respects to previously published methods.
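The abstract does not say which numerical comparison methods were retained; as a purely illustrative sketch of the kind of calculation used in impurity profiling, the following normalises the impurity peak areas of two chromatograms and scores their similarity with a Pearson correlation. The function names, peak areas and the choice of metric are mine, not the thesis's.

# Illustrative only: one common way to compare impurity profiles from capillary GC.
import math

def normalize(profile):
    """Scale a list of impurity peak areas so they sum to 1."""
    total = sum(profile)
    return [a / total for a in profile] if total else profile

def pearson(x, y):
    """Pearson correlation between two equally long normalized profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Two hypothetical seizures, peak areas for the same set of target impurities:
sample_a = normalize([120.0, 35.0, 0.0, 410.0, 15.0])
sample_b = normalize([130.0, 30.0, 5.0, 395.0, 10.0])
print(f"profile similarity: {pearson(sample_a, sample_b):.3f}")  # near 1 suggests linked batches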
Abstract:
Woven monofilament, multifilament, and spun yarn filter media have long been the standard media in liquid filtration equipment. While the energy for a solid-liquid separation process is determined by the engineering work, it is the interface between the slurry and the equipment - the filter medium - that greatly affects the performance characteristics of the unit operation. Those skilled in the art are well aware that a poorly designed filter medium may endanger the whole operation, whereas well-performing filter media can make the operation smooth and economical. As mineral and pulp producers seek to produce ever finer and more refined fractions of their products, it is becoming increasingly important to be able to dewater slurries with average particle sizes around 1 µm using conventional, high-capacity filtration equipment. Furthermore, the surface properties of the media must not allow sticky and adhesive particles to adhere to the media. The aim of this thesis was to test how the dirt-repellency, electrical resistance and high-pressure filtration performance of selected woven filter media can be improved by modifying the fabric or yarn with coating, chemical treatment and calendering. The results achieved by chemical surface treatments clearly show that the surface properties of woven media can be modified to achieve lower electrical resistance and improved dirt-repellency. The main challenge with the chemical treatments is abrasion resistance and, while the experimental results indicate that the treatment is sufficiently permanent to resist standard weathering conditions, it may still prove to be inadequately durable in actual use. From the pressure filtration studies in this work, it seems obvious that the conventional woven multifilament fabrics still perform surprisingly well against the coated media in terms of filtrate clarity and cake build-up. Especially in cases where the feed slurry concentration was low and the pressures moderate, the conventional media seemed to outperform the coated media. In the cases where the feed slurry concentration was high, the tightly woven media performed well against the monofilament reference fabrics, but seemed to do worse than some of the coated media. This result is somewhat surprising in that the high initial specific resistance of the coated media would suggest that the media will blind more easily than the plain woven media. The results indicate, however, that it is actually the woven media that gradually clog during the course of filtration. In conclusion, it seems obvious that there is a pressure limit above which the woven media lose their capacity to keep the solid particles from penetrating the structure. This finding suggests that for extreme pressures the only foreseeable solution is coated fabrics supported by a woven fabric strong enough to hold the structure together. Having said that, the high-pressure filtration process seems to follow somewhat different laws than the more conventional processes. Based on the results, it may well be that the role of the cloth is above all to support the cake, and that the main performance-determining factor is a long lifetime. Measuring the pore size distribution with a commercially available porometer gives a fairly accurate picture of the pore size distribution of a fabric, but fails to give insight into which of the pore sizes is the most important in determining the flow through the fabric.
Historically, air and sometimes water permeability measures have been the standard for evaluating media filtration performance, including particle retention. Permeability, however, is a function of a multitude of variables and does not directly allow the estimation of the effective pore size. In this study, a new method for estimating the effective pore size and open pore area in a densely woven multifilament fabric was developed. The method combines a simplified equation for the electrical resistance of the fabric with the Hagen-Poiseuille flow equation to estimate the effective pore size of a fabric and the total open area of pores. The results are validated by comparison with the measured values of the largest pore size (bubble point) and the average pore size. The results show good correlation with the measured values. However, the measured and estimated values tend to diverge for high weft density fabrics. This phenomenon is thought to be a result of the more tortuous flow path of denser fabrics, and could most probably be remedied by using another value for the tortuosity factor.
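The abstract gives the idea but not the equations. One plausible reading, used here only as an illustration, is that the electrolyte resistance of the saturated fabric yields the total open pore area, which is then inserted into the Hagen-Poiseuille relation for identical parallel capillaries. The symbols and all numerical values below are assumptions; the thesis's exact formulation may differ.

# Hypothetical reconstruction of the combined electrical-resistance /
# Hagen-Poiseuille estimate described above.
import math

def open_pore_area(rho_electrolyte, thickness, fabric_resistance):
    """Total open area A of the pores, treating the electrolyte-filled fabric
    as a conductor of length `thickness`: R = rho * L / A  =>  A = rho * L / R."""
    return rho_electrolyte * thickness / fabric_resistance

def effective_pore_radius(flow_rate, viscosity, thickness, open_area, delta_p):
    """Effective radius r of identical parallel capillaries carrying the measured
    flow: Q = A * r^2 * dP / (8 * mu * L)  (Hagen-Poiseuille with n*pi*r^2 = A)."""
    return math.sqrt(8 * viscosity * thickness * flow_rate / (open_area * delta_p))

A = open_pore_area(rho_electrolyte=0.7,      # ohm*m, assumed electrolyte resistivity
                   thickness=0.6e-3,         # m, fabric thickness
                   fabric_resistance=12.0)   # ohm, measured across the wetted fabric
r = effective_pore_radius(flow_rate=2.0e-6,  # m^3/s through the sample
                          viscosity=1.0e-3,  # Pa*s, water
                          thickness=0.6e-3,
                          open_area=A,
                          delta_p=2.0e4)     # Pa
print(f"open pore area ~ {A*1e6:.1f} mm^2, effective pore radius ~ {r*1e6:.1f} um")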
Abstract:
This research was undertaken to study the influence of different concentrations of the MT medium, sucrose, vitamins, activated charcoal and gibberellic acid (GA3) on the culture of immature embryos from the crossing between 'Pêra Rio' sweet orange and 'Poncã' mandarin. The embryos were excised under aseptic conditions and inoculated in 15 mL of the MT medium according to the following experiments: 1) MT concentrations (0%, 50%, 100%, 150% and 200%) supplemented with 0, 30, 60 and 90 g.L-1 of sucrose; 2) vitamin concentrations of the MT medium (0%, 50%, 100%, 150% and 200%) supplemented with 0, 30, 60 and 90 g.L-1 of sucrose; 3) activated charcoal concentrations (0, 0.5, 1, 1.5 and 2 g.L-1) supplemented with GA3 (0, 0.01, 0.1, 1 and 10 mg.L-1). After the inoculation, the embryos were kept in a growth room for 90 days at 27 ± 1 °C, in a 16-hour photoperiod with 32 µmol.m-2.s-1 of irradiance. The best development of embryos at the globular stage was achieved using 50% and 100% of the MT medium plus 60 g.L-1 and 90 g.L-1 of sucrose, respectively, supplemented with 0.01 mg.L-1 of GA3. The addition of activated charcoal or vitamins to the MT medium was shown to be unnecessary for the development of globular embryos.
Abstract:
The fiber recovery process is an essential part of the modern paper mill. It creates the basis for the mill's internal recirculation of its most important raw materials - water and fiber. It is normally also the starting point for further treatment of wastewater and, if it works efficiently, it offers an excellent basis for minimizing effluents. This dissertation offers two different approaches to the subject. Firstly, a novel save-all disc filter feeding system is developed and presented. This so-called precoat method is tested both in laboratory and full-scale conditions. At laboratory scale it clearly outperforms the traditional method when low-freeness pulps are used as a sweetener stock. The full-scale application still needs some development work before it can be implemented in paper mills. Secondly, the operating environment of the save-all disc filter is studied, mostly in laboratory conditions. The focus of this study is on cases where low-freeness pulps are used as the sweetener stock of the save-all filter. The effects of CSF value, pressure drop, suspension consistency and retention chemicals on the quantity and quality of the filtrate were studied. The filtration resistance of the low-freeness pulps was also studied.
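The abstract does not spell out how the filtration resistance was quantified; as background only (standard cake-filtration theory, not a result quoted from the dissertation), the usual starting point for such measurements is the constant-pressure filtration relation

\frac{t}{V} \;=\; \frac{\mu\,\alpha\,c}{2\,A^{2}\,\Delta p}\,V \;+\; \frac{\mu\,R_m}{A\,\Delta p},

where V is the cumulative filtrate volume at time t, A the filter area, \Delta p the pressure drop, \mu the filtrate viscosity, c the mass of deposited solids per unit filtrate volume, \alpha the specific cake resistance and R_m the medium resistance; plotting t/V against V gives \alpha from the slope and R_m from the intercept.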
Abstract:
In this thesis, membrane filtration equipment for plate-type ceramic membranes was developed based on filtration results achieved with different kinds of wastewater. The experiments were mainly made with pulp and board mill wastewaters, but some experiments were also made with bore well water and a stone cutting mine wastewater. The ceramic membranes used were alpha-alumina membranes with a pore size of 100 nm. Some of the membranes were coated with a gamma-alumina layer to reduce the membrane pore size to 10 nm, and some of them were modified with different metal oxides in order to change the surface properties of the membranes. The effects of operation parameters, such as cross-flow velocity, filtration pressure and backflushing, on filtration performance were studied. The measured parameters were the permeate flux, the quality of the permeate, as well as the fouling tendency of the membrane. A dynamic membrane or a cake layer forming on top of the membrane was observed to decrease the flux and increase the separation of certain substances, especially at low cross-flow velocities. When the cross-flow velocities were increased, the membrane properties became more important. Backflushing could also be used to decrease the thickness of the cake layer and thus it improved the permeate flux. However, backflushing can lead to a reduction of retentions in cases where the cake layer is improving them. The wastewater quality was important for the thickness of the dynamic membrane, and the membrane pore size influenced the permeate flux. In general, the optimization of operating conditions is very important for the successful operation of a membrane filtration system. Filtration equipment with a reasonable range of operational conditions is necessary, especially when different kinds of wastewater are treated. This should be taken into account already at the development stage of the filtration equipment.
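As background for the flux and fouling observations above (a standard framework in membrane-filtration theory, not an equation quoted from the thesis), the permeate flux in cross-flow filtration is commonly described with a resistance-in-series model:

J \;=\; \frac{\Delta p_{TM}}{\mu\,\bigl(R_m + R_c + R_f\bigr)},

where J is the permeate flux, \Delta p_{TM} the transmembrane pressure, \mu the permeate viscosity, R_m the intrinsic membrane resistance, R_c the resistance of the dynamic membrane or cake layer and R_f the resistance due to fouling. Higher cross-flow velocity and backflushing act mainly by keeping R_c small, which is consistent with the flux improvements reported above.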
Abstract:
The kernel of the cutia nut (castanha-de-cutia, Couepia edulis (Prance) Prance) of the western Amazon, which is consumed by the local population, has traditionally been extracted from the nut with a machete, a dangerous procedure that only produces kernels cut in half. A prototype shelling machine, which produces whole kernels without serious risks to its operator, is described and tested. The machine makes a circular cut in the central part of the fruit shell, perpendicular to its main axis. Three ways of conditioning the fruits before cutting were compared: (1) control; (2) oven drying immediately prior to cutting; (3) oven drying, followed by a 24-hour interval before cutting. The time needed to extract and separate the kernel from the endocarp and testa was measured. Treatment 3 produced the highest output, 63 kernels per hour, the highest percentage of whole kernels (90%), and the best kernel taste. Kernel extraction with treatment 3 required 50% less time than treatment 1, while treatment 2 needed 38% less time than treatment 1. The proportion of kernels attached to the testa was 93%, 47%, and 8% for treatments 1, 2, and 3, respectively, and was the main reason for the differences in extraction time.
Abstract:
Postmortem MRI (PMMR) examinations are seldom performed in legal medicine due to long examination times, unfamiliarity with the technique, and high costs. Furthermore, it is difficult to obtain access to an MRI device used for patients in clinical settings to image an entire human body. An alternative is available: ex situ organ examination. To our knowledge, there is no standardized protocol that includes ex situ organ preparation and scanning parameters for postmortem MRI. Thus, our objective was to develop a standard procedure for ex situ heart PMMR examinations. We also tested the oily contrast agent Angiofil®, commonly used for postmortem computed tomography (PMCT) angiography, for its applicability in MRI. We worked with a 3 Tesla MRI device and 32-channel head coils. Twelve porcine hearts were used to test different materials to find the best way to prepare and place organs in the device and to test scanning parameters. For coronary MR angiography, we tested different mixtures of Angiofil® and different injection materials. In a second step, 17 human hearts were examined to test the procedure and its applicability to human organs. We established two standardized protocols: one for the preparation of the heart and another for the scanning parameters, based on experience in clinical practice. The established protocols enabled a standardized technical procedure with comparable radiological images, allowing for easy radiological reading. The performance of coronary MR angiography enabled detailed coronary assessment and revealed the utility of Angiofil® as a contrast agent for PMMR. Our simple, reproducible method for performing heart examinations ex situ yields high-quality images and visualization of the coronary arteries.