Abstract:
Leprosy is a contagious and chronic systemic granulomatous disease caused by Mycobacterium leprae (Hansen's bacillus). It is transmitted from person to person and has a long incubation period (between two and six years). The disease presents polar clinical forms (the "multibacillary" lepromatous leprosy and the "paucibacillary" tuberculoid leprosy), as well as other intermediate forms with hybrid characteristics. Oral manifestations usually appear in lepromatous leprosy and occur in 20-60% of cases. They may take the form of multiple nodules (lepromas) that progress to necrosis and ulceration. The ulcers are slow to heal and produce atrophic scarring or even tissue destruction. The lesions are usually located on the hard and soft palate, on the uvula, on the underside of the tongue, and on the lips and gums. There may also be destruction of the anterior maxilla and loss of teeth. The diagnosis, based on clinical suspicion, is confirmed through bacteriological and histopathological analyses, as well as by means of the lepromin test (an intradermal reaction that is usually negative in the lepromatous form and positive in the tuberculoid form). The differential diagnosis includes systemic lupus erythematosus, sarcoidosis, cutaneous leishmaniasis and other skin diseases, tertiary syphilis, lymphomas, systemic mycoses, traumatic lesions and malignant neoplasias, among other disorders. Treatment is difficult, as it must be continued for long periods, requires several drugs with adverse effects and proves very expensive, particularly for less developed countries. The most commonly used drugs are dapsone, rifampicin and clofazimine. Quinolones, such as ofloxacin and pefloxacin, as well as some macrolides, such as clarithromycin and minocycline, are also effective. The present case report describes a patient with lepromatous leprosy acquired within a contagious family setting during childhood and adolescence.
Abstract:
This abstract presents how we redesigned, with user-centred design methods, the way we organize and present the content on the UOC Virtual Library website. The content is now offered in a way that is more intuitive, usable and easy to understand, based on criteria of customization, transparency and proximity. The techniques used to achieve these objectives included benchmarking, interviews and focus groups during the user requirement capture phase, and user tests to assess the process and results.
Abstract:
The size-advantage model (SAM) explains the temporal variation of energetic investment in reproductive structures (i.e. male and female gametes and reproductive organs) in long-lived hermaphroditic plants and animals. It proposes that an increase in the resources available to an organism induces a higher relative investment in the most energetically costly sexual structures. In plants, pollination interactions are known to play an important role in the evolution of floral features. Because the SAM directly concerns flower characters, pollinators are expected to have a strong influence on the applicability of the model. This hypothesis, however, has never been tested. Here, we investigate whether the identity and diversity of pollinators can be used as a proxy to predict the applicability of the SAM in exclusively zoophilous plants. We present a new approach to unravel the dynamics of the model and test it on several widespread Arum (Araceae) species. By identifying the species composition, abundance and spatial variation of arthropods trapped in inflorescences, we show that some species (i.e. A. cylindraceum and A. italicum) display a generalist reproductive strategy, relying on the exploitation of a low number of dipterans, in contrast to the pattern seen in the specialist A. maculatum (pollinated specifically by only two fly species). Based on the model presented here, the application of the SAM is predicted for the first two species and not expected in the last, predictions that are further confirmed by allometric measures. We demonstrate that while an increase in the female zone occurs in larger inflorescences of the generalist species, this does not happen in the species with specific pollinators. This is the first time that this theory is both proposed and empirically tested in zoophilous plants. Its overall biological importance is discussed through its application to other, non-Arum systems.
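The allometric check mentioned above lends itself to a compact illustration. The following is a minimal Python sketch, assuming a log-log regression of female zone size against inflorescence size; the data, variable names and fitting choice are invented, not the study's actual measurements or procedure:

```python
# Hypothetical allometric test: regress log(female zone length) on
# log(spadix length). A slope above 1 (positive allometry) would mean that
# larger inflorescences invest relatively more in the female zone.
import numpy as np
from scipy.stats import linregress

spadix_length = np.array([4.1, 5.3, 6.0, 7.2, 8.5, 9.8, 11.0])        # cm, invented
female_zone   = np.array([0.50, 0.72, 0.88, 1.15, 1.52, 1.90, 2.31])  # cm, invented

res = linregress(np.log(spadix_length), np.log(female_zone))
print(f"slope = {res.slope:.2f} +/- {res.stderr:.2f}, r^2 = {res.rvalue**2:.3f}")
```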
Abstract:
The development of wireless broadband communication technology has raised interest in its professional use for public safety and crisis management. In emergencies, existing fixed communication systems are often not available at all, or the capacity they provide is insufficient. For this reason, a need has emerged for rapidly deployable, self-contained wireless broadband systems. The purpose of this master's thesis is to study wireless ad hoc multi-hop networks from the standpoint of public safety needs and to implement a test bed that can be used to demonstrate and study the operation of such a system in practice. The work examines point-to-point and, in particular, point-to-multipoint communication. The measurements cover the test bed's data transfer rate, transmission power and receiver sensitivity. These results are used as simulator parameters so that the simulator output is as realistic as possible and consistent with the test bed. A selection of applications and application models matching public safety requirements is then chosen, and their performance is measured under different routing methods, both on the test bed and in the simulator. The results are evaluated and compared. Of the applications, multicast multi-hop video was chosen as the main subject of the study, and it and its characteristics are also to be examined in real field trials.
Abstract:
There is currently a considerable diversity of quantitative measures available for summarizing the results in single-case studies. Given that the interpretation of some of them is difficult due to the lack of established benchmarks, the current paper proposes an approach for obtaining further numerical evidence on the importance of the results, complementing the substantive criteria, visual analysis, and primary summary measures. This additional evidence consists of obtaining the statistical significance of the outcome when referred to the corresponding sampling distribution. This sampling distribution is formed by the values of the outcomes (expressed as data nonoverlap, R-squared, etc.) in case the intervention is ineffective. The approach proposed here is intended to offer the outcome's probability of being as extreme when there is no treatment effect, without the need for assumptions that cannot be checked with guarantees. Following this approach, researchers would compare their outcomes to reference values rather than constructing the sampling distributions themselves. The integration of single-case studies is problematic when different metrics are used across primary studies and not all raw data are available. Via the approach for assigning p values, it is possible to combine the results of similar studies regardless of the primary effect size indicator. The alternatives for combining probabilities are discussed in the context of single-case studies, pointing out two potentially useful methods: one based on a weighted average and the other on the binomial test.
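As a rough illustration of the two combination ideas named at the end of this abstract (a weighted average of transformed p values and a binomial test on the count of significant results), a minimal Python sketch with invented data might look like this; the weights and alpha level are assumptions, not the paper's prescriptions:

```python
# Combining per-study p values from several single-case studies.
import numpy as np
from scipy.stats import norm, binom

p_values = np.array([0.03, 0.20, 0.01, 0.08])  # hypothetical per-study p values
weights  = np.array([20, 12, 25, 15])          # e.g. number of measurements per study

# 1) Weighted average approach (Stouffer-type): transform p values to z scores,
#    pool them with weights, and convert the pooled z back to a probability.
z = norm.isf(p_values)                               # one-sided z for each study
z_pooled = np.sum(weights * z) / np.sqrt(np.sum(weights**2))
p_pooled = norm.sf(z_pooled)

# 2) Binomial test: count how many studies reach significance at alpha, and ask
#    how likely that many "hits" would be if all null hypotheses were true.
alpha = 0.05
hits = int(np.sum(p_values < alpha))
p_binom = binom.sf(hits - 1, len(p_values), alpha)   # P(X >= hits)

print(f"pooled p (weighted z): {p_pooled:.4f}")
print(f"binomial test p ({hits}/{len(p_values)} significant): {p_binom:.4f}")
```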
Abstract:
Nowadays the variety of fuels used in power boilers is widening, and new boiler constructions and operating models have to be developed. This research and development is done in small pilot plants, where a faster analysis of the boiler mass and heat balance is needed so that the right decisions can be identified and made already during the test run. The barrier to determining the boiler balance during test runs is the long process of chemical analyses of the collected input and output matter samples. The present work concentrates on finding a way to determine the boiler balance without chemical analyses and on optimising the test rig to get the best possible accuracy for the heat and mass balance of the boiler. The purpose of this work was to create an automatic boiler balance calculation method for the 4 MW CFB/BFB pilot boiler of Kvaerner Pulping Oy located in Messukylä in Tampere. The calculation was created in the data management computer of the pilot plant's automation system. The calculation is made in a Microsoft Excel environment, which provides a good basis and functions for handling large databases and calculations without any delicate programming. The automation system in the pilot plant was reconstructed and updated by Metso Automation Oy during 2001, and the new system, MetsoDNA, has good data management properties, which is necessary for large calculations such as the boiler balance calculation. Two possible methods for calculating the boiler balance during a test run were found. Either the fuel flow is determined and used to calculate the boiler's mass balance, or the unburned carbon loss is estimated and the mass balance of the boiler is calculated on the basis of the boiler's heat balance. Both methods have their own weaknesses, so they were implemented in parallel in the calculation and the choice of method was left to the user. The user also needs to define the fuels used and some solid mass flows that are not measured automatically by the automation system. A sensitivity analysis showed that the most essential values for accurate boiler balance determination are the flue gas oxygen content, the boiler's measured heat output and the lower heating value of the fuel. The theoretical part of this work concentrates on the error management of these measurements and analyses, and on measurement accuracy and boiler balance calculation in theory. The empirical part concentrates on the creation of the balance calculation for the boiler in question and on describing the work environment.
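As a rough illustration of the two parallel routes described (the formulas and values below are deliberately reduced to one line per method and are illustrative only; the real Excel calculation involves full combustion stoichiometry and flue gas analysis), a Python sketch might look like this:

```python
# Two parallel routes to the boiler balance, reduced to their simplest form.

def heat_output_from_fuel_flow(fuel_flow_kg_s, lhv_mj_kg, unburned_loss=0.02):
    """Method 1: a known fuel flow fixes the mass balance; the heat output
    follows from the fuel energy minus the unburned-carbon loss."""
    return fuel_flow_kg_s * lhv_mj_kg * (1.0 - unburned_loss)   # MW

def fuel_flow_from_heat_output(heat_output_mw, lhv_mj_kg, unburned_loss=0.02):
    """Method 2: an estimated unburned-carbon loss plus the measured heat
    output yields the fuel flow, from which the mass balance follows."""
    return heat_output_mw / (lhv_mj_kg * (1.0 - unburned_loss))  # kg/s

lhv = 20.0     # MJ/kg, lower heating value of the fuel (illustrative)
q_meas = 4.0   # MW, measured boiler heat output (illustrative)

m_fuel = fuel_flow_from_heat_output(q_meas, lhv)
print(f"method 2 fuel flow: {m_fuel:.3f} kg/s")
print(f"method 1 heat output check: {heat_output_from_fuel_flow(m_fuel, lhv):.2f} MW")
```

The sensitivity finding quoted above follows directly from this structure: the flue gas oxygen content constrains the combustion side, while the measured heat output and the fuel's lower heating value set the scale of both balances.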
Abstract:
The aim of this work is to develop a value innovation for a business network. To make the theoretical framework of value innovation concrete and to evaluate it, the work uses the ProPortfolio software product and the experience gained in its development as a case. ProPortfolio is a software application based on mobile technology, developed between 2001 and 2004. The ProPortfolio software is an enterprise resource planning system for the contractor network of single-family house projects, developed under the leadership of the management consultancy Tuloskunto Oy in a project funded by TEKES. Achieving long-term competitive advantage by standing out from the mass of competitors is a growing challenge for companies. The starting point of the value innovation and strategic planning process developed by the researchers Kim and Mauborgne is to sidestep competition by differentiating from competitors and creating new markets. According to traditional strategic thinking, companies can deliver higher value to customers at higher cost, or reasonable value at lower cost. In other words, strategic positioning is based on a choice between differentiation and cost leadership (Porter, 1985). Positioning based on value innovation pursues both differentiation and cost leadership simultaneously. The key criteria of value innovation were met in the ProPortfolio software product that resulted from the development project. On the basis of this work, it can be stated that the strategy planning process developed by Kim and Mauborgne is extensive and demanding. A value innovation rarely emerges as a momentary insight; rather, it develops as the result of systematic work. The formation of the ProPortfolio product's value innovation was based on the development project's successful identification and anticipation of customer needs. Forming a clear picture of the problems to be solved helped position the project's further development correctly and created the conditions for the value innovation to emerge. The researchers Kim and Mauborgne have challenged the traditional models of strategy work. Based on their research results, they have developed a new theoretical framework and process for strategic planning founded on value innovation. These models will certainly leave a lasting mark on current strategy practices.
Abstract:
Introduction: We report a case of digoxin intoxication with severe visual symptoms. Patients (or Materials) and Methods: Digoxin 0.25 mg QD for atrial fibrillation was prescribed to a 91-year-old woman with an estimated creatinine clearance of 18 mL/min. Within 2 to 3 weeks, she developed nausea, vomiting, and dysphagia, and began complaining of snowy and blurry vision, photopsia, dyschromatopsia, aggravated bedtime visual and proprioceptive illusions (she felt as if she were on a boat), and colored hallucinations. She consulted her family doctor twice and visited the eye clinic once until, 1 month after starting digoxin, impaired autonomy led her to be admitted to the emergency department. Results: Digoxin intoxication was confirmed by a high plasma level measured on admission (5.7 μg/L; reference range, 0.8-2 μg/L). After stopping digoxin, the general symptoms resolved in a few days, but the visual symptoms persisted. Ophthalmologic care and follow-up diagnosed digoxin intoxication superimposed on pre-existing left eye (LE) cataract, dry age-related macular degeneration (AMD), and Charles Bonnet syndrome. Visual acuity was 0.4 (right eye, RE) and 0.5 (LE). The ocular fundus was physiologic except for bilateral dry AMD. Dyschromatopsia was confirmed by poor results on the Ishihara test (1/13 OU). Computerized visual field results revealed nonspecific diffuse alterations. Full-field electroretinogram (ERG) showed moderate diffuse rod and cone dysfunction. Visual symptoms progressively improved over the next 2 months, but the ERG did not. Complete resolution was not expected due to the pre-existing eye disease. The patient was finally discharged home after a 5-week hospital stay. Conclusion: Digoxin intoxication can go unrecognized by clinicians, even in a typical presentation. The range of potential visual symptoms is far greater than the isolated xanthopsia (yellow vision) classically described in textbooks. Newly introduced drugs and all symptoms must be actively sought, because they significantly affect quality of life and global functioning, especially in the elderly population, which is the most liable not to mention them.
Abstract:
This thesis attempts to find out whether scenario planning, as a method for addressing uncertainty, supports organizational strategy. The main issues are why, what and how scenario planning fits into organizational strategy and how the process could be supported to make it more effective. The study follows the constructive approach. It starts with an examination of competitive advantage, the way an organization develops strategy, and how it addresses the uncertainty in its operational environment. Based on the literature review conducted, scenario methods would seem to provide a versatile platform for addressing future uncertainties. The construction is formed by examining the scenario methods and presenting suitable support methods, which results in the forming of a theoretical proposition for a supported scenario process. The theoretical framework is tested in laboratory conditions, and the results from the test sessions are used as a basis for scenario stories. The process of forming the scenarios and the results are illustrated and presented for scrutiny.
Abstract:
Conservation biology is commonly associated with the protection of small, endangered populations. Nevertheless, large or potentially large populations may also need human management to prevent the negative effects of overpopulation. As there are both qualitative and quantitative differences between protecting small populations and controlling large ones, distinct methods and models are needed. The aim of this work was to develop theoretical models to predict large population dynamics, as well as computer tools to assess the parameters of these models and to test management scenarios. The Alpine ibex (Capra ibex ibex), which has experienced a spectacular increase since its reintroduction in Switzerland at the beginning of the 20th century, was used as the paradigm species. This task was achieved in three steps. A local population dynamics model was first developed specifically for the ibex: the underlying age- and sex-structured model is based on a Leslie matrix approach with the addition of density-dependence, environmental stochasticity and culling. This model was implemented in a management-support software, named SIM-Ibex, allowing census data maintenance, automated parameter assessment, and the tuning and simulation of culling strategies. However, population dynamics is driven not only by demographic factors but also by dispersal and the colonization of new areas. Habitat suitability and obstacle modelling therefore had to be addressed. Thus, a software package named Biomapper was developed. Its central module is based on the Ecological Niche Factor Analysis (ENFA), whose principle is to compute niche marginality and specialization factors from a set of environmental predictors and species presence data. All Biomapper modules are linked to Geographic Information Systems (GIS); they cover all operations of data importation, predictor preparation, ENFA and habitat suitability map computation, result validation and further processing; a module also allows the mapping of dispersal barriers and corridors. The application domain of the ENFA was then explored by means of a simulated species distribution. It was compared to a commonly used habitat suitability assessment method, the Generalized Linear Model (GLM), and proved better suited for spreading or cryptic species. Demographic and landscape information was finally merged into a global model. To cope with landscape realism and the technical constraints of large population modelling, a cellular automaton approach was chosen: the study area is modelled by a lattice of hexagonal cells, each characterized by a few fixed properties (a carrying capacity and six impermeability rates quantifying exchanges between adjacent cells) and one variable, population density. The latter varies according to local reproduction/survival and dispersal dynamics, modified by density-dependence and stochasticity. A software tool named HexaSpace was developed, which achieves two functions: (1) calibrating the automaton on the basis of local population dynamics models (e.g., computed by SIM-Ibex) and a habitat suitability map (e.g., computed by Biomapper); (2) running simulations. It allows studying the spread of an invading species across a complex landscape made of variously suitable areas and dispersal barriers. This model was applied to the history of the ibex reintroduction in the Bernese Alps (Switzerland). SIM-Ibex is now used by governmental wildlife managers to prepare and verify culling plans. Biomapper has been applied to several species (both plants and animals) around the world. In the same way, while HexaSpace was originally designed for terrestrial animal species, it could easily be extended to model plant propagation or the dispersal of flying animals. As these software tools were designed to proceed from low-level data to a complex, realistic model, and as they benefit from an intuitive user interface, they may have many conservation applications. Moreover, theoretical questions in the fields of population and landscape ecology might also be addressed by these approaches.
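To make the demographic core concrete, here is a minimal Python sketch of a Leslie-matrix projection with density-dependence, environmental stochasticity and culling, of the kind described for SIM-Ibex. The age classes, vital rates, logistic brake and cull rate are all invented for illustration; the actual model is also sex-structured and parameterized from census data:

```python
# Toy age-structured projection: Leslie matrix plus density-dependent births,
# multiplicative environmental noise and a fixed proportional yearly cull.
import numpy as np

rng = np.random.default_rng(0)

fecundity = np.array([0.0, 0.2, 0.45, 0.45])  # recruits per head per age class (invented)
survival  = np.array([0.60, 0.85, 0.90])      # survival to the next age class (invented)
K = 500.0                                     # carrying capacity (invented)

def project(n, years=50, cull_rate=0.05):
    """Project abundance per age class forward in time."""
    n = n.astype(float).copy()
    for _ in range(years):
        env = rng.normal(1.0, 0.1)             # good/bad year multiplier
        brake = max(0.0, 1.0 - n.sum() / K)    # logistic-style density brake
        L = np.zeros((4, 4))
        L[0, :] = fecundity * brake * env      # density-dependent recruitment
        L[1, 0] = survival[0] * env
        L[2, 1] = survival[1] * env
        L[3, 2] = survival[2] * env
        L[3, 3] = survival[2] * env            # the oldest class may persist
        n = L @ n
        n *= (1.0 - cull_rate)                 # regulation culling
    return n

print(project(np.array([100, 80, 60, 40])).round(1))
```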
Abstract:
An efficient high-resolution three-dimensional (3-D) seismic reflection system for small-scale targets in lacustrine settings was developed. In Lake Geneva, near the city of Lausanne, Switzerland, past high-resolution two-dimensional (2-D) investigations revealed a complex fault zone (the Paudèze thrust zone), which was subsequently chosen for testing our system. Observed structures include a thin (<40 m) layer of subhorizontal Quaternary sediments that unconformably overlie southeast-dipping Tertiary Molasse beds, as well as the Paudèze thrust zone, which separates Plateau and Subalpine Molasse units. Two complete 3-D surveys have been conducted over this same test site, covering an area of about 1 km².
In 1999, a pilot survey (Survey I), comprising 80 profiles, was carried out in 8 days with a single-streamer configuration. In 2001, a second survey (Survey II) used a newly developed three-streamer system with optimized design parameters, which provided an exceptionally high-quality data set of 180 common midpoint (CMP) lines in 9 days. The main improvements include a navigation and shot-triggering system with in-house navigation software that automatically fires the gun in combination with real-time control of navigation quality using differential GPS (dGPS) onboard and a reference base near the lake shore. Shots were triggered at 5-m intervals with a maximum non-cumulative error of 25 cm. Whereas the single 48-channel streamer system of Survey I requires extrapolation of receiver positions from the boat position, for Survey II they could be accurately calculated (error <20 cm) with the aid of three additional dGPS antennas mounted on rafts attached to the end of each of the 24-channel streamers. Towed at a distance of 75 m behind the vessel, they allow the determination of feathering due to cross-line currents or small course variations. Furthermore, two retractable booms hold the three streamers at a distance of 7.5 m from each other, which is the same distance as the sail line interval for Survey I. With a receiver spacing of 2.5 m, the bin dimension of the 3-D data of Survey II is 1.25 m in the in-line direction and 3.75 m in the cross-line direction. The greater cross-line versus in-line spacing is justified by the known structural trend of the fault zone perpendicular to the in-line direction. The data from Survey I showed some reflection discontinuity as a result of insufficiently accurate navigation and positioning and subsequent binning errors. Observed aliasing in the 3-D migration was due to insufficient lateral sampling combined with the relatively high frequency (<2000 Hz) content of the water gun source (operated at 140 bars and 0.3 m depth). These results motivated the use of a double-chamber bubble-canceling air gun for Survey II. A 15/15 Mini G.I. air gun, operated at 80 bars and 1 m depth, proved to be better adapted for imaging the complexly faulted target area, which has reflectors dipping up to 30°. Although its frequencies do not exceed 650 Hz, this air gun combines a penetration of non-aliased signal to depths of 300 m below the water bottom (versus 145 m for the water gun) with a maximum vertical resolution of 1.1 m. While Survey I was shot in patches of alternating directions, the optimized surveying time of the new three-streamer system allowed acquisition in parallel geometry, which is preferable when using an asymmetric configuration (single source and receiver array). Otherwise, the resulting stacks are different for the opposite directions. However, the shorter streamer configuration of Survey II reduced the nominal fold from 12 to 6. A conventional 3-D processing flow was adapted to the high sampling rates and was complemented by two computer programs that format the unconventional navigation data to industry standards. Processing included trace editing, geometry assignment, bin harmonization (to compensate for uneven fold due to boat/streamer drift), spherical divergence correction, bandpass filtering, velocity analysis, 3-D DMO correction, stack and 3-D time migration. A detailed semblance velocity analysis was performed on the 12-fold data set for every second in-line and every 50th CMP, i.e. on a total of 600 spectra.
According to this velocity analysis, interval velocities range from 1450-1650 m/s for the unconsolidated sediments and from 1650-3000 m/s for the consolidated sediments. Delineation of several horizons and fault surfaces reveals the potential for small-scale geologic and tectonic interpretation in three dimensions. Five major seismic facies and their detailed 3-D geometries can be distinguished in vertical and horizontal sections: lacustrine sediments (Holocene), glaciolacustrine sediments (Pleistocene), Plateau Molasse, Subalpine Molasse and its thrust fault zone. Dips of beds within the Plateau and Subalpine Molasse are ~8° and ~20°, respectively. Within the fault zone, many highly deformed structures with dips around 30° are visible. Preliminary tests with 3-D preserved-amplitude prestack depth migration demonstrate that the excellent data quality of Survey II allows the application of such sophisticated techniques even to high-resolution seismic surveys. In general, the adaptation of the 3-D marine seismic reflection method, which to date has almost exclusively been used by the oil exploration industry, to a smaller geographical as well as financial scale has helped pave the way for applying this technique to environmental and engineering purposes.

Seismic reflection is a method for investigating the subsurface with very high resolving power. It consists of sending vibrations into the ground and recording the waves that reflect off geological discontinuities at different depths and then return to the surface. The signals collected in this way not only provide information on the nature and geometry of the layers present, but also allow a geological interpretation of the subsurface. For example, in the case of sedimentary rocks, seismic reflection profiles make it possible to determine their mode of deposition, any deformation or fracturing, and thus their tectonic history. Seismic reflection is the principal method of petroleum exploration. For a long time, seismic reflection data were acquired along profiles that provide a two-dimensional image of the subsurface. The images obtained in this way are only partially accurate, since they do not take into account the three-dimensional nature of geological structures. Over the past few decades, three-dimensional (3-D) seismics has breathed new life into the study of the subsurface. Although it is now fully mastered for imaging large geological structures both onshore and offshore, its adaptation to the lacustrine or fluvial scale has so far been the subject of only a few studies. This thesis work consisted of developing a seismic acquisition system similar to that used for offshore petroleum prospecting, but adapted to lakes. It is therefore smaller, easier to deploy and, above all, yields final images of much higher resolution. Whereas the oil industry is often limited to a resolution on the order of ten metres, the instrument developed in this work makes it possible to see details on the order of one metre. The new system relies on recording seismic reflections simultaneously on three seismic cables (streamers) of 24 channels each. To obtain 3-D data, it is essential to position the instruments on the water (the source and receivers of the seismic waves) with great precision. Software was specially developed to control navigation and trigger the shots of the seismic source, using differential GPS (dGPS) receivers on the boat and at the end of each streamer. This makes it possible to position the instruments with an accuracy on the order of 20 cm. To test our system, we chose an area of Lake Geneva, near the city of Lausanne, crossed by the "La Paudèze" fault, which separates the Plateau Molasse and Subalpine Molasse units. Two 3-D seismic surveys were carried out there over an area of about 1 km². The seismic records were then processed to turn them into interpretable images. We applied a 3-D processing sequence specially adapted to our data, particularly with regard to positioning. After processing, the data reveal several main seismic facies corresponding notably to lacustrine sediments (Holocene), glaciolacustrine sediments (Pleistocene), the Plateau Molasse, the Subalpine Molasse of the fault zone and the Subalpine Molasse south of this zone. The detailed 3-D geometry of the faults is visible on vertical and horizontal seismic sections. The excellent data quality and the interpretation of several horizons and fault surfaces demonstrate the potential of this technique for small-scale three-dimensional investigations, which opens the way to its application in environmental and civil engineering studies.
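To illustrate the CMP binning step named in the processing flow, the following Python sketch assigns source-receiver midpoints to the 1.25 m × 3.75 m bins quoted for Survey II and counts the fold per bin. The coordinates and function names are invented, and the coordinates are assumed already rotated into the in-line/cross-line frame; the surveys' actual processing used dedicated seismic software:

```python
# Assign source-receiver midpoints to rectangular CMP bins and count fold.
import numpy as np

BIN_INLINE, BIN_XLINE = 1.25, 3.75   # bin size in metres (in-line, cross-line)

def bin_midpoints(src_xy, rcv_xy):
    """Return integer (in-line, cross-line) bin indices for each trace."""
    mid = 0.5 * (src_xy + rcv_xy)    # source-receiver midpoints
    return np.floor(mid / np.array([BIN_INLINE, BIN_XLINE])).astype(int)

# Hypothetical positions (metres) for three traces:
src = np.array([[0.0, 0.0], [5.0, 0.3], [10.0, -0.2]])
rcv = np.array([[75.0, 7.5], [80.0, 7.2], [85.0, 7.9]])

idx = bin_midpoints(src, rcv)
# Fold per bin = number of traces sharing a bin index; the uneven fold caused
# by boat/streamer drift is what the "bin harmonization" step compensates.
bins, fold = np.unique(idx, axis=0, return_counts=True)
print(idx)
print(bins, fold)
```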
Abstract:
In distributed energy production, permanent magnet synchronous generators (PMSG) are often connected to the grid via frequency converters, such as voltage source line converters. The price of the converter may constitute a large part of the costs of a generating set. Some of the permanent magnet synchronous generators with converters and traditional separately excited synchronous generators could be replaced by direct-on-line (DOL) non-controlled PMSGs. Small directly network-connected generators are likely to have large markets in the area of distributed electric energy generation. Typical prime movers could be windmills, watermills and internal combustion engines. DOL PMSGs could also be applied in island networks, such as ships and oil platforms. Various back-up power generating systems could also be realized with DOL PMSGs. The benefits would be a lower price for the generating set and the robustness and easy use of the system. The performance of DOL PMSGs is analyzed. The electricity distribution companies have regulations that constrain the design of generators being connected to the grid; the general guidelines and recommendations are applied in the analysis. By analyzing the results produced by the simulation model for the permanent magnet machine, guidelines for efficient damper winding parameters for DOL PMSGs are presented. The simulation model is used to simulate grid connections and load transients. The damper winding parameters are calculated by the finite element method (FEM) and determined from experimental measurements. Three-dimensional finite element analysis (3D FEA) is carried out. The results from the simulation model and the 3D FEA are compared with practical measurements from two prototype axial flux permanent magnet generators provided with damper windings. The dimensioning of the damper winding parameters is case specific. The damper winding should be dimensioned based on the moment of inertia of the generating set. It is shown that the damper winding has optimal values for reaching synchronous operation in the shortest period of time after a transient. With optimal dimensioning, interference on the grid is minimized.
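As a toy illustration of why damper dimensioning interacts with the inertia of the generating set, the sketch below integrates a classical single-machine swing equation in which the damper contributes a torque proportional to slip. This is a deliberately crude simplification with invented values, not the thesis's FEM-based machine model; sweeping the damping coefficient D in such a model shows the kind of optimum referred to above (too little damping lets oscillations persist, too much slows the pull-in):

```python
# Single-machine swing equation with a slip-proportional damper torque.
import numpy as np

H = 1.5        # inertia constant of the generating set, s (invented)
D = 0.8        # damper torque coefficient, pu torque per pu slip (invented)
T_mech = 0.5   # prime-mover torque, pu (invented)
T_max = 1.8    # peak synchronizing torque, pu (invented)
f0, dt = 50.0, 1e-3

delta, slip = 1.2, 0.05   # disturbed initial state: angle (rad), slip (pu)
for _ in range(int(5.0 / dt)):
    T_e = T_max * np.sin(delta) + D * slip   # synchronizing + damper torque
    slip += (T_mech - T_e) / (2.0 * H) * dt  # swing equation in per unit
    delta += 2.0 * np.pi * f0 * slip * dt    # angle driven by slip

print(f"slip after 5 s: {slip:.4f} pu, angle: {delta:.3f} rad")
```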
Abstract:
The present study focuses on two effects of the presence of a noncondensable gas on the thermal-hydraulic behavior of the coolant of the primary circuit of a nuclear reactor in the VVER-440 geometry in abnormal situations. First, steam condensation in the presence of air was studied in the horizontal tubes of the steam generator (SG) of the PACTEL test facility. The French thermal-hydraulic CATHARE code was used to study the heat transfer between the primary and secondary side in conditions derived from preliminary experiments performed by VTT using PACTEL. In natural circulation and single-phase vapor conditions, the injection of a volume of air equivalent to the total volume of the primary side of the SG at the entrance of the hot collector did not stop the heat transfer from the primary to the secondary side. The calculated results indicate that air is located in the second half-length (from the mid-length of the tubes to the cold collector) in all the tubes of the steam generator. The hot collector remained full of steam during the transient. Secondly, the potential release of the nitrogen gas dissolved in the water of the accumulators of the emergency core coolant system of the Loviisa nuclear power plant (NPP) was investigated. The author implemented a model of the dissolution and release of nitrogen gas in the CATHARE code, the model having been created by the CATHARE developers. In collaboration with VTT, an analytical experiment was performed with some components of PACTEL to determine, in particular, the value of the release time constant of the nitrogen gas in the depressurization conditions representative of the small and intermediate break transients postulated for the Loviisa NPP. Such transients, with simplified operating procedures, were calculated using the modified CATHARE code for various values of the release time constant used in the dissolution and release model. For the small breaks, nitrogen gas is trapped in the collectors of the SGs in rather large proportions. There, the levels oscillate until the actuation of the low-pressure injection system (LPIS) pumps that refill the primary circuit. In the case of the intermediate breaks, most of the nitrogen gas is expelled at the break and almost no nitrogen gas is trapped in the SGs. In comparison with the cases calculated without taking into account the release of nitrogen gas, the start of the LPIS is delayed by between 1 and 1.75 h. Applicability of the obtained results to real safety conditions must take into account the real operating procedures used in the nuclear power plant.
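The role of the release time constant can be pictured as first-order relaxation of the dissolved gas toward a pressure-dependent equilibrium. The Python sketch below is an assumption-laden simplification (linear Henry-type solubility, invented constants and pressure history), not the CATHARE model itself:

```python
# First-order dissolution/release of nitrogen during depressurization.
import numpy as np

TAU = 60.0        # release time constant, s (the tuned parameter; value invented)
K_HENRY = 1.2e-3  # dissolved nitrogen mass per unit pressure, kg/MPa (invented)

def released_nitrogen(pressure_mpa, dt=1.0):
    """Integrate the dissolved mass while the primary pressure falls and
    return the cumulative mass released to the gas phase at each step."""
    m = K_HENRY * pressure_mpa[0]   # start saturated at the initial pressure
    m0 = m
    released = []
    for p in pressure_mpa:
        m_eq = K_HENRY * p          # Henry-type equilibrium at current pressure
        m += (m_eq - m) / TAU * dt  # first-order relaxation toward equilibrium
        released.append(m0 - m)
    return np.array(released)

# Hypothetical linear depressurization from 12.3 MPa to 1.0 MPa over one hour:
p = np.linspace(12.3, 1.0, 3600)
out = released_nitrogen(p)
print(f"nitrogen released after 1 h: {out[-1]*1e3:.2f} g (with TAU = {TAU} s)")
```

A larger TAU delays the release relative to the depressurization, which is why the transient results above depend on the value determined experimentally.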
Abstract:
The amphibian micronucleus test has been widely used during the last 30 years to test the genotoxic properties of several chemicals and as a tool for ecogenotoxic monitoring. The vast majority of these studies were performed on the peripheral blood of urodele larvae and anuran tadpoles and, to a lesser extent, on adults. In this study, we developed protocols for measuring micronuclei in adult shed skin cells and larval gill cells of the Italian crested newt (Triturus carnifex). Amphibians were collected from ponds in two protected areas in Italy that differed in their radon content. Twenty-three adult newts and 31 larvae were captured from the radon-rich pond, while 20 adults and 27 larvae were taken from the radon-free site. The animals were brought to the laboratory and the micronucleus test was performed on peripheral blood and shed skins taken from the adults and on larval gills. Samples from the radon-rich site showed micronucleus frequencies higher than those from the radon-free site, and the difference was statistically significant in gill cells (P < 0.00001). Moreover, the larval gills seem to be more sensitive than the adult tissues. This method represents an easy (and, in the case of the shed skin, noninvasive) application of the micronucleus assay that can be useful for environmental studies in situ. Environ. Mol. Mutagen. 56:412-417, 2015. © 2014 Wiley Periodicals, Inc.
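For illustration, a between-site comparison of micronucleus frequencies could be run as in the Python sketch below; the counts are invented and the paper's actual statistical procedure may differ:

```python
# Compare micronucleus counts per 1000 gill cells between the two ponds with a
# one-sided Mann-Whitney U test (radon-rich expected higher).
from scipy.stats import mannwhitneyu

radon_rich = [6, 8, 5, 9, 7, 10, 6, 8]   # hypothetical MN per 1000 cells
radon_free = [2, 3, 1, 2, 4, 2, 3, 1]    # hypothetical MN per 1000 cells

stat, p = mannwhitneyu(radon_rich, radon_free, alternative="greater")
print(f"U = {stat}, one-sided p = {p:.5f}")
```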