Abstract:
Increasing evidence suggests that working memory and perceptual processes are dynamically interrelated due to modulating activity in overlapping brain networks. However, the direct influence of working memory on the spatio-temporal brain dynamics of behaviorally relevant intervening information remains unclear. To investigate this issue, subjects performed a visual proximity grid perception task under three different visual-spatial working memory (VSWM) load conditions. VSWM load was manipulated by asking subjects to memorize the spatial locations of 6 or 3 disks. The grid was always presented between the encoding and recognition of the disk pattern. As a baseline condition, grid stimuli were presented without a VSWM context. VSWM load altered both perceptual performance and neural networks active during intervening grid encoding. Participants performed faster and more accurately on a challenging perceptual task under high VSWM load as compared to the low load and the baseline condition. Visual evoked potential (VEP) analyses identified changes in the configuration of the underlying sources in one particular period occurring 160-190 ms post-stimulus onset. Source analyses further showed an occipito-parietal down-regulation concurrent to the increased involvement of temporal and frontal resources in the high VSWM context. Together, these data suggest that cognitive control mechanisms supporting working memory may selectively enhance concurrent visual processing related to an independent goal. More broadly, our findings are in line with theoretical models implicating the engagement of frontal regions in synchronizing and optimizing mnemonic and perceptual resources towards multiple goals.
Abstract:
Faced with the proliferation of data and rankings, the author examines the configuration of actors behind the framing of statistical indicators as lenses for reading inequalities of access to higher education. Comparing and characterizing the access-inequality indicators of international databases (UNESCO, OECD, EUROSTAT) and national ones (Germany, England, France and Switzerland) probes the tension between the discourses and indicators produced and their national embedding. Who says what, and with what results? Which configurations of actors characterize these processes? Of which power relations are they the product? The author retraces how the problem of unequal access to higher education was placed on the agenda of European bodies, identifies the policies defined and confronts them with the indicators produced, showing the dissonance between the discourses and what the indicators actually allow to be problematized, and the gap between recommendations and tools. Why such a contrast? What mechanisms are at work? Is the problem technical or political? What does this dissonance reveal about national specificities in the social construction of inequalities?
Abstract:
An efficient high-resolution three-dimensional (3-D) seismic reflection system for small-scale targets in lacustrine settings was developed. In Lake Geneva, near the city of Lausanne, Switzerland, past high-resolution two-dimensional (2-D) investigations revealed a complex fault zone (the Paudèze thrust zone), which was subsequently chosen for testing our system. Observed structures include a thin (<40 m) layer of subhorizontal Quaternary sediments that unconformably overlie southeast-dipping Tertiary Molasse beds and the Paudèze thrust zone, which separates Plateau and Subalpine Molasse units. Two complete 3-D surveys have been conducted over this same test site, covering an area of about 1 km². In 1999, a pilot survey (Survey I), comprising 80 profiles, was carried out in 8 days with a single-streamer configuration. In 2001, a second survey (Survey II) used a newly developed three-streamer system with optimized design parameters, which provided an exceptionally high-quality data set of 180 common midpoint (CMP) lines in 9 days. The main improvements include a navigation and shot-triggering system with in-house navigation software that automatically fires the gun, combined with real-time control of navigation quality using differential GPS (dGPS) onboard and a reference base near the lake shore. Shots were triggered at 5-m intervals with a maximum non-cumulative error of 25 cm.
Whereas the single 48-channel streamer system of Survey I required extrapolation of receiver positions from the boat position, for Survey II they could be accurately calculated (error <20 cm) with the aid of three additional dGPS antennas mounted on rafts attached to the end of each of the 24-channel streamers. Towed at a distance of 75 m behind the vessel, they allow the determination of feathering due to cross-line currents or small course variations. Furthermore, two retractable booms hold the three streamers at a distance of 7.5 m from each other, which is the same distance as the sail line interval for Survey I. With a receiver spacing of 2.5 m, the bin dimensions of the 3-D data of Survey II are 1.25 m in the in-line direction and 3.75 m in the cross-line direction. The greater cross-line versus in-line spacing is justified by the known structural trend of the fault zone perpendicular to the in-line direction. The data from Survey I showed some reflection discontinuity as a result of insufficiently accurate navigation and positioning and subsequent binning errors. Aliasing observed in the 3-D migration was due to insufficient lateral sampling combined with the relatively high-frequency (<2000 Hz) content of the water gun source (operated at 140 bars and 0.3 m depth). These results motivated the use of a double-chamber bubble-canceling air gun for Survey II. A 15/15 Mini G.I. air gun, operated at 80 bars and 1 m depth, proved to be better adapted for imaging the complexly faulted target area, which has reflectors dipping up to 30°. Although its frequencies do not exceed 650 Hz, this air gun combines penetration of non-aliased signal to depths of 300 m below the water bottom (versus 145 m for the water gun) with a maximum vertical resolution of 1.1 m.
While Survey I was shot in patches of alternating directions, the optimized surveying time of the new three-streamer system allowed acquisition in a parallel geometry, which is preferable when using an asymmetric configuration (single source and receiver array); otherwise, the resulting stacks differ between the opposite shooting directions. However, the shorter streamer configuration of Survey II reduced the nominal fold from 12 to 6. A conventional 3-D processing flow was adapted to the high sampling rates and complemented by two computer programs that convert the unconventional navigation data to industry standards. Processing included trace editing, geometry assignment, bin harmonization (to compensate for uneven fold due to boat/streamer drift), spherical divergence correction, bandpass filtering, velocity analysis, 3-D DMO correction, stack and 3-D time migration. A detailed semblance velocity analysis was performed on the 12-fold data set for every second in-line and every 50th CMP, i.e. on a total of 600 spectra. According to this velocity analysis, interval velocities range from 1450-1650 m/s for the unconsolidated sediments and from 1650-3000 m/s for the consolidated sediments. Delineation of several horizons and fault surfaces reveals the potential for small-scale geologic and tectonic interpretation in three dimensions. Five major seismic facies and their detailed 3-D geometries can be distinguished in vertical and horizontal sections: lacustrine sediments (Holocene), glaciolacustrine sediments (Pleistocene), Plateau Molasse, Subalpine Molasse and its thrust fault zone. Dips of beds within the Plateau and Subalpine Molasse are ~8° and ~20°, respectively. Within the fault zone, many highly deformed structures with dips around 30° are visible. Preliminary tests with 3-D preserved-amplitude prestack depth migration demonstrate that the excellent data quality of Survey II allows the application of such sophisticated techniques even to high-resolution seismic surveys.
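As a check on the geometry quoted above, the bin sizes and nominal fold follow from the standard textbook CMP relations (midpoints fall at half the receiver and line spacing; fold depends on channel count, receiver spacing and shot interval). A minimal sketch with the abstract's numbers; the function names are ours, not from the thesis:

```python
# Illustrative acquisition-geometry arithmetic for the surveys described above.

def cmp_bin_size(receiver_spacing_m, line_spacing_m):
    """CMP bin dimensions: midpoints fall at half the receiver/line spacing."""
    return receiver_spacing_m / 2.0, line_spacing_m / 2.0

def nominal_fold(n_channels, receiver_spacing_m, shot_interval_m):
    """Nominal CMP fold for an end-on 2-D line."""
    return n_channels * receiver_spacing_m / (2.0 * shot_interval_m)

# Survey II: 24-channel streamers, 2.5 m receiver spacing, 7.5 m streamer
# separation, shots every 5 m.
inline_bin, xline_bin = cmp_bin_size(2.5, 7.5)   # 1.25 m x 3.75 m bins
fold_I = nominal_fold(48, 2.5, 5.0)              # Survey I, 48 channels
fold_II = nominal_fold(24, 2.5, 5.0)             # Survey II, 24 channels
```

These relations reproduce the quoted 1.25 m × 3.75 m bins and the fold reduction from 12 (Survey I) to 6 (Survey II).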
In general, the adaptation of the 3-D marine seismic reflection method, which to date has almost exclusively been used by the oil exploration industry, to a smaller geographical as well as financial scale has helped pave the way for applying this technique to environmental and engineering purposes.
Seismic reflection is a subsurface investigation method with very high resolving power. It consists of sending vibrations into the ground and recording the waves that are reflected at geological discontinuities at different depths and then travel back up to the surface, where they are recorded. The signals collected in this way provide information not only on the nature and geometry of the layers present, but also allow a geological interpretation of the subsurface. For sedimentary rocks, for example, seismic reflection profiles make it possible to determine their mode of deposition, any deformation or faulting, and hence their tectonic history. Seismic reflection is the principal method of petroleum exploration. For a long time, seismic reflection data were acquired along profiles that provide a two-dimensional image of the subsurface. Such images are only partially accurate, since they do not take into account the three-dimensional nature of geological structures. Over the past few decades, three-dimensional (3-D) seismics has brought new impetus to the study of the subsurface. Although the technique is now fully mastered for imaging large geological structures both onshore and offshore, its adaptation to the lacustrine or fluvial scale has so far been the subject of only a few studies. This thesis work consisted of developing a seismic acquisition system similar to that used for offshore petroleum prospecting, but adapted to lakes. It is therefore smaller, lighter to deploy and, above all, yields final images of much higher resolution. Whereas the petroleum industry is often limited to a resolution of the order of ten metres, the instrument developed in this work can resolve details of the order of one metre. The new system is based on recording seismic reflections simultaneously on three seismic cables (streamers) of 24 channels each. To obtain 3-D data, it is essential to position the instruments on the water (the seismic source and receivers) with great precision. Software was specially developed to control the navigation and trigger the shots of the seismic source using differential GPS (dGPS) receivers on the boat and at the end of each streamer. This allows the instruments to be positioned with an accuracy of about 20 cm. To test our system, we chose an area of Lake Geneva, near the city of Lausanne, crossed by the Paudèze fault, which separates the Plateau Molasse and Subalpine Molasse units. Two 3-D seismic surveys were carried out there over an area of about 1 km². The seismic records were then processed to turn them into interpretable images. We applied a 3-D processing sequence specially adapted to our data, in particular with regard to positioning. After processing, the data reveal several main seismic facies corresponding notably to lacustrine sediments (Holocene), glaciolacustrine sediments (Pleistocene), the Plateau Molasse, the Subalpine Molasse of the fault zone and the Subalpine Molasse south of that zone. The detailed 3-D geometry of the faults is visible on vertical and horizontal seismic sections. The excellent data quality and the interpretation of several horizons and fault surfaces demonstrate the potential of this technique for small-scale three-dimensional investigations, opening the way to applications in environmental and civil engineering.
Abstract:
In the context of a warmer climate, locating permafrost in steep sedimentary terrain and measuring the terrain movements that occur in these areas are of great importance. With respect to these problems, this PhD thesis follows two different research axes. From a static point of view, it presents a study of the distribution and characteristics of permafrost in the talus slopes of the alpine periglacial belt. From a dynamic point of view, it analyses the influence of permafrost characteristics (ice content, permafrost temperature, etc.) and of air and ground temperature variations on the creep velocities of frozen sedimentary bodies. In order to attain this double objective, a field approach was favoured. To determine the distribution and characteristics of the permafrost, the traditional methods of permafrost prospecting were used, i.e. ground surface temperature measurements at the base of the snow cover (BTS), year-round ground temperature measurements and DC-resistivity prospecting. Terrain movements were measured using a differential GPS.
The permafrost distribution study was carried out on 15 talus slopes located mainly in the Mont Gelé (Verbier-Nendaz) and Arolla areas (Swiss Alps). In most cases, permafrost was found in the lower part of the talus slope, whereas the middle part was free of ice. In some cases the upper part of the talus is also free of permafrost, whereas in others permafrost is again present there. Electrical resistivities measured in the frozen parts of the studied talus slopes are in most cases clearly lower than those measured on rock glaciers. Former studies have shown that internal air circulation is responsible for the negative thermal anomaly and, where it exists, for the permafrost present in the lower part of talus slopes located more than 1000 m below the regional lower limit of discontinuous permafrost. The study of four low-altitude talus slopes (1400-1900 m), and notably the instrumentation of the Dreveneuse field site (Valais Prealps) with two boreholes, surface temperature sensors and an anemometer, made it possible to verify and detail the ventilation mechanism active in low-altitude talus slopes. This mechanism works in the following way: in winter, the air contained in the block accumulation is warmer and lighter than the surrounding air; it therefore moves upward in the talus and is expelled in its upper part. This chimney effect draws cold air into the lower part of the talus, causing a strong overcooling of the ground. In summer the mechanism is reversed, because the talus slope is colder than the surrounding air, and cold air is then expelled at the bottom of the slope. Evidence of ascending winter ventilation was also found in some of the studied high-altitude talus slopes; it is probably largely responsible for the particular configuration of the observed frozen areas. Even if the existence of a chimney effect could not be demonstrated in all cases, notably because interstitial ice obstructs the air circulation, indications of its presence exist in nearly all the studied talus slopes. The absence of permafrost at altitudes favourable to its presence could in any case be explained by ground warming caused by the expulsion of relatively warm air. Terrain movements were measured at about ten sites, mainly on rock glaciers, but also on a push moraine and some talus slopes. Field observations reveal that many rock glaciers display recent destabilization features (landslide scars, tilted blocks, fine-grained sediments appearing at the surface, etc.) that indicate a probable recent acceleration of creep velocities. This phenomenon, which seems to be widespread at the alpine scale, is probably linked to permafrost warming over the last two decades. The measured velocities are often higher than the values usually given in the literature. In addition, strong inter-annual variations of the velocities were observed, which seem to depend on variations in the mean annual ground surface temperature.
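The seasonal reversal of the chimney effect described above can be summarized in a toy rule: airflow direction inside the talus follows the sign of the temperature contrast between the interior air and the outside air. A minimal sketch (our own illustrative function, not a model from the thesis):

```python
def chimney_effect(t_talus_air_c, t_outside_air_c):
    """Toy model of seasonal talus ventilation.

    Warmer (lighter) interior air rises and is expelled at the top,
    drawing cold outside air in at the bottom (winter overcooling);
    colder interior air sinks and is expelled at the bottom (summer).
    """
    if t_talus_air_c > t_outside_air_c:
        return {"airflow": "ascending",
                "expelled_at": "top of talus",
                "aspirated_at": "bottom of talus (cold air: overcooling)"}
    if t_talus_air_c < t_outside_air_c:
        return {"airflow": "descending",
                "expelled_at": "bottom of talus (cold air)",
                "aspirated_at": "top of talus"}
    return {"airflow": "none", "expelled_at": None, "aspirated_at": None}

# Winter: interior warmer than the outside air -> ascending ventilation.
winter = chimney_effect(t_talus_air_c=0.0, t_outside_air_c=-10.0)
# Summer: interior colder than the outside air -> reversed circulation.
summer = chimney_effect(t_talus_air_c=2.0, t_outside_air_c=15.0)
```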
Abstract:
The aim of this thesis was to produce information for estimating the flow balance of wood resin in mechanical pulping and to demonstrate possibilities for improving the efficiency of deresination in practice. It was observed that chemical changes in wood resin take place only during peroxide bleaching, that a significant amount of water-dispersed wood resin is retained in the pulp mat during dewatering, and that the amount of wood resin in the solid phase of the process filtrates is very small. On the basis of this information, three parameters related to the behaviour of wood resin determine the flow balance in the process: 1. the liberation of wood resin into the pulp water phase; 2. the retention of water-dispersed wood resin during dewatering; 3. the proportion of wood resin degraded in peroxide bleaching. The effect of different factors on these parameters was evaluated with the help of laboratory studies and a literature survey. Information on the values of these parameters in existing processes was also obtained in mill measurements. With this information, it was possible to evaluate the deresination efficiency, and the effect of different factors on it, in a pulping plant producing low-freeness mechanical pulp. This evaluation showed that the wood resin content of mechanical pulp can be significantly decreased if the process includes a peroxide bleaching stage followed by a washing stage. With an optimal process configuration, a deresination efficiency as high as 85 percent appears possible at a water usage level of 8 m³/o.d.t.
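As a purely hypothetical illustration of how the three parameters could combine into an overall deresination efficiency, one simple steady-state bookkeeping (our own toy model, not the balance model of the thesis, with invented numbers) treats resin as removed either by degradation in bleaching or by leaving with the water phase after dewatering:

```python
def deresination_efficiency(liberated, retained, degraded):
    """Toy resin balance for one pass (all arguments are fractions 0..1).

    liberated: fraction of resin released into the pulp water phase
    retained:  fraction of water-dispersed resin held back in the pulp
               mat during dewatering
    degraded:  fraction of the resin degraded in peroxide bleaching
    """
    surviving = 1.0 - degraded                             # resin left after bleaching
    washed_out = surviving * liberated * (1.0 - retained)  # leaves with the filtrate
    return degraded + washed_out                           # total fraction removed

# Illustrative numbers only, not measurements from the thesis:
eff = deresination_efficiency(liberated=0.8, retained=0.25, degraded=0.4)
```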
Abstract:
This thesis is about the detection of local image features. The research topic belongs to the wider area of object detection, a machine vision and pattern recognition problem in which an object must be detected (located) in an image. State-of-the-art object detection methods often divide the problem into separate interest point detection and local image description steps, but in this thesis a different technique is used, leading to higher-quality image features that enable more precise localization. Instead of using interest point detection, the landmark positions are marked manually; the quality of the image features is therefore not limited by the interest point detection phase, and the learning of image features is simplified. The approach combines interest point detection and local description into a single detection phase. Computational efficiency of the descriptor is therefore important, ruling out many of the commonly used descriptors as too heavy. Multiresolution Gabor features have been the main descriptor in this thesis, and improving their efficiency is a significant part of the work. Actual image features are formed from descriptors by using a classifier, which can then recognize similar-looking patches in new images. The main classifier is based on Gaussian mixture models. Classifiers are used in a one-class configuration, with only positive training samples and no explicit background class. The local image feature detection method has been tested with two freely available face detection databases and a proprietary license plate database; localization performance was very good in these experiments. Other applications based on the same underlying techniques are also presented, including object categorization and fault detection.
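The one-class setup described above can be sketched with a single Gaussian density standing in for the thesis's Gaussian mixture models (a deliberate simplification; the function names and threshold choice are ours): fit the density on positive descriptor samples only, then accept a new patch when its log-likelihood exceeds a threshold taken from the training scores.

```python
import numpy as np

def fit_gaussian(X):
    """Fit a full-covariance Gaussian to positive samples X of shape (n, d)."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularize
    return mu, cov

def log_likelihood(X, mu, cov):
    """Per-sample Gaussian log-density for rows of X."""
    d = mu.size
    diff = X - mu
    inv = np.linalg.inv(cov)
    maha = np.einsum("ij,jk,ik->i", diff, inv, diff)  # Mahalanobis distances
    return -0.5 * (maha + d * np.log(2 * np.pi) + np.log(np.linalg.det(cov)))

rng = np.random.default_rng(0)
pos = rng.normal(0.0, 1.0, size=(500, 2))   # positive descriptors only

mu, cov = fit_gaussian(pos)
# One-class decision threshold: e.g. the 5th percentile of training scores.
thr = np.percentile(log_likelihood(pos, mu, cov), 5)

inlier = log_likelihood(np.array([[0.1, -0.2]]), mu, cov)[0]   # accepted
outlier = log_likelihood(np.array([[8.0, 8.0]]), mu, cov)[0]   # rejected
```

The same accept/reject logic carries over unchanged when the density is a mixture of Gaussians rather than a single component.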
Abstract:
This thesis presents an alternative approach to the analytical design of surface-mounted axial-flux permanent-magnet machines. Emphasis has been placed on the design of axial-flux machines with a one-rotor-two-stators configuration. The design model developed in this study covers both the electromagnetic and the thermal design of the machine, and takes into consideration the complexity of the permanent-magnet shapes, a typical requirement in the design of high-performance permanent-magnet motors. A prototype machine rated at 5 kW output power at 300 min⁻¹ rotation speed has been designed and constructed for the purpose of verifying the results obtained from the analytical design model. A comparative study of low-speed axial-flux and low-speed radial-flux permanent-magnet machines is presented. The study concentrates on 55 kW machines with rotation speeds of 150 min⁻¹, 300 min⁻¹ and 600 min⁻¹ and is based on calculated designs. A novel comparison method is introduced; it takes into account the mechanical constraints of the machine and enables comparison of the designed machines with respect to their volume, efficiency and cost. It is shown that an axial-flux permanent-magnet machine with a one-rotor-two-stators configuration generally has a lower efficiency than a radial-flux permanent-magnet machine if the same electric loading, air-gap flux density and current density are applied to all designs. On the other hand, axial-flux machines are usually smaller in volume, especially when compared to radial-flux machines whose length ratio (axial length of the stator stack vs. air-gap diameter) is below 0.5. The comparison results also show that radial-flux machines with a low number of pole pairs, p < 4, outperform the corresponding axial-flux machines.
Abstract:
The application of forced unsteady-state reactors to the selective catalytic reduction (SCR) of nitrogen oxides (NOx) with ammonia (NH3) is motivated by the fact that unsteady-state operation can produce favorable temperature and composition distributions that cannot be achieved in any steady-state regime. In normal operation the low exothermicity of the SCR reaction (usually carried out in the range of 280-350°C) is not enough to sustain the chemical reaction by itself; supplementary heat must be supplied, which increases the overall operating cost. Through forced unsteady-state operation of exothermic reactions, the moving heat wave, along with the ammonia, can be trapped inside the catalytic bed. Unsteady-state operation thus exploits the thermal storage capacity of the catalytic bed: the bed acts as a regenerative heat exchanger, allowing auto-thermal behaviour even when the adiabatic temperature rise is low. Finding the optimum reactor configuration, employing the most suitable operation model and identifying the reactor behavior are essential steps in configuring a proper device for industrial applications. The Reverse Flow Reactor (RFR), a forced unsteady-state reactor, matches these characteristics and may serve as an efficient device for the treatment of dilute pollutant mixtures. Its main disadvantage is the 'wash-out' phenomenon: emissions of unconverted reactants at every switch of the flow direction. Consequently, our attention was focused on finding an alternative reactor configuration that is not affected by uncontrollable emissions of unconverted reactants. In this respect the Reactor Network (RN) was investigated.
Its configuration consists of several reactors connected in a closed sequence, simulating a moving bed by changing the feeding position of the reactants. In the RN the flow direction is unchanged, which ensures uniform catalyst exploitation while eliminating the 'wash-out' phenomenon. The simulated moving bed (SMB) can operate in transient mode, giving a practically constant exit concentration and high conversion levels. The main advantage of reactor network operation is the possibility of obtaining auto-thermal behavior with nearly uniform catalyst utilization. However, the reactor network allows only a small range of switching times for which an ignited state can be reached and maintained. Even so, a proper study of the complex behavior of the RN may provide the information needed to overcome the difficulties that can appear in its operation. The complexity of unsteady-state reactors arises from their short contact times and the complex interaction between heat and mass transport phenomena. Such interactions can give rise to remarkably complex dynamic behavior characterized by spatio-temporal patterns, chaotic changes in concentration and traveling waves of heat or chemical reactivity. Current research efforts concern the improvement of contact modalities between reactants, the possibility of thermal wave storage inside the reactor and the improvement of the kinetic activity of the catalyst. These aspects are important when high activity even at low feeding temperatures and low emissions of unconverted reactants are the main operational concerns.
Predicting the reactor's pseudo-steady or steady-state performance (conversion, selectivity and thermal behavior) and its dynamic response during exploitation is also important in finding the optimal control strategy for forced unsteady-state catalytic tubular reactors. Designing an adapted reactor requires knowledge of how its operating conditions influence the overall process performance, as well as a precise evaluation of the range of operating parameters for which sustained dynamic behavior is obtained. An a priori estimation of the system parameters reduces the computational effort, since the convergence of unsteady-state reactor systems usually requires integration over hundreds of cycles, depending on the initial guess of the parameter values. Investigating various operation models and thermal transfer strategies provides reliable means of obtaining recuperative and regenerative devices capable of maintaining auto-thermal behavior for low-exothermic reactions. In the present work, a step-by-step analysis of the SCR of NOx with ammonia in forced unsteady-state reactors was carried out. The investigation covers the general problem of the environmental effects of noxious emissions, the analysis of catalyst types suitable for the process, the mathematical approach to modeling and solving the system, and the experimental investigation of the device found most suitable for the process. To obtain, quickly and easily, information on forced unsteady-state reactor design and operation, important system parameters and their values, the mathematical description, numerical methods for solving systems of partial differential equations and other specific aspects, a case-based reasoning (CBR) approach has been used.
This approach, which draws on the experience of similar past problems and their adapted solutions, can provide information and solutions for new problems in forced unsteady-state reactor technology. Accordingly, a CBR system was implemented and a corresponding tool developed. Then, dropping the hypothesis of isothermal operation, the feasibility of the SCR of NOx with ammonia in the RFR and in the RN with variable feeding position was investigated by means of numerical simulation. The hypothesis of non-isothermal operation was adopted because, in our opinion, if a commercial catalyst is considered it is not possible to modify its chemical activity or adsorptive capacity to improve the operation, but it is possible to change the operating regime. In order to identify the most suitable device for the unsteady-state reduction of NOx with ammonia, from the perspective of recuperative and regenerative devices, a comparative analysis of the performance of the two devices was carried out. Assuming isothermal conditions at the beginning of the investigation simplified the analysis, making it possible to focus on how the operating conditions and mode of operation affect the dynamic features caused by trapping one reactant in the reactor, without considering the impact of the thermal effect on overall reactor performance. The non-isothermal system was then investigated to highlight the important influence of the thermal effect on overall reactor performance, studying the possibility of using the RFR and RN as recuperative and regenerative devices and of achieving sustained auto-thermal behavior for the low-exothermic SCR of NOx with ammonia with low-temperature gas feeding.
Beside the thermal effect, the influence of the principal operating parameters (switching time, inlet flow rate and initial catalyst temperature) has been stressed. This analysis is important not only because it allows a comparison between the two devices and an optimisation of the operation, but also because the switching time is the main operating parameter: an appropriate choice of this parameter enables the process constraints to be fulfilled. The conversion levels achieved, the more uniform temperature profiles, the uniformity of catalyst exploitation and the much simpler mode of operation establish the RN as the more suitable device for SCR of NOx with ammonia, both in ordinary operation and with a view to implementing a control strategy. Simplified theoretical models have also been proposed to describe the performance of forced unsteady-state reactors and to estimate their internal temperature and concentration profiles. The general idea was to extend the study of catalytic reactor dynamics to perspectives that had not yet been analyzed. The experimental investigation of the RN revealed good agreement between the data obtained by model simulation and those obtained experimentally.
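The cycle-to-cycle convergence mentioned above, where the simulation must be integrated over many switching periods until a periodic (pseudo-steady) state is reached, can be sketched as a fixed-point iteration over whole cycles. This is a toy illustration: the `advance_one_cycle` map is a hypothetical stand-in for one full flow-reversal or feed-switch period, not the thesis's reactor model.

```python
import numpy as np

def cyclic_steady_state(advance_one_cycle, state0, tol=1e-8, max_cycles=500):
    """Iterate cycle by cycle until the start-of-cycle state repeats within tol.

    advance_one_cycle(state) must return the state after one complete forcing
    period (e.g. one flow reversal in an RFR, or one feed-switch round in an RN).
    """
    state = np.asarray(state0, dtype=float)
    for n in range(1, max_cycles + 1):
        new = advance_one_cycle(state)
        # Relative change between successive start-of-cycle states.
        if np.linalg.norm(new - state) < tol * max(1.0, np.linalg.norm(state)):
            return new, n
        state = new
    raise RuntimeError("no cyclic steady state reached within max_cycles")

# Toy stand-in for one cycle: a contraction toward a fixed temperature profile,
# mimicking the slow cycle-to-cycle relaxation of the bed's thermal state.
A = 0.9 * np.eye(3)
b = np.array([1.0, 2.0, 3.0])
profile, cycles = cyclic_steady_state(lambda T: A @ T + b, np.zeros(3))
```

A good initial guess of the profile (the a priori parameter estimation mentioned above) directly reduces the number of cycles this loop must integrate.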
Abstract:
Zinc selenide is a prospective material for optoelectronics. The fabrication of ZnSe-based light-emitting diodes is hindered by the difficulty of p-type doping of the component materials. The interaction between native and impurity defects, the tendency of the doping impurity to form associative centres with native defects and the tendency to self-compensation are the main factors impeding effective control of the magnitude and type of conductivity. The thesis is devoted to the study of the interaction between native and impurity defects in zinc selenide. It is established that, among the Cu, Ag and Au impurities, Au has the most pronounced amphoteric properties in ZnSe, as it forms a large number of both Au_i donors and Au_Zn acceptors. Electrical measurements show that Ag and Au ions introduced into vacant sites of the Zn sublattice form simple single-charged Ag_Zn+ and Au_Zn+ states with a d10 electron configuration, while Cu ions can form both single-charged Cu_Zn+ (d10) and double-charged Cu_Zn2+ (d9) centres. Time-stimulated amphoteric behaviour of the Ag and Au transition metals is found for the first time from both electrical and luminescent measurements. A model is proposed that explains the changes in electrical and luminescent parameters by the displacement of Ag ions into interstitial sites under lattice deformation forces. The formation of an Ag_i donor impurity band in ZnSe samples doped with Ag and stored at room temperature is also studied. Thus, the properties of the doped samples are modified by large lattice relaxation during aging, a fact that should be taken into account in optoelectronic applications of doped ZnSe and related compounds.
Abstract:
In this work we attempt to verify, among other factors, the configuration of innovative milieus at the local scale, where a network of firms takes shape and traditional know-how, a shared culture and social capital are applied that adapt well to the most competitive international markets. Specifically, the municipality under study, A Estrada (Pontevedra), shows economic development based on furniture manufacturing.
Abstract:
The pavilion-type lazaretto model took shape from the scientific principles established during the hospital reform process that occurred in France in the last third of the eighteenth century. The solutions that shaped the new quarantine typology were drawn not from the new hospital types resulting from that debate, but from other existing quarantine facilities and, more generally, from institutions of confinement. This article analyses all the factors that influenced the configuration of this lazaretto model.
Abstract:
PURPOSE: To optimize and preliminarily evaluate a three-dimensional (3D) radial balanced steady-state free precession (bSSFP) arterial spin labeled (ASL) sequence for nonenhanced MR angiography (MRA) of the extracranial carotid arteries. MATERIALS AND METHODS: The carotid arteries of 13 healthy subjects and 2 patients were imaged on a 1.5 Tesla MRI system using an undersampled 3D radial bSSFP sequence providing a scan time of ∼4 min and 1 mm(3) isotropic resolution. A hybridized scheme that combined pseudocontinuous and pulsed ASL was used to maximize arterial coverage. The impact of a post label delay period, the sequence repetition time, and the radiofrequency (RF) energy configuration of pseudocontinuous labeling on the display of the carotid arteries was assessed with contrast-to-noise ratio (CNR) measurements. Faster, more highly undersampled 2-min and 1-min scans were also tested. RESULTS: Using hybridized ASL MRA and a 3D radial bSSFP trajectory, arterial CNR was maximized with a post label delay of 0.2 s, repetition times ≥ 2.5 s (P < 0.05), and by eliminating RF energy during the pseudocontinuous control phase (P < 0.001). With higher levels of undersampling, the carotid arteries were displayed in ≤ 2 min. CONCLUSION: Nonenhanced MRA using hybridized ASL with a 3D radial bSSFP trajectory can display long lengths of the carotid arteries with 1 mm(3) isotropic resolution. J. Magn. Reson. Imaging 2015;41:1150-1156. © 2014 Wiley Periodicals, Inc.
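CNR comparisons like those above are typically computed from region-of-interest (ROI) statistics. A minimal sketch follows, assuming the common definition CNR = (mean arterial signal - mean background signal) / noise standard deviation; the ROI selection and this exact formula are assumptions, since the abstract does not state the study's measurement protocol.

```python
import numpy as np

def contrast_to_noise_ratio(artery_roi, background_roi, noise_roi):
    """CNR between an arterial ROI and a background ROI, normalized by the
    sample standard deviation of a signal-free (noise-only) ROI.
    Definition and ROI choices are illustrative assumptions."""
    artery_roi = np.asarray(artery_roi, dtype=float)
    background_roi = np.asarray(background_roi, dtype=float)
    noise_roi = np.asarray(noise_roi, dtype=float)
    return (artery_roi.mean() - background_roi.mean()) / noise_roi.std(ddof=1)

# Example: bright labeled-artery voxels vs. suppressed static background.
cnr = contrast_to_noise_ratio([200.0] * 50, [50.0] * 50, [0.0, 2.0, 4.0, 6.0])
```

Maximizing this quantity over the post label delay, repetition time and RF configuration is then a straightforward parameter sweep.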
Abstract:
PURPOSE OF REVIEW: Multimodal monitoring (MMM) is routinely applied in neurointensive care. Unfortunately, there is no robust evidence on which MMM-derived physiologic variables are the most clinically relevant, how and when they should be monitored, and whether MMM impacts outcome. The complexity is even higher because once the data are continuously collected, interpretation and integration of these complex physiologic events into targeted individualized care is still embryonic. RECENT FINDINGS: Recent clinical investigation mainly focused on intracranial pressure, perfusion of the brain, and oxygen availability along with electrophysiology. Moreover, a series of articles reviewing the available evidence on all the MMM tools, giving practical recommendations for bedside MMM, has been published, along with other consensus documents on the role of neuromonitoring and electroencephalography in this setting. SUMMARY: MMM allows comprehensive exploration of the complex pathophysiology of acute brain damage and, depending on the different configuration of the pathological condition we are treating, the application of targeted individualized care. Unfortunately, we still lack robust evidence on how to better integrate MMM-derived information at the bedside to improve patient management. Advanced informatics is promising and may provide us a supportive tool to interpret physiologic events and guide pathophysiological-based therapeutic decisions.
Abstract:
Starting from the statistical data in two recent surveys on the knowledge, use and configuration of linguistic identities among young people, this paper analyses the influence of the school on the linguistic reproduction of a minoritized language. It is observed that, while the education system is very important in reproducing both oral and written knowledge of the language, its influence on actual language use in Catalan is smaller, and its effect on the configuration of young people's Catalan linguistic identity is nil. On the basis of this finding, the possible causes are analysed and alternatives are proposed that frame the configuration of young people's linguistic identity as a project for the future.