959 results for STRUCTURAL DEVELOPMENT


Relevance:

30.00%

Publisher:

Abstract:

An efficient high-resolution three-dimensional (3-D) seismic reflection system for small-scale targets in lacustrine settings was developed. In Lake Geneva, near the city of Lausanne, Switzerland, past high-resolution two-dimensional (2-D) investigations revealed a complex fault zone (the Paudèze thrust zone), which was subsequently chosen for testing our system. Observed structures include a thin (<40 m) layer of subhorizontal Quaternary sediments that unconformably overlie southeast-dipping Tertiary Molasse beds and the Paudèze thrust zone, which separates Plateau and Subalpine Molasse units. Two complete 3-D surveys have been conducted over this same test site, covering an area of about 1 km². In 1999, a pilot survey (Survey I), comprising 80 profiles, was carried out in 8 days with a single-streamer configuration. In 2001, a second survey (Survey II) used a newly developed three-streamer system with optimized design parameters, which provided an exceptionally high-quality data set of 180 common midpoint (CMP) lines in 9 days. The main improvements include a navigation and shot-triggering system with in-house navigation software that automatically fires the gun, combined with real-time control of navigation quality using differential GPS (dGPS) onboard and a reference base near the lake shore. Shots were triggered at 5-m intervals with a maximum non-cumulative error of 25 cm.
Whereas the single 48-channel streamer system of Survey I required extrapolation of receiver positions from the boat position, for Survey II they could be accurately calculated (error <20 cm) with the aid of three additional dGPS antennas mounted on rafts attached to the end of each of the 24-channel streamers. Towed at a distance of 75 m behind the vessel, they allow the determination of feathering due to cross-line currents or small course variations. Furthermore, two retractable booms hold the three streamers at a distance of 7.5 m from each other, which is the same distance as the sail line interval for Survey I. With a receiver spacing of 2.5 m, the bin dimension of the 3-D data of Survey II is 1.25 m in the in-line direction and 3.75 m in the cross-line direction. The greater cross-line versus in-line spacing is justified by the known structural trend of the fault zone perpendicular to the in-line direction. The data from Survey I showed some reflection discontinuity as a result of insufficiently accurate navigation and positioning and subsequent binning errors. Observed aliasing in the 3-D migration was due to insufficient lateral sampling combined with the relatively high-frequency (<2000 Hz) content of the water gun source (operated at 140 bars and 0.3 m depth). These results motivated the use of a double-chamber bubble-canceling air gun for Survey II. A 15/15 Mini G.I air gun, operated at 80 bars and 1 m depth, proved to be better adapted for imaging the complexly faulted target area, which has reflectors dipping up to 30°. Although its frequencies do not exceed 650 Hz, this air gun combines a penetration of non-aliased signal to depths of 300 m below the water bottom (versus 145 m for the water gun) with a maximum vertical resolution of 1.1 m.
While Survey I was shot in patches of alternating directions, the optimized surveying time of the new three-streamer system allowed acquisition in parallel geometry, which is preferable when using an asymmetric configuration (single source and receiver array); otherwise, the resulting stacks differ for opposite shooting directions. However, the shorter streamer configuration of Survey II reduced the nominal fold from 12 to 6. A conventional 3-D processing flow was adapted to the high sampling rates and was complemented by two computer programs that convert the unconventional navigation data to industry standards. Processing included trace editing, geometry assignment, bin harmonization (to compensate for uneven fold due to boat/streamer drift), spherical divergence correction, bandpass filtering, velocity analysis, 3-D DMO correction, stacking and 3-D time migration. A detailed semblance velocity analysis was performed on the 12-fold data set for every second in-line and every 50th CMP, i.e. on a total of 600 spectra. According to this velocity analysis, interval velocities range from 1450-1650 m/s for the unconsolidated sediments and from 1650-3000 m/s for the consolidated sediments. Delineation of several horizons and fault surfaces reveals the potential for small-scale geologic and tectonic interpretation in three dimensions. Five major seismic facies and their detailed 3-D geometries can be distinguished in vertical and horizontal sections: lacustrine sediments (Holocene), glaciolacustrine sediments (Pleistocene), Plateau Molasse, Subalpine Molasse and its thrust fault zone. Dips of beds within the Plateau and Subalpine Molasse are ~8° and ~20°, respectively. Within the fault zone, many highly deformed structures with dips around 30° are visible. Preliminary tests with 3-D preserved-amplitude prestack depth migration demonstrate that the excellent data quality of Survey II allows application of such sophisticated techniques even to high-resolution seismic surveys.
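The bin dimensions quoted above follow directly from the acquisition geometry: a common-midpoint bin is half the corresponding field spacing. A minimal sketch (the halving rule is standard CMP geometry, not a formula stated in the abstract):

```python
# CMP bin geometry implied by the Survey II parameters above:
# a common-midpoint bin is half the corresponding field spacing.

receiver_spacing = 2.5   # m, along each streamer (in-line direction)
streamer_spacing = 7.5   # m, between the three streamers (cross-line direction)

bin_inline = receiver_spacing / 2     # -> 1.25 m, matching the abstract
bin_crossline = streamer_spacing / 2  # -> 3.75 m, matching the abstract

print(f"bin size: {bin_inline} m (in-line) x {bin_crossline} m (cross-line)")
```

The asymmetry (finer sampling in-line than cross-line) is acceptable here because, as the text notes, the structural trend of the fault zone runs perpendicular to the in-line direction.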
In general, the adaptation of the 3-D marine seismic reflection method, which to date has almost exclusively been used by the oil exploration industry, to a smaller geographical as well as financial scale has helped pave the way for applying this technique to environmental and engineering purposes.

Seismic reflection is a method of subsurface investigation with very high resolving power. It consists of sending vibrations into the ground and recording the waves that are reflected at geological discontinuities at different depths and then travel back to the surface, where they are recorded. The signals collected in this way not only give information on the nature and geometry of the layers present, but also allow a geological interpretation of the subsurface. In the case of sedimentary rocks, for example, seismic reflection profiles make it possible to determine their mode of deposition, any deformation or faulting, and hence their tectonic history. Seismic reflection is the principal method of petroleum exploration. For a long time, seismic reflection data were acquired along individual profiles, which provide a two-dimensional image of the subsurface. Images obtained in this way are only partially accurate, since they do not account for the three-dimensional nature of geological structures. Over the past few decades, three-dimensional (3-D) seismics has brought fresh impetus to the study of the subsurface. Although it is now fully mastered for imaging large geological structures both onshore and offshore, its adaptation to the lacustrine or fluvial scale has so far been the subject of only a few studies. This thesis consisted of developing a seismic acquisition system similar to that used for offshore petroleum prospecting, but adapted to lakes. It is therefore smaller, lighter to deploy and, above all, delivers final images of much higher resolution. Whereas the oil industry is often content with a resolution on the order of ten metres, the instrument developed in this work makes it possible to see details on the order of one metre. The new system relies on recording seismic reflections simultaneously on three seismic cables (streamers) of 24 channels each. To obtain 3-D data, it is essential to position the instruments on the water (the source and the receivers of the seismic waves) with great precision. Software was developed specifically for navigation control and for triggering the shots of the seismic source, using differential GPS (dGPS) receivers on the boat and at the end of each streamer. This makes it possible to position the instruments with an accuracy of about 20 cm. To test our system, we chose an area of Lake Geneva, near the city of Lausanne, crossed by the "La Paudèze" fault zone, which separates the Plateau Molasse and Subalpine Molasse units. Two 3-D seismic surveys were carried out there over an area of about 1 km². The seismic recordings were then processed to transform them into interpretable images. We applied a 3-D processing sequence specially adapted to our data, in particular with regard to positioning. After processing, the data reveal several main seismic facies corresponding notably to the lacustrine sediments (Holocene), the glaciolacustrine sediments (Pleistocene), the Plateau Molasse, the Subalpine Molasse of the fault zone and the Subalpine Molasse south of this zone. The detailed 3-D geometry of the faults is visible in vertical and horizontal seismic sections. The excellent data quality and the interpretation of several horizons and fault surfaces demonstrate the potential of this technique for small-scale three-dimensional investigations, opening the way to applications in the environmental and civil engineering domains.

Relevance:

30.00%

Publisher:

Abstract:

Huntington's disease is an incurable neurodegenerative disease caused by inheritance of an expanded cytosine-adenine-guanine (CAG) trinucleotide repeat within the Huntingtin gene. Extensive volume loss and altered diffusion metrics in the basal ganglia, cortex and white matter are seen when patients with Huntington's disease (HD) undergo structural imaging, suggesting that changes in basal ganglia-cortical structural connectivity occur. The aims of this study were to characterise altered patterns of basal ganglia-cortical structural connectivity with high anatomical precision in premanifest and early manifest HD, and to identify associations between structural connectivity and genetic or clinical markers of HD. 3-Tesla diffusion tensor magnetic resonance images were acquired from 14 early manifest HD subjects, 17 premanifest HD subjects and 18 controls. Voxel-based analyses of probabilistic tractography were used to quantify basal ganglia-cortical structural connections. Canonical variate analysis was used to demonstrate disease-associated patterns of altered connectivity and to test for associations between connectivity and genetic and clinical markers of HD; this is the first study in which such analyses have been used. Widespread changes were seen in basal ganglia-cortical structural connectivity in early manifest HD subjects; this has relevance for development of therapies targeting the striatum. Premanifest HD subjects had a pattern of connectivity more similar to that of controls, suggesting progressive change in connections over time. Associations between structural connectivity patterns and motor and cognitive markers of disease severity were present in early manifest subjects. Our data suggest the clinical phenotype in manifest HD may be at least partly a result of altered connectivity. Hum Brain Mapp 36:1728-1740, 2015. © 2015 Wiley Periodicals, Inc.

Relevance:

30.00%

Publisher:

Abstract:

[eng] We analyze the equilibrium of a multi-sector exogenous growth model where the introduction of minimum consumption requirements drives structural change. We show that equilibrium dynamics simultaneously exhibit structural change and balanced growth of aggregate variables, as is observed in the US, when the initial intensity of minimum consumption requirements is sufficiently small. This intensity is measured by the ratio between the aggregate value of the minimum consumption requirements and GDP and is therefore inversely related to the level of economic development. Initially rich economies benefit from an initially low intensity of the minimum consumption requirements and, as a consequence, end up exhibiting balanced growth of aggregate variables while structural change takes place. In contrast, initially poor economies suffer from an initially large intensity of the minimum consumption requirements, which makes the growth of aggregate variables unbalanced over a very long period. These economies may never simultaneously exhibit balanced growth of aggregate variables and structural change.
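The mechanism described, in which the requirements' share of income vanishes as the economy grows, can be illustrated with a hedged two-sector sketch. Stone-Geary demand with a subsistence requirement in one sector is an illustrative assumption here, not necessarily the authors' exact specification:

```python
# Two-sector Stone-Geary sketch (an illustrative assumption, not the paper's
# exact model): sector 1 carries a minimum consumption requirement cbar.
# At low income E the requirement dominates spending (unbalanced growth);
# as E grows, shares converge to the asymptotic weights (balanced growth),
# while the "intensity" (requirements relative to income) falls toward zero.

def shares(E, cbar=1.0, beta1=0.2, beta2=0.8, p1=1.0):
    # spending on sector 1 = subsistence part + its share of discretionary income
    e1 = p1 * cbar + beta1 * (E - p1 * cbar)
    return e1 / E, (E - e1) / E  # expenditure shares of sectors 1 and 2

for E in (1.5, 5.0, 50.0, 500.0):
    s1, s2 = shares(E)
    intensity = 1.0 / E  # value of requirements relative to income
    print(f"E={E:6.1f}  share1={s1:.3f}  share2={s2:.3f}  intensity={intensity:.3f}")
```

The printed shares drift from requirement-dominated toward the constant weights (0.2, 0.8), mirroring the paper's contrast between initially poor and initially rich economies.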

Relevance:

30.00%

Publisher:

Abstract:

The Tandem-GMAW method is a recent development arising from successive improvements in welding methods. The twin-wire method, and later the Tandem method with separate power sources, has earned a notable place in the welding of many types of materials with different joint types. The biggest advantage of the Tandem welding method is the flexibility to choose electrodes of different types for the two wires according to the parent material. This is possible because separate welding parameters can be set for each wire. This thesis studies the effect of varying three parameters on the weld bead in Tandem-GMA welding: the wire feed rate of the slave wire, the wire feed rate of the master wire, and the voltage difference between the two wires. The results are then compared to study how the weld bead behaves as these parameters change.


Relevance:

30.00%

Publisher:

Abstract:

[eng] The study analyzes the evolution of greenhouse gas (GHG) and acidification emissions for Italy over the period 1995-2005. The data show that while emissions contributing to acidification decreased steadily, GHG emissions increased, driven by rising carbon dioxide. The aim of this study is to highlight how different economic factors, in particular economic growth, the development of less polluting technology and the structure of consumption, have driven the evolution of emissions. The proposed methodology is structural decomposition analysis (SDA), a method that breaks down changes in the variable of interest into the different driving forces and reveals the importance of each factor. Furthermore, the study considers the importance of international trade and attempts to address the "responsibility problem": through international trade relations, a country could be exporting polluting production processes without any real reduction in the pollution embodied in its consumption pattern. To this end, following first a "producer responsibility" approach, the SDA is applied to the emissions caused by domestic production. The analysis then moves to a "consumer responsibility" approach, and the decomposition is applied to the emissions associated with the domestic or foreign production that satisfies domestic demand. In this way, the exercise provides a first check of the importance of international trade and highlights results at both the aggregate and the sectoral level.
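The structural decomposition idea can be sketched in a few lines. The numbers and the two-factor form (emissions = intensity × output per sector, split with midpoint weights) are illustrative assumptions, not the study's actual data or its full multi-factor decomposition:

```python
# Minimal two-factor structural decomposition sketch (illustrative numbers,
# not the study's data): per-sector emissions E = intensity f x output y.
# The change in total emissions splits exactly into a technology (intensity)
# effect and a scale (growth) effect when midpoint weights are used.

f0, y0 = [0.9, 0.5], [100.0, 200.0]   # base year: intensity, output (2 sectors)
f1, y1 = [0.7, 0.45], [130.0, 260.0]  # end year

def total(f, y):
    return sum(fi * yi for fi, yi in zip(f, y))

dE = total(f1, y1) - total(f0, y0)
tech_effect = sum((f1[i] - f0[i]) * (y0[i] + y1[i]) / 2 for i in range(2))
scale_effect = sum((f0[i] + f1[i]) / 2 * (y1[i] - y0[i]) for i in range(2))

assert abs(dE - (tech_effect + scale_effect)) < 1e-9  # decomposition is exact
print(f"dE={dE:+.1f}  technology={tech_effect:+.1f}  scale={scale_effect:+.1f}")
```

With these toy numbers the cleaner technology pushes emissions down while output growth pushes them up, which is the same tension the abstract describes for Italian CO2.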


Relevance:

30.00%

Publisher:

Abstract:

The development of nuclear hormone receptor antagonists that directly inhibit the association of the receptor with its essential coactivators would allow useful manipulation of nuclear hormone receptor signaling. We previously identified 3-(dibutylamino)-1-(4-hexylphenyl)-propan-1-one (DHPPA), an aromatic β-amino ketone that inhibits coactivator recruitment to thyroid hormone receptor β (TRβ), in a high-throughput screen. Initial evidence suggested that the aromatic β-enone 1-(4-hexylphenyl)-prop-2-en-1-one (HPPE), which alkylates a specific cysteine residue on the TRβ surface, is liberated from DHPPA. Nevertheless, aspects of the mechanism and specificity of action of DHPPA remained unclear. Here, we report an x-ray structure of TRβ with the inhibitor HPPE at 2.3-Å resolution. Unreacted HPPE is located at the interface that normally mediates binding between TRβ and its coactivator. Several lines of evidence, including experiments with TRβ mutants and mass spectrometric analysis, showed that HPPE specifically alkylates cysteine residue 298 of TRβ, which is located near the activation function-2 pocket. We propose that this covalent adduct formation proceeds through a two-step mechanism: 1) β-elimination of DHPPA to form HPPE; and 2) slow formation of a covalent bond between HPPE and TRβ. DHPPA represents a novel class of potent TRβ antagonist, and its crystal structure suggests new ways to design antagonists that target the assembly of nuclear hormone receptor gene-regulatory complexes and block transcription.

Relevance:

30.00%

Publisher:

Abstract:

Many of the reproductive disorders that emerge in adulthood have their origin during fetal development. Numerous studies have demonstrated that exposure to endocrine disrupting chemicals can permanently affect the reproductive health of experimental animals. In mammals, male sexual differentiation and development are androgen-dependent processes. In rat, the critical programming window for masculinization occurs between embryonic days (EDs) 15.5 and 19.5. Disorders in sex steroid balance during fetal life can disturb the development of the male reproductive tract. In addition to the fetal testis, the adrenal cortex starts to produce steroid hormones before birth. Glucocorticoids produced by the adrenal cortex are essential for preparing the fetus for birth. In the present study, the effects of exposure to endocrine disrupters on fetal male rat testicular and adrenal development were investigated. To differentiate the systemic and direct testicular effects of endocrine disrupters, both in vivo and in vitro experiments were performed. The present study also clarified the role of desert hedgehog signalling (Dhh) in the development of the testis. The results indicate that endocrine disrupters, diethylstilbestrol (DES) and flutamide, are able to induce rapid steroidogenic changes in fetal rat testis under in vitro conditions. Although in utero exposure to these chemicals did not show overt effects in fetal testis, they can induce permanent changes in the developing testis and accessory sex organs later in life. We also reported that exposure to antiandrogens can interfere with testicular Dhh signalling and result in impaired differentiation of the fetal Leydig cells and subsequently lead to abnormal testicular development and sexual differentiation. In utero exposure to tetrachlorodibenzo-p-dioxin (TCDD) caused direct testicular and pituitary effects on the fetal male rat but with different dose responses. 
In a study investigating the effects of developmental exposure to the environmental antiandrogens di-isononylphthalate and 1,1-dichloro-2,2-bis(p-chlorophenyl)ethylene (p,p-DDE) on fetal male rat steroidogenesis, the chemicals did not down-regulate testicular or adrenal steroid hormone synthesis or production in 19.5-day-old fetal rats. However, p,p-DDE treatment caused clear histological and ultrastructural changes in the prenatal testis and adrenal gland. These structural alterations can disturb the development and function of the fetal testis and adrenal gland in ways that may become evident later in life. Exposure to endocrine disrupters during fetal life can cause morphological abnormalities and alter steroid hormone production by fetal rat Leydig cells and adrenocortical cells. These changes may contribute to the maldevelopment of the testis and the adrenal gland. The present study highlights the importance of the fetal period as a sensitive window for endocrine disruption.

Relevance:

30.00%

Publisher:

Abstract:

Russia has been one of the fastest-developing economic areas in the world. Measured by GDP, the Russian economy grew steadily from the 1998 crisis until 2008, with annual growth in gross domestic product of some 5-10%. In 2007, growth reached 8.1%, the highest figure since the 10% growth of 2000. Owing to the growth of the economy and of wage levels, purchasing power and consumption have been increasing strongly. The growing consumption has especially increased imports of durables, such as passenger cars, domestic appliances and electronics. The Russian ports and infrastructure have not been able to satisfy the growing needs of exports and imports, which is why quite a large share of Russian foreign trade goes through third countries as transit transports. Finnish ports play a major role in transit transports to and from Russia; about 15% of the total value of Russian imports was transported through Finland in 2008. The economic recession that started in autumn 2008 and continues to date has had an impact on the economic development of Russia. Export income has decreased, mainly due to the reduced world market prices of energy products (oil and gas) and raw minerals. Investments have been postponed, obtaining credit is more difficult than before, and the ruble has weakened against the euro and the dollar. Imports are decreasing markedly and are not forecast to reach 2008 volumes even in 2012. The economic crisis is reflected in Finland's transit traffic: the volume of goods transported through Finland to and from Russia has decreased almost in the same proportion as imports of goods to Russia. The biggest long-term risk threatening the development of the Russian economy is its dependence on export income from oil, gas, metals, minerals and forest products, as well as on the trends of the world market prices of these products.
Nevertheless, it is expected that the GDP of Russia will start to grow again in the forthcoming years due to the increased demand for energy products and raw minerals in the world. At the same time, it is obvious that the world market prices of these products will go up with the increasing demand. The increased income from exports will lead to a growth of imports, especially those of consumer goods, as the living standard of Russian citizens rises. The forecasts produced by the Russian Government concerning the economic development of Russia up till 2030 also indicate a shift in exported goods from raw materials to processed products, which together with energy products will become the main export goods of Russia. As a consequence, Russia may need export routes through third countries, which can be seen as an opportunity for increased transit transports through the ports of Finland. The ports competing with the ports of Finland for Russian foreign trade traffic are the Russian Baltic Sea ports and the ports of the Baltic countries. The strongest competitors are the Baltic Sea ports handling containers. On the Russian Baltic Sea, these ports include Saint Petersburg, Kaliningrad and, in the near future, the ports of Ust-Luga and possibly Vyborg. There are plans to develop Ust-Luga and Vyborg as modern container ports, which would become serious competitors to the Finnish ports. Russia is aiming to redirect as large a share as possible of foreign trade traffic to its own ports. The ports of Russia and the infrastructure associated with them are under constant development. On the other hand, the logistic capacity of Russia is not able to satisfy the continually growing needs of the Russian foreign trade. The capacity problem is emphasized by a structural incompatibility between the exports and imports in the Russian foreign trade. Russian exports can only use a small part of the containers brought in with imports. 
Problems are also caused by difficult ice conditions and the narrow waterways leading to the ports. Finland is predicted to maintain its position as a transit route for Russian foreign trade, at least in the near future: the trade is increasing, and Russia will not be able to develop its ports in proportion to it. As port capacity develops, cargo flows through the ports of Russia will grow. Structural changes in transit traffic are already visible. Firms are increasingly relocating their production to Russia, for example car assembly and warehousing services, and an increasing part of transit cargoes is sent directly to Russia without unloading and reloading in Finland. New product groups (textile products and tools) have nevertheless been transported through Finland, replacing the lost cargoes. The global recession that started in autumn 2008 has reduced the volume of Russian imports and, consequently, the transit volumes of Finland, but the recession is not expected to last long and should thus have only a short-term impact on transit volumes. The Finnish infrastructure and the services offered by the logistic chain should also be ready to react to changes in imported product groups and, in the future, in Russian export products: if the development plans of the Russian economy are realized, export products will be more refined and the share of energy and raw material products will decrease. Another notable factor is the extremely fast-changing business environment in Russia. Operators in the logistic chain should be flexible enough to adapt to change, so that the business opportunities offered by Russian foreign trade can continue to benefit the companies involved and the transit volumes of Finnish ports.

Relevância:

30.00%

Publicador:

Resumo:

Structural studies of proteins aim at elucidating the atomic details of molecular interactions in the biological processes of living organisms. These studies are particularly important for understanding the structure, function and evolution of proteins and for defining their roles in complex biological settings. Furthermore, structural studies can be used to develop novel properties in biomolecules of environmental, industrial and medical importance. X-ray crystallography is an invaluable tool for obtaining accurate and precise information about the structure of proteins at the atomic level. Glutathione transferases (GSTs) are among the most versatile enzymes in nature: they catalyze a wide variety of conjugation reactions between glutathione (GSH) and non-polar compounds containing an electrophilic carbon, nitrogen or sulphur atom. Plant GSTs of the Tau class (a poorly characterized class) play an important role in the detoxification of xenobiotics and in stress tolerance. Structural studies were performed on a Tau class fluorodifen-inducible glutathione transferase from Glycine max (GmGSTU4-4) complexed with GSH (2.7 Å) and with the product analogue Nb-GSH (1.7 Å). The three-dimensional structure of the GmGSTU4-4–GSH complex revealed that GSH binds in different conformations in the two subunits of the dimer: in an ionized form in one subunit and a non-ionized form in the other. Only the ionized form of the substrate can lead to the formation of a catalytically competent complex. Structural comparison between the GSH- and Nb-GSH-bound complexes revealed significant differences in the hydrogen-bonding and electrostatic interaction patterns, the upper part of α-helix H4 and the C-terminus of the enzyme. These differences indicate an intrasubunit modulation between the G- and H-sites, suggesting an induced-fit mechanism of xenobiotic substrate binding. A novel binding site on the surface of the enzyme was also revealed.
Bacterial type-II L-asparaginases are used in the treatment of haematopoietic diseases such as acute lymphoblastic leukaemia (ALL) and lymphomas owing to their ability to catalyze the conversion of L-asparagine to L-aspartate and ammonia. Escherichia coli and Erwinia chrysanthemi asparaginases have been employed in the treatment of ALL for over 30 years. However, serious side-effects affecting the liver and pancreas have been observed, caused by the intrinsic glutaminase activity of the administered enzymes. Structural studies on Helicobacter pylori L-asparaginase (HpA) were carried out in an effort to discover novel L-asparaginases with potential chemotherapeutic utility in ALL treatment. Detailed analysis of the active-site geometry revealed structurally significant differences between HpA and other L-asparaginases that may be important for the biological activities of the enzyme and could be further exploited in protein engineering efforts.

Relevância:

30.00%

Publicador:

Resumo:

Software systems are expanding and becoming increasingly present in everyday activities. A constantly evolving society demands that they deliver more functionality, are easy to use and work as expected. All these challenges increase the size and complexity of a system. People may not even be aware of the presence of a software system until it malfunctions or fails to perform. Being able to depend on the software is particularly significant for critical systems, where quality is an essential issue, since any deficiency may lead to considerable financial loss or endanger lives. Traditional development methods may not ensure a sufficiently high level of quality. Formal methods, on the other hand, allow us to achieve a high level of rigour and can be applied to develop a complete system or only a critical part of it. Such techniques, applied from the early design stages of system development, increase the likelihood of obtaining a system that works as required. However, formal methods are often considered difficult to utilise in traditional development processes, so it is important to make them more accessible and to reduce the gap between formal and traditional development methods. This thesis explores the usability of rigorous approaches by giving an insight into formal designs through graphical notation; the understandability of formal modelling is increased by a compact representation of the development and of the related design decisions. The central objective of the thesis is to investigate the impact that rigorous approaches have on the quality of developments. This requires establishing techniques for the evaluation of rigorous developments; since various development settings and methods are studied, a specific measurement plan and set of metrics need to be created for each setting.
Our goal is to provide methods for collecting data and recording evidence of the applicability of rigorous approaches, which would support organisations in deciding whether to integrate formal methods into their development processes. It is important to control software development, especially in its initial stages; we therefore focus on the specification and modelling phases and on the related artefacts, e.g. models, which have a significant influence on the quality of the final system. Since the application of formal methods may increase the complexity of a system, it may also affect its maintainability, and thus its quality. We aim to improve the quality of a system via metrics and measurements, as well as generic refinement patterns applied to models and specifications. We argue that these can facilitate the process of creating software systems, e.g. by controlling complexity and providing modelling guidelines, and we regard them as additional mechanisms for quality control and improvement, also for rigorous approaches. The main contribution of this thesis is a set of metrics and measurements that help in assessing the impact of rigorous approaches on developments. We establish techniques for the evaluation of certain aspects of quality, based on structural, syntactical and process-related characteristics of early-stage development artefacts, i.e. specifications and models. The presented approaches are applied to various case studies, and the results of the investigation are juxtaposed with the perceptions of domain experts. It is our aspiration to promote measurement as an indispensable part of the quality control process and as a strategy for quality improvement.
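The abstract does not reproduce the thesis's actual metric suite, so the following is only an illustrative sketch of the kind of structural and syntactic measurement it describes: counting size-related features of an early-stage model (here imagined as an Event-B-style machine) and tracking how they evolve across a refinement chain. All names, fields and weights are hypothetical.

```python
# Hypothetical sketch: structural metrics over early-stage formal models.
# The metric names and the ModelSnapshot fields are illustrative only;
# the thesis's own measurement plans are not published in this abstract.
from dataclasses import dataclass, field

@dataclass
class ModelSnapshot:
    """Structural facts extracted from one refinement step of a model."""
    events: int
    variables: int
    invariants: int
    guards_per_event: list = field(default_factory=list)

def structural_metrics(model: ModelSnapshot) -> dict:
    """Compute simple size/complexity indicators for one model."""
    avg_guards = (sum(model.guards_per_event) / model.events
                  if model.events else 0.0)
    return {
        "size": model.events + model.variables,   # raw model size
        "proof_load": model.invariants,           # rough proxy for proof effort
        "avg_guard_complexity": avg_guards,       # branching per event
    }

def refinement_growth(chain: list) -> list:
    """Track how model size changes across a refinement chain, step by step."""
    sizes = [structural_metrics(m)["size"] for m in chain]
    return [b - a for a, b in zip(sizes, sizes[1:])]

# Example: a three-step refinement chain that grows gradually.
chain = [
    ModelSnapshot(events=3, variables=2, invariants=2, guards_per_event=[1, 1, 2]),
    ModelSnapshot(events=4, variables=3, invariants=4, guards_per_event=[1, 2, 2, 2]),
    ModelSnapshot(events=6, variables=4, invariants=6, guards_per_event=[2, 2, 2, 3, 3, 3]),
]
print(refinement_growth(chain))  # per-step growth in model size
```

A sudden spike in per-step growth would be the kind of signal such metrics are meant to surface: a refinement step that adds too much at once and may need splitting.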

Relevância:

30.00%

Publicador:

Resumo:

The environmental aspect of corporate social responsibility (CSR), expressed through the process of EMS implementation in oil and gas companies, is the main subject of this research. The theoretical part concentrates on justifying the link between CSR and environmental management. The achievement of sustainable competitive advantage as a result of environmental capital growth and the inclusion of socially responsible activities in corporate strategy is another issue of special significance here. In addition, two basic forms of environmental management system (environmental decision support systems and environmental information management systems) are explored, and their role in effective stakeholder interaction is addressed. The most crucial benefits of an EMS are also analyzed to underline its importance as a source of sustainable development. The empirical research is based on a survey of 51 sampled oil and gas companies (both publicly owned and state-owned) originating from different countries all over the world and providing openly accessible reports on sustainability issues. To analyze their approach to sustainable development, an evaluation matrix with 37 indicators was designed in accordance with the Global Reporting Initiative (GRI) guidelines for non-financial reporting. Additionally, the quality of environmental information disclosure was measured on the basis of a quality–quantity matrix. According to the results, oil and gas companies prefer implementing reactive measures over costly and knowledge-intensive proactive techniques for eliminating negative environmental impacts. It was also found that environmental performance disclosure is mostly rather limited, so that the quality of non-financial reporting must be judged as quite insufficient.
Although most of the oil and gas companies in the sample claim that an EMS is currently embedded in their structure, they often provide no details of its implementation. As potential directions for the further development of EMS, the author mentions the possible integration of its different forms into a single entity, the extension of the existing structure by consolidating structural and strategic precautions, and the development of a unified certification standard, in place of the several that exist today, in order to enhance control over EMS implementation.
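The study's 37-indicator GRI-based evaluation matrix is not reproduced in the abstract, so the following is a hedged sketch of how such a matrix can be tallied: each indicator is scored on a simple ordinal scale and aggregated into a total and a coverage ratio. The indicator names and the 0–2 scale are hypothetical stand-ins, not the study's actual instrument.

```python
# Hypothetical sketch of a GRI-style disclosure scoring matrix.
# Scale (illustrative): 0 = not reported, 1 = mentioned qualitatively,
# 2 = quantified data disclosed (a simple quality-quantity distinction).

def score_company(disclosure: dict, indicators: list) -> dict:
    """Score one company's sustainability report against an indicator list."""
    scores = {ind: disclosure.get(ind, 0) for ind in indicators}
    total = sum(scores.values())
    return {
        "scores": scores,
        "total": total,
        # Share of indicators the company reports on at all.
        "coverage": sum(1 for v in scores.values() if v > 0) / len(indicators),
    }

# Hypothetical subset of indicators (the study's real matrix has 37).
INDICATORS = ["emissions", "energy_use", "spills", "water_withdrawal"]

report = {"emissions": 2, "energy_use": 1, "spills": 0}
result = score_company(report, INDICATORS)
print(result["total"], result["coverage"])  # 3 0.5
```

Aggregating such scores across the 51 sampled companies is what allows the study's conclusion, that disclosure is "mostly rather limited", to be stated quantitatively rather than impressionistically.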

Relevância:

30.00%

Publicador:

Resumo:

The ability to recognize potential knowledge and convert it into business opportunities is one of the key factors of renewal in uncertain environments. This thesis examines absorptive capacity in the context of non-research-and-development innovation, with a primary focus on the social interaction that facilitates the absorption of knowledge. It proposes that everyone is, and should be, entitled to take part in the social interaction that shapes individual observations into innovations. Both innovation and absorptive capacity have traditionally been associated with research and development departments and institutions, whose innovations then need to be adopted and adapted by others. This so-called waterfall model of innovation is only one aspect of new knowledge generation: in addition to the Science–Technology–Innovation perspective, more attention has recently been paid to the Doing–Using–Interacting mode of generating new knowledge and innovations. The literature on absorptive capacity is vast, yet the concept is reified, and the greater part of the literature links absorptive capacity to research and development departments. Some publications have focused on the nature of absorptive capacity in practice and on the role of social interaction in enhancing it. Recent literature calls for studies that shed light on the relationship between individual and organisational absorptive capacity, and for examinations of absorptive capacity in non-research-and-development environments. Drawing on the literature on employee-driven innovation and social capital, this thesis looks at how individual observations and ideas are converted into something that an organisation can use. The critical phases of absorptive capacity, during which the ideas of individuals are incorporated into a group context, are assimilation and transformation.
These two phases are seen as complementary: whereas assimilation is the application of easy-to-accept knowledge, transformation challenges the current way of thinking, and the two require distinct kinds of social interaction and practices. The results of this study can be crystallised thus: "Enhancing absorptive capacity in a practice-based, non-research-and-development context means organising the optimal circumstances for social interaction. Every individual is a potential source of signals leading to innovations. The individual, thus, recognises opportunities and acquires signals. Through the social interaction processes of assimilation and transformation, these signals are processed into the organisation's reality and language. The conditions of creative social capital facilitate the interplay between assimilation and transformation. An organisation that strives for employee-driven innovation gains the benefits of a broader surface for opportunity recognition and faster absorption." If organisations and managers become more aware of the benefits of enhancing absorptive capacity in practice, they have reason to assign resources to the practices that facilitate its creation. By recognising the underlying social mechanisms and structural features that lead either to assimilation or to transformation, it is easier to strike a balance between renewal and effective operations.

Relevância:

30.00%

Publicador:

Resumo:

CHARGE syndrome, Sotos syndrome and 3p deletion syndrome are examples of rare inherited syndromes that have been recognized for decades but for which molecular diagnostics have only been made possible by recent advances in genomic research. Despite these advances, the development of diagnostic tests for rare syndromes has been hindered by the limited funds of diagnostic laboratories for test development and by their prioritization of tests for which a (relatively) high demand can be expected. In this study, molecular diagnostic tests for CHARGE syndrome and Sotos syndrome were developed and successfully translated into routine diagnostic testing in the laboratory of Medical Genetics (UTUlab). A mutation was identified in 40.5% of the patients in the CHARGE syndrome group and in 34% of the Sotos syndrome group, reflecting the use of the tests in routine differential diagnostics. In CHARGE syndrome, the low prevalence of structural aberrations was also confirmed. In 3p deletion syndrome, it was shown that small terminal deletions are not causative for the syndrome, and that array-based analysis provides a reliable estimate of the deletion size, although benign copy number variants complicate the interpretation of results. During the development of the tests, it became clear that finding an optimal molecular diagnostic strategy for a given syndrome is always a compromise between the sensitivity, specificity and feasibility of applying a new method. In addition, the clinical utility of a test should be considered prior to its development: sometimes a test that performs well in the laboratory has limited utility for the patient, whereas a test that performs poorly in the laboratory may have a great impact on the patient and their family. At present, the development of next-generation sequencing methods is changing the concept of molecular diagnostics for rare diseases from single tests towards whole-genome analysis.