38 results for non-ideal power sources
Abstract:
The text «De la parole dialogale» ("On Dialogic Speech") by Lev Jakubinskij is considered by a number of researchers to be the main source of Valentin Volochinov's conception of dialogue. This article questions the legitimacy of that thesis. A detailed analysis of the notions of dialogue developed by Jakubinskij and Volochinov shows that their theoretical foundations do not coincide. Whereas Jakubinskij draws on so-called objective psychology (reflexology), Volochinov develops his conception of dialogue on a sociological basis. The works of Marxist and non-Marxist sociologists constitute the main source of his notion of dialogue.
Abstract:
The traditionally coercive and state-controlled governance of protected areas for nature conservation in developing countries has in many cases undergone change in the context of widespread decentralization and liberalization. This article examines an emerging "mixed" (coercive, community- and market-oriented) conservation approach in managed-resource protected areas and its effects on state power through a case study on forest protection in the central Indian state of Madhya Pradesh. The findings suggest that imperfect decentralization and partial liberalization resulted in changed forms, rather than uniform loss, of state power. A forest co-management program paradoxically strengthened local capacity and influence of the Forest Department, which generally maintained its territorial and knowledge-based control over forests and timber management. Furthermore, deregulation and reregulation enabled the state to withdraw from uneconomic activities but also implied reduced place-based control of non-timber forest products. Generally, the new policies and programs contributed to the separation of livelihoods and forests in Madhya Pradesh. The article concludes that regulatory, community- and market-based initiatives would need to be better coordinated to lead to more effective nature conservation and positive livelihood outcomes.
Abstract:
Raman spectroscopy allows for the measurement of the inelastic scattering of light due to the vibrational modes of a molecule when irradiated by an intense monochromatic source such as a laser. This phenomenon was observed for the first time by Raman and Krishnan in 1928, an observation for which Raman was awarded the Nobel Prize in Physics in 1930. Raman spectroscopy was applied here to the dye analysis of textile fibers. Blue, black and red acrylics, cottons and wools were examined. The Raman technique offers advantages such as its non-destructive nature, fast analysis time, and the possibility of performing microscopic in situ analyses. However, the problem of fluorescence was often encountered. Several aspects were investigated to determine the best analytical conditions for each fiber type/color combination; these conditions were found to depend mainly on the laser chosen. The potential of the technique for the detection and identification of dyes was confirmed. A spectral database of 60 reference dyes was built to detect the main dyes used for the coloration of fiber samples. Particular attention was paid to the discriminating power of the technique. Based on the results of the Raman analysis of the different color blocks submitted for analysis, it was possible to obtain different classes of fibers according to the general shape of their spectra.
The ability of Raman spectroscopy to differentiate samples was compared with that of the conventional techniques used for the analysis of textile fibers, namely UV-Vis microspectrophotometry (UV-Vis MSP) and thin layer chromatography (TLC). The Raman technique proved less discriminating than MSP for every color block considered in this study. Thus, in an analytical sequence, it is recommended to use Raman spectroscopy after light microscopy and MSP, with as many laser sources as possible. It was shown that using several laser wavelengths allowed for the reduction of fluorescence and for the exploitation of a higher number of samples.
Abstract:
Summary Landscapes are continuously changing. Natural forces of change such as heavy rainfall and fires can exert lasting influences on their physical form. However, changes related to human activities have often shaped landscapes more distinctly. In Western Europe, modern agricultural practices and the expansion of built-up land in particular have left their marks on landscapes since the middle of the 20th century. In recent years it has become apparent that more and more changes formerly attributed to natural forces might indirectly be the result of human action. Perhaps the most striking landscape change indirectly driven by human activity witnessed today is the widespread retreat of Alpine glaciers. Along with the landscapes, the habitats of animal and plant species have undergone vast and sometimes rapid changes that have been held responsible for the ongoing loss of biodiversity. Yet little is known about the probable effects of the rate of landscape change on species persistence and disappearance. Therefore, the development and speed of land use/land cover change in the Swiss communes between the 1950s and 1990s were reconstructed using 10 parameters from agriculture and housing censuses, and were then correlated with changes in butterfly species occurrences. Cluster analyses were used to detect spatial patterns of change at broad spatial scales. Clusters of communes showing similar changes or transformation rates were identified for single decades and placed into a temporally dynamic sequence. The resulting picture showed a prevalent replacement of non-intensive agriculture by intensive practices, a strong spread of urban communes around city centres, and transitions towards larger farm sizes in the mountainous areas. Increasing transformation rates toward more intensive agricultural management were found especially until the 1970s, whereas afterwards the trends were commonly negative. 
However, transformation rates representing the development of residential buildings remained positive throughout the study period. The analyses of the butterfly species showed that grassland species reacted sensitively to the density of livestock in the communes. This might indicate the increased use of dry grasslands as cattle pastures, which show altered plant species compositions. Furthermore, these species also decreased in communes where farms with an agricultural area > 5 ha had disappeared. The species of wetland habitats were favoured in communes with smaller fractions of agricultural area and lower densities of large farms (> 10 ha) but did not show any correlation with transformation rates. It was concluded from these analyses that transformation rates might influence species disappearance to a certain extent, but that the states of the environmental predictors might generally outweigh the importance of the corresponding rates. Information on the current distribution of species is essential for nature conservation. Planning authorities that define priority areas for species protection or examine and authorise construction projects need to know about the spatial distribution of species. Hence, models that simulate the potential spatial distribution of species have become important decision tools. The underlying statistical analyses, such as the widely used generalised linear models (GLM), often rely on binary species presence-absence data. However, often only species presence data have been collected, especially for vagrant, rare or cryptic species such as butterflies or reptiles. Modellers have thus introduced randomly selected absence data to build distribution models. Yet selecting false absence data might bias the model results. Therefore, we investigated several strategies to select more reliable absence data to model the distribution of butterfly species based on historical distribution data. 
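The presence-only modelling problem described above can be sketched in a few lines. The following is a minimal illustration, not the thesis's actual method: all data are simulated, the predictor is hypothetical, and a binomial GLM with logit link is stood in for by scikit-learn's LogisticRegression. It shows the naive baseline of drawing pseudo-absences at random, which the thesis improves on by filtering candidate absences with long-term records.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Hypothetical environmental predictor for 200 communes (e.g. livestock density)
x = rng.uniform(0, 1, size=(200, 1))

# Simulated occurrences: probability of presence declines with the predictor
true_prob = 1 / (1 + np.exp(-(2 - 6 * x[:, 0])))
presence = (rng.uniform(size=200) < true_prob).astype(int)

# Presence-only setting: keep the observed presences, then draw an equal
# number of pseudo-absences at random from the remaining communes
presence_idx = np.flatnonzero(presence == 1)
absence_idx = rng.choice(np.flatnonzero(presence == 0),
                         size=len(presence_idx), replace=False)
idx = np.concatenate([presence_idx, absence_idx])

# Binomial GLM with logit link == logistic regression
model = LogisticRegression().fit(x[idx], presence[idx])
prob = model.predict_proba(x)[:, 1]  # occurrence probability per commune
```

With reliable pseudo-absences the fitted slope recovers the (here simulated) negative response of occurrence to the predictor; false absences placed in suitable habitat would flatten or reverse it.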
The results showed that better models were obtained when historical data from longer time periods were considered. Model performance increased further when long-term data of species with habitat requirements similar to those of the modelled species were used. This successful methodological approach was then applied to assess the consequences of future landscape changes for the occurrence of butterfly species inhabiting dry grasslands or wetlands. These habitat types have been subject to strong deterioration in recent decades, which makes their protection a pressing task. Four spatially explicit scenarios describing (i) ongoing land use changes as observed between 1985 and 1997, (ii) liberalised agricultural markets, and (iii) slightly and (iv) strongly lowered agricultural production provided probable directions of landscape change. Current species-environment relationships were derived from a statistical model and used to predict future occurrence probabilities in six major biogeographical regions of Switzerland: the Jura Mountains, the Plateau, the Northern and Southern Alps, and the Western and Eastern Central Alps. The main results were that dry grassland species profited from lowered agricultural production, whereas the overgrowth of open areas in the liberalisation scenario might impair species occurrence. The wetland species mostly responded with decreases in their occurrence probabilities in the scenarios, owing to a loss of their preferred habitat. Further analyses of the factors currently influencing species occurrences confirmed anthropogenic causes such as urbanisation, abandonment of open land, and agricultural intensification. Hence, landscape planning should pay more attention to these forces in areas currently inhabited by these butterfly species to enable sustainable species persistence. 
In this thesis historical data were used intensively to reconstruct past developments and make them useful for current investigations. Yet the availability of historical data and the analyses at broader spatial scales have often limited the explanatory power of the analyses conducted. Meaningful descriptors of former habitat characteristics and abundant species distribution data are generally sparse, especially for fine-scale analyses. However, this situation can be ameliorated by broadening the extent of the study site and the grain size used, as was done in this thesis by considering the whole of Switzerland with its communes. Nevertheless, current monitoring projects and data recording techniques are promising data sources that might allow more detailed analyses of long-term species reactions to landscape changes in the near future. This work also showed the value of historical species distribution data, for example their potential to locate still unknown species occurrences. The results might therefore contribute to further research activities investigating current and future species distributions, drawing on the immense richness of historical distribution data.
Abstract:
This study investigated whether oral terbutaline (TER) at a supra-therapeutic dose improves aerobic and anaerobic performance in 7 healthy competitive male athletes. On day 1, the ventilatory threshold (VT), maximum oxygen uptake (VO2max) and the corresponding power output were measured and used to determine the exercise load on days 2 and 3. On days 2 and 3, 8 mg of TER or placebo was orally administered in a double-blind process to athletes who rested for 3 h and then performed a battery of tests including a force-velocity exercise test, a running sprint and a maximal endurance cycling test at Δ50% (50% between VT and VO2max). Lactatemia, anaerobic parameters and endurance performance (VO2 and time to exhaustion) were measured during the corresponding tests. We found that TER administration did not improve any of the parameters of aerobic performance (p > 0.05). In addition, no change in VO2 kinetic parameters was found with TER compared to placebo (p > 0.05). Moreover, no enhancement of the force-velocity relationship was observed during sprint exercises after TER intake (p > 0.05); on the contrary, maximal strength decreased significantly after TER intake (p < 0.05), whereas maximal power remained unchanged (p > 0.05). In conclusion, acute oral administration of TER at a supra-therapeutic dose appears to have no relevant ergogenic effect on anaerobic or aerobic performance in healthy athletes. However, all participants experienced adverse side effects such as tremors.
Abstract:
The opportunistic ubiquitous pathogen Pseudomonas aeruginosa strain PAO1 is a versatile Gram-negative bacterium with the extraordinary capacity to colonize a wide diversity of ecological niches and to cause severe and persistent infections in humans. To ensure optimal coordination of the genes involved in nutrient utilization, this bacterium uses the NtrB/C and/or CbrA/B two-component systems to sense nutrient availability and to regulate accordingly the expression of genes involved in nutrient uptake and catabolism. NtrB/C is specialized in nitrogen utilization, while the CbrA/B system is involved in both carbon and nitrogen utilization; both systems activate their target genes in concert with the alternative sigma factor RpoN. Moreover, the NtrB/C and CbrA/B two-component systems regulate the secondary metabolism of the bacterium, including the production of virulence factors. In addition to this fine-tuned transcriptional regulation, P. aeruginosa can rapidly modulate its metabolism using small non-coding regulatory RNAs (sRNAs), which regulate gene expression at the post-transcriptional level by diverse and sophisticated mechanisms and contribute to the fast physiological adaptability of this bacterium. In our search for novel RpoN-dependent sRNAs modulating the nutritional adaptation of P. aeruginosa PAO1, we discovered NrsZ (Nitrogen regulated sRNA), a novel RpoN-dependent sRNA that is induced under nitrogen starvation by the NtrB/C two-component system. NrsZ has a unique architecture, formed of three similar stem-loop structures (SL I, II and III) separated by variable spacer sequences. Moreover, this sRNA is processed into short individual stem-loop molecules by internal cleavage involving the endoribonuclease RNase E. Regarding the functions of NrsZ in P. aeruginosa PAO1, this sRNA was shown to trigger swarming motility and the production of rhamnolipid biosurfactants. 
This regulation is due to the NrsZ-mediated activation of rhlA expression, a gene encoding an enzyme essential for swarming motility and rhamnolipid production. Interestingly, the SL I structure of NrsZ alone ensures its regulatory function on rhlA expression, suggesting that the similar SLs are the functional units of this modular sRNA. However, the regulatory mechanism by which NrsZ activates rhlA expression remains unclear and is currently being investigated. Additionally, the NrsZ regulatory network was investigated by a transcriptome analysis, which suggested that numerous genes involved in both primary and secondary metabolism are regulated by this sRNA. To underline the importance of NrsZ, we investigated its conservation in other Pseudomonas species and demonstrated that NrsZ is conserved and expressed under nitrogen limitation in Pseudomonas protegens Pf-5, Pseudomonas putida KT2442, Pseudomonas entomophila L48 and Pseudomonas syringae pv. tomato DC3000, strains with different ecological features, suggesting an important role of NrsZ in the adaptation of pseudomonads to nitrogen starvation. Interestingly, the architecture of the different NrsZ homologs is similarly composed of SL structures and variable spacer sequences. However, the number of SL repetitions is not identical, and one to six SLs were predicted for the different NrsZ homologs. Moreover, NrsZ is processed into short molecules in all the strains, similarly to what was previously observed in P. aeruginosa PAO1, and the heterologous expression of the NrsZ homologs restored rhlA expression, swarming motility and rhamnolipid production in the P. aeruginosa nrsZ mutant. In many respects, NrsZ is an atypical sRNA in the bacterial panorama. To our knowledge, NrsZ is the first described sRNA induced by the NtrB/C system. Moreover, its unique modular architecture and its processing into similar short SL molecules suggest that NrsZ belongs to a novel family of bacterial sRNAs. 
Abstract:
We characterize the value function of maximizing the total discounted utility of dividend payments for a compound Poisson insurance risk model when strictly positive transaction costs are included, leading to an impulse control problem. We illustrate that well known simple strategies can be optimal in the case of exponential claim amounts. Finally we develop a numerical procedure to deal with general claim amount distributions.
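In the standard formulation of such dividend impulse-control problems (notation assumed here, not taken from the paper), the surplus follows a compound Poisson risk process with premium rate c, and dividends are paid as impulses (τ_n, ζ_n), each charged a fixed transaction cost K > 0:

```latex
V(x) \;=\; \sup_{(\tau_n,\,\zeta_n)}\;
  \mathbb{E}_x\!\left[\,\sum_{n\ge 1} e^{-\delta \tau_n}\,
  u\bigl(\zeta_n - K\bigr)\,\mathbf{1}_{\{\tau_n < T\}}\right],
\qquad
X_t \;=\; x + ct - \sum_{i=1}^{N_t} Y_i \;-\; \sum_{\tau_n \le t} \zeta_n,
```

where N_t is a Poisson process, Y_i the claim amounts with distribution F, δ > 0 the discount rate, u the utility function, and T the ruin time. The value function is then typically characterized as a solution of the quasi-variational inequality

```latex
\max\Bigl\{\, c\,V'(x) + \lambda \int_0^{\infty}\!\bigl[V(x-y)-V(x)\bigr]\,dF(y) - \delta V(x),\;
  \sup_{K < \zeta \le x}\bigl[u(\zeta - K) + V(x-\zeta)\bigr] - V(x) \Bigr\} \;=\; 0 .
```

This is a generic sketch of the framework the abstract refers to, not the paper's exact statement.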
Abstract:
BACKGROUND and OBJECTIVE: A non-touch laser-induced microdrilling procedure is studied on the mouse zona pellucida (ZP). STUDY DESIGN/MATERIALS and METHODS: A 1.48-µm diode laser beam is focused into an 8-µm spot through a 45x objective of an inverted microscope. Mouse zygotes, suspended in a culture medium, are microdrilled by exposing their ZP to a short laser irradiation and allowed to develop in vitro. RESULTS: Sharp-edged holes of various sizes can be generated in the ZP with a single laser irradiation. Hole size can be varied by changing the irradiation time (3-100 ms) or the laser power (22-55 mW). Drilled zygotes present no signs of thermal damage under light and scanning electron microscopy and develop as expected in vitro, except for a distinct eight-shaped hatching behavior. CONCLUSION: The microdrilling procedure can generate standardized holes in the mouse ZP without any visible side effects. The hole formation can be explained by local photothermolysis of the protein matrix.
Resumo:
INTRODUCTION: To compare the power spectral changes of the voluntary surface electromyogram (sEMG) and of the compound action potential (M wave) in the vastus medialis and vastus lateralis muscles during fatiguing contractions. METHODS: Interference sEMG and force were recorded during 48 intermittent 3-s isometric maximal voluntary contractions (MVC) from 13 young, healthy subjects. M waves and twitches were evoked using supramaximal femoral nerve stimulation between successive MVCs. Mean frequency (Fmean) and median frequency were calculated from the sEMG and M waves. Muscle fiber conduction velocity (MFCV) was computed by cross-correlation. RESULTS: The power spectral shift to lower frequencies was significantly greater for the voluntary sEMG than for the M waves (P < 0.05). Over the fatiguing protocol, the overall average decrease in MFCV (~25%) was comparable to that of the sEMG Fmean (~22%), but significantly greater than that of the M-wave Fmean (~9%) (P < 0.001). The mean decline in MFCV was highly correlated with the mean decreases in both the sEMG and M-wave Fmean. CONCLUSIONS: The present findings indicate that, as fatigue progresses, central mechanisms may enhance the relative weight of the low-frequency components of the voluntary sEMG power spectrum, and/or the end-of-fiber (non-propagating) components may reduce the sensitivity of the M-wave spectrum to changes in conduction velocity.
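The spectral descriptors used here (mean and median frequency of a power spectrum) can be sketched as follows; this is a generic FFT-based illustration, not the study's actual processing pipeline:

```python
import numpy as np

def spectral_frequencies(signal, fs):
    """Mean and median frequency of a signal's power spectrum.

    The mean frequency is the power-weighted average of frequency;
    the median frequency splits the spectral power into two equal halves.
    """
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    f_mean = np.sum(freqs * power) / np.sum(power)
    cumulative = np.cumsum(power)
    f_median = freqs[np.searchsorted(cumulative, cumulative[-1] / 2.0)]
    return f_mean, f_median
```

For a pure 50 Hz sine sampled over an integer number of periods, both measures sit at about 50 Hz; progressive fatigue would shift both downward.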
Resumo:
This study investigated the contribution of sources and establishment characteristics to the exposure to fine particulate matter (PM2.5) in the non-smoking sections of bars, cafes, and restaurants in central Zurich. PM2.5 exposure was determined with a nephelometer. A random sample of hospitality establishments was investigated on all weekdays, from morning until midnight. Each visit lasted 30 min. Numbers of smokers and of other sources, such as candles and cooking processes, were recorded, as were seats, open windows, and open doors. Ambient air pollution data were obtained from public authorities. Data were analysed using robust MM regression. Over 14 warm, sunny days, 102 establishments were measured. Average establishment PM2.5 concentrations were 64.7 µg/m³ (s.d. = 73.2 µg/m³; 30-min maximum 452.2 µg/m³). PM2.5 was significantly associated with the number of smokers, the percentage of seats occupied by smokers, and outdoor PM. Each smoker increased PM2.5 on average by 15 µg/m³. No associations were found with other sources, open doors, or open windows. Bars had more smoking guests and showed significantly higher concentrations than restaurants and cafes. Smokers were the most important PM2.5 source in hospitality establishments, while outdoor PM defined the baseline. Concentrations are expected to be even higher during colder, unpleasant times of the year. PRACTICAL IMPLICATIONS: Smokers and ambient air pollution are the most important sources of fine airborne particulate matter (PM2.5) in the non-smoking sections of bars, restaurants, and cafes. Other sources do not significantly contribute to PM2.5 levels, while opening doors and windows is not an efficient means of removing pollutants. First, this demonstrates the impact that even a few smokers can have on particle levels. Second, it implies that creating non-smoking sections and relying on natural ventilation is not sufficient to bring PM2.5 down to levels that imply no harm for employees and non-smoking clients. [Authors]
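The kind of source-attribution regression reported above can be illustrated with a toy least-squares fit. Everything below is synthetic and uses ordinary least squares, whereas the study used robust MM regression on real measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 102                                    # number of establishments measured
smokers = rng.integers(0, 8, size=n)       # smokers present during the visit
outdoor_pm = rng.uniform(5.0, 25.0, n)     # outdoor PM2.5 baseline (ug/m3)
# Synthetic indoor PM2.5: 15 ug/m3 per smoker plus the outdoor baseline.
indoor_pm = 15.0 * smokers + outdoor_pm + rng.normal(0.0, 10.0, n)

# Design matrix: intercept, number of smokers, outdoor PM2.5.
X = np.column_stack([np.ones(n), smokers, outdoor_pm])
beta, *_ = np.linalg.lstsq(X, indoor_pm, rcond=None)
print(f"Estimated PM2.5 increase per smoker: {beta[1]:.1f} ug/m3")
```

Robust MM regression differs from this sketch mainly by downweighting outlying visits (e.g. a single very smoky bar), which stabilizes the per-smoker coefficient.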
Resumo:
The noise power spectrum (NPS) is the reference metric for understanding the noise content of computed tomography (CT) images. To evaluate the noise properties of clinical multidetector CT (MDCT) scanners, local 2D and 3D NPS were computed for different acquisition and reconstruction parameters. A 64-slice and a 128-slice MDCT scanner were employed. Measurements were performed on a water phantom in axial and helical acquisition modes. The CT dose index was identical for both installations. The influence of parameters such as the pitch, the reconstruction filter (soft, standard, and bone), and the reconstruction algorithm (filtered back-projection (FBP), adaptive statistical iterative reconstruction (ASIR)) was investigated. Images were also reconstructed in the coronal plane using a reformat process. 2D and 3D NPS were then computed. In axial acquisition mode, the 2D axial NPS showed an important variation in magnitude along the z-direction when measured at the phantom center. In helical mode, a directional dependency with a lobular shape was observed, while the magnitude of the NPS remained constant. Important effects of the reconstruction filter, pitch, and reconstruction algorithm were observed in the 3D NPS results for both MDCTs. With ASIR, a reduction of the NPS magnitude and a shift of the NPS peak towards the low-frequency range were visible. The 2D coronal NPS obtained from the reformatted images was affected by the interpolation when compared to the 2D coronal NPS obtained from 3D measurements. The noise properties of volumes measured on last-generation MDCTs were thus studied using a local 3D NPS metric. However, the impact of noise non-stationarity may need further investigation.
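A local 2D NPS of the kind described here is commonly estimated by ensemble-averaging the squared Fourier transform of mean-subtracted noise ROIs. A minimal sketch of that generic definition (not the exact processing chain of this study):

```python
import numpy as np

def nps_2d(rois, pixel_size):
    """Estimate the 2D NPS from a list of noise-only ROIs (2D arrays).

    NPS(u, v) = (dx * dy / (Nx * Ny)) * < |FFT2(ROI - mean(ROI))|^2 >,
    averaged over the ROI ensemble. Units for CT: HU^2 * mm^2.
    """
    ny, nx = rois[0].shape
    accum = np.zeros((ny, nx))
    for roi in rois:
        accum += np.abs(np.fft.fft2(roi - roi.mean())) ** 2
    nps = (pixel_size ** 2 / (nx * ny)) * accum / len(rois)
    return np.fft.fftshift(nps)  # put zero frequency at the center
```

A useful sanity check is Parseval's relation: integrating the NPS over spatial frequency recovers the pixel variance of the noise.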
Resumo:
The state of the art for describing image quality in medical imaging is to assess the performance of an observer conducting a task of clinical interest. This can be done by using a model observer leading to a figure of merit such as the signal-to-noise ratio (SNR). Using the non-prewhitening (NPW) model observer, we objectively characterised the evolution of its figure of merit under various acquisition conditions. The NPW model observer usually requires the modulation transfer function (MTF) as well as noise power spectra. However, although the computation of the MTF poses no problem when dealing with the traditional filtered back-projection (FBP) algorithm, this is not the case when using iterative reconstruction (IR) algorithms, such as adaptive statistical iterative reconstruction (ASIR) or model-based iterative reconstruction (MBIR). Given that the target transfer function (TTF) had already been shown to accurately express the system resolution even with non-linear algorithms, we decided to tune the NPW model observer, replacing the standard MTF by the TTF. The TTF was estimated using a custom-made phantom containing cylindrical inserts surrounded by water. The contrast differences between the inserts and water were plotted for each acquisition condition, and mathematical transformations were then performed, leading to the TTF. As expected, the first results showed a dependency of the TTF on image contrast and noise levels for both ASIR and MBIR. FBP also proved to be dependent on contrast and noise when using the lung kernel. Those results were then introduced into the NPW model observer. We observed an enhancement of the SNR every time we switched from FBP to ASIR to MBIR. IR algorithms greatly improve image quality, especially in low-dose conditions. Based on our results, the use of MBIR could lead to further dose reduction in several clinical applications.
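The figure of merit of the NPW model observer combines the detection task, system resolution, and noise. With the TTF substituted for the MTF as described above, a standard frequency-domain form (notation assumed here) is:

```latex
\[
\mathrm{SNR}^{2}_{\mathrm{NPW}} \;=\;
\frac{\left[\displaystyle\iint \lvert S(u,v)\rvert^{2}\,\mathrm{TTF}^{2}(u,v)\,du\,dv\right]^{2}}
     {\displaystyle\iint \lvert S(u,v)\rvert^{2}\,\mathrm{TTF}^{2}(u,v)\,\mathrm{NPS}(u,v)\,du\,dv},
\]
```

where $S(u,v)$ is the Fourier transform of the signal to be detected. Replacing the MTF by the TTF is what lets the contrast- and noise-dependent resolution of the iterative algorithms enter the figure of merit.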
Resumo:
The goal of this study was to investigate the impact of computing parameters and of the location of volumes of interest (VOI) on the calculation of the 3D noise power spectrum (NPS), in order to determine an optimal set of computing parameters and propose a robust method for evaluating the noise properties of imaging systems. Noise stationarity in noise volumes acquired with a water phantom on a 128-MDCT and a 320-MDCT scanner was analyzed in the spatial domain in order to define locally stationary VOIs. The influence of the computing parameters of the 3D NPS measurement (the sampling distances b(x,y,z), the VOI lengths L(x,y,z), the number of VOIs N(VOI), and the structured noise) was investigated to minimize measurement errors. The effect of the VOI locations on the NPS was also investigated. Results showed that the noise (standard deviation) varies more along the r-direction (phantom radius) than along the z-direction. A 25 × 25 × 40 mm³ VOI associated with DFOV = 200 mm (L(x,y,z) = 64, b(x,y) = 0.391 mm with a 512 × 512 matrix) and a first-order detrending method to reduce structured noise led to an accurate NPS estimation. The NPS estimated from off-centered small VOIs had a directional dependency, contrary to the NPS obtained from large VOIs located at the center of the volume or from small VOIs located on a concentric circle. This showed that the VOI size and location play a major role in the determination of the NPS when images are not stationary. This study emphasizes the need for consistent measurement methods to assess and compare image quality in CT.
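First-order detrending of the kind mentioned above can be sketched as a per-slice planar fit subtracted before the NPS computation. This is an illustration of the idea; the exact fit used in the study may differ:

```python
import numpy as np

def detrend_first_order(voi):
    """Subtract a first-order (planar) fit from each slice of a noise VOI.

    Removes low-frequency structured noise (e.g. slow shading) that would
    otherwise contaminate the low-frequency part of the NPS estimate.
    """
    voi = np.asarray(voi, dtype=float)
    nz, ny, nx = voi.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    # Columns: constant offset, x-slope, y-slope.
    A = np.column_stack([np.ones(nx * ny), xx.ravel(), yy.ravel()])
    out = np.empty_like(voi)
    for k in range(nz):
        coeffs, *_ = np.linalg.lstsq(A, voi[k].ravel(), rcond=None)
        out[k] = voi[k] - (A @ coeffs).reshape(ny, nx)
    return out
```

Applied to a VOI containing a pure planar gradient, the residual is zero; applied to real noise volumes, it leaves the stochastic noise while removing the slice-wise trend.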
Resumo:
Summary: This dissertation explores how stakeholder dialogue influences corporate processes, and speculates about the potential of this phenomenon - particularly with actors, like non-governmental organizations (NGOs) and other representatives of civil society, which have received growing attention against a backdrop of increasing globalisation and which have often been cast in an adversarial light by firms - as a source of learning and a spark for innovation in the firm. The study is set within the context of the introduction of genetically-modified organisms (GMOs) in Europe. Its significance lies in the fact that scientific developments and new technologies are being generated at an unprecedented rate in an era where civil society is becoming more informed, more reflexive, and more active in facilitating or blocking such new developments, which could have the potential to trigger widespread changes in economies, attitudes, and lifestyles, and address global problems like poverty, hunger, climate change, and environmental degradation. In the 1990s, companies using biotechnology to develop and offer novel products began to experience increasing pressure from civil society to disclose information about the risks associated with the use of biotechnology and GMOs, in particular. Although no harmful effects for humans or the environment have been factually demonstrated even to date (2008), this technology remains highly contested and its introduction in Europe catalysed major companies to invest significant financial and human resources in stakeholder dialogue. A relatively new phenomenon at the time, with little theoretical backing, dialogue was seen to reflect a move towards greater engagement with stakeholders, commonly defined as those "individuals or groups with which business interacts who have a 'stake', or vested interest, in the firm" (Carroll, 1993:22) with whom firms are seen to be inextricably embedded (Andriof & Waddock, 2002).
Regarding the organisation of this dissertation, Chapter 1 (Introduction) describes the context of the study and elaborates its significance for academics and business practitioners as an empirical work embedded in a sector at the heart of the debate on corporate social responsibility (CSR). Chapter 2 (Literature Review) traces the roots and evolution of CSR, drawing on Stakeholder Theory, Institutional Theory, Resource Dependence Theory, and Organisational Learning to establish what has already been developed in the literature regarding the stakeholder concept, motivations for engagement with stakeholders, the corporate response to external constituencies, and outcomes for the firm in terms of organisational learning and change. I used this review of the literature to guide my inquiry and to develop the key constructs through which I viewed the empirical data that was gathered. In this respect, concepts related to how the firm views itself (as a victim, follower, leader), how stakeholders are viewed (as a source of pressure and/or threat; as an asset: current and future), corporate responses (in the form of buffering, bridging, boundary redefinition), and types of organisational learning (single-loop, double-loop, triple-loop) and change (first order, second order, third order) were particularly important in building the key constructs of the conceptual model that emerged from the analysis of the data. Chapter 3 (Methodology) describes the methodology that was used to conduct the study, affirms the appropriateness of the case study method in addressing the research question, and describes the procedures for collecting and analysing the data. Data collection took place in two phases, extending from August 1999 to October 2000 and from May to December 2001, which functioned as 'snapshots' in time of the three companies under study.
The data was systematically analysed and coded using ATLAS/ti, a qualitative data analysis tool, which enabled me to sort, organise, and reduce the data into a manageable form. Chapter 4 (Data Analysis) contains the three cases that were developed (anonymised as Pioneer, Helvetica, and Viking). Each case is presented in its entirety (constituting a 'within-case' analysis), followed by a 'cross-case' analysis, backed up by extensive verbatim evidence. Chapter 5 presents the research findings, outlines the study's limitations, describes managerial implications, and offers suggestions for where more research could elaborate the conceptual model developed through this study, as well as suggestions for additional research in areas where managerial implications were outlined. References and Appendices are included at the end. This dissertation results in the construction and description of a conceptual model, grounded in the empirical data and tied to existing literature, which portrays a set of elements and relationships deemed important for understanding the impact of stakeholder engagement for firms in terms of organisational learning and change. This model suggests that corporate perceptions about the nature of stakeholders influence the perceived value of stakeholder contributions. When stakeholders are primarily viewed as a source of pressure or threat, firms tend to adopt a reactive/defensive posture in an effort to manage stakeholders and protect the firm from sources of outside pressure - behaviour consistent with Resource Dependence Theory, which suggests that firms try to gain control over external threats by focussing on the relevant stakeholders on whom they depend for critical resources, and try to reverse the control potentially exerted by external constituencies by trying to influence and manipulate these valuable stakeholders.
In situations where stakeholders are viewed as a current strategic asset, firms tend to adopt a proactive/offensive posture in an effort to tap stakeholder contributions and connect the organisation to its environment - behaviour consistent with Institutional Theory, which suggests that firms try to ensure the continuing license to operate by internalising external expectations. In instances where stakeholders are viewed as a source of future value, firms tend to adopt an interactive/innovative posture in an effort to reduce or widen the embedded system and bring stakeholders into systems of innovation and feedback - behaviour consistent with the literature on Organisational Learning, which suggests that firms can learn how to optimize their performance as they develop systems and structures that are more adaptable and responsive to change. The conceptual model moreover suggests that the perceived value of stakeholder contribution drives corporate aims for engagement, which can be usefully categorised as dialogue intentions spanning a continuum running from low-level to high-level to very-high-level. This study suggests that activities aimed at disarming critical stakeholders ('manipulation'), providing guidance and correcting misinformation ('education'), being transparent about corporate activities and policies ('information'), alleviating stakeholder concerns ('placation'), and accessing stakeholder opinion ('consultation') represent low-level dialogue intentions and are experienced by stakeholders as asymmetrical, persuasive, compliance-gaining activities that are not in line with 'true' dialogue. This study also finds evidence that activities aimed at redistributing power ('partnership'), involving stakeholders in internal corporate processes ('participation'), and demonstrating corporate responsibility ('stewardship') reflect high-level dialogue intentions.
This study additionally finds evidence that building and sustaining high-quality, trusted relationships which can meaningfully influence organisational policies inclines a firm towards the type of interactive, proactive processes that underpin the development of sustainable corporate strategies. Dialogue intentions are related to the type of corporate response: low-level intentions can lead to buffering strategies; high-level intentions can underpin bridging strategies; very-high-level intentions can incline a firm towards boundary redefinition. The nature of the corporate response (which encapsulates a firm's posture towards stakeholders, demonstrated by the level of dialogue intention and the firm's strategy for dealing with stakeholders) favours the type of learning and change experienced by the organisation. This study indicates that buffering strategies, where the firm attempts to protect itself against external influences and carry out its existing strategy, typically lead to single-loop learning, whereby the firm learns how to perform better within its existing paradigm and, at most, improves the performance of the established system - an outcome associated with first-order change. Bridging responses, where the firm adapts organisational activities to meet external expectations, typically lead a firm to acquire new behavioural capacities characteristic of double-loop learning, whereby insights and understanding are uncovered that are fundamentally different from existing knowledge and where stakeholders are brought into problem-solving conversations that enable them to influence corporate decision-making to address shortcomings in the system - an outcome associated with second-order change.
Boundary redefinition suggests that the firm engages in triple-loop learning, where the firm changes relations with stakeholders in profound ways, considers problems from a whole-system perspective, examines the deep structures that sustain the system, and produces innovation to address chronic problems and develop new opportunities - an outcome associated with third-order change. This study supports earlier theoretical and empirical studies {e.g. Weick's (1979, 1985) work on self-enactment; Maitlis & Lawrence's (2007), Maitlis' (2005), and Weick et al.'s (2005) work on sensegiving and sensemaking in organisations; Brickson's (2005, 2007) and Scott & Lane's (2000) work on organisational identity orientation}, which indicate that corporate self-perception is a key underlying factor driving the dynamics of organisational learning and change. Such theorizing has important implications for managerial practice; namely, that a company which perceives itself as a 'victim' may be highly inclined to view stakeholders as a source of negative influence, and would therefore be potentially unable to benefit from the positive influence of engagement. Such a self-perception can blind the firm from seeing stakeholders in a more positive, contributing light, which suggests that such firms may not be inclined to embrace external sources of innovation and learning, as they are focussed on protecting the firm against disturbing environmental influences (through buffering), and remain more likely to perform better within an existing paradigm (single-loop learning). By contrast, a company that perceives itself as a 'leader' may be highly inclined to view stakeholders as a source of positive influence.
On the downside, such a firm might have difficulty distinguishing when stakeholder contributions are less pertinent, as it is deliberately more open to elements in its operating environment (including stakeholders) as potential sources of learning and change, and is oriented towards creating space for fundamental change (through boundary redefinition), opening issues to entirely new ways of thinking and addressing issues from a whole-system perspective. A significant implication of this study is that potentially only those companies that see themselves as leaders are ultimately able to tap the innovation potential of stakeholder dialogue.
Resumo:
BACKGROUND: So far, none of the existing methods on Murray's law deal with the non-Newtonian behavior of blood flow, although the non-Newtonian approach to blood flow modelling looks more accurate. MODELING: In the present paper, Murray's law, which is applicable to an arterial bifurcation, is generalized to a non-Newtonian blood flow model (power-law model). When the vessel size reaches the capillary limit, blood can be modeled using a non-Newtonian constitutive equation. Two different constraints are assumed in addition to the pumping power: a volume constraint or a surface constraint (related to the internal surface of the vessel). For the sake of generality, the relationships are given for an arbitrary number of daughter vessels. It is shown that for a cost function including the volume constraint, classical Murray's law remains valid (i.e. ΣR^c = const. with c = 3 is verified, independently of n, the dimensionless index in the viscosity equation; R being the radius of the vessel). On the contrary, for a cost function including the surface constraint, different values of c may be calculated depending on the value of n. RESULTS: We find that c varies for blood from 2.42 to 3 depending on the constraint and the fluid properties. For the Newtonian model, the surface constraint leads to c = 2.5. The cost function (based on the surface constraint) can be related to entropy generation by dividing it by the temperature. CONCLUSION: It is demonstrated that the entropy generated in all the daughter vessels is greater than the entropy generated in the parent vessel. Furthermore, it is shown that the difference in entropy generation between the parent and daughter vessels is smaller for a non-Newtonian fluid than for a Newtonian fluid.
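In generic notation (assumed here, not taken verbatim from the paper), the generalized law for a parent vessel of radius $R_0$ branching into $N$ daughter vessels of radii $R_i$ reads:

```latex
\[
R_0^{\,c} \;=\; \sum_{i=1}^{N} R_i^{\,c},
\]
```

with $c = 3$ under the volume constraint (classical Murray's law, independent of the power-law index $n$), while under the surface constraint $c$ depends on $n$; the abstract reports values of $c$ between $2.42$ and $3$ across constraints and fluid properties, with $c = 2.5$ for the Newtonian surface-constrained case.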