60 results for Organization, vision, strategy, learning, monitoring
Abstract:
Aim. - In type 1 diabetic patients (T1DM), nocturnal hypoglycaemias (NH) are a serious complication of treatment; self-monitoring of blood glucose (SMBG) is recommended to detect them. However, the majority of NH remain undetected by an occasional SMBG performed during the night. An alternative strategy is continuous glucose monitoring (CGMS), which retrospectively shows the glycaemic profile. The aims of this retrospective study were to evaluate the true incidence of NH in T1DM, the best SMBG time to predict NH, the relationship between morning hyperglycaemia and NH (Somogyi phenomenon), and the utility of CGMS in reducing NH. Methods. - Eighty-eight T1DM patients who underwent a CGMS exam were included. Indications for CGMS evaluation, nocturnal and diurnal hypoglycaemias, and the correlation between NH and morning hyperglycaemias were recorded. The efficiency of CGMS in reducing suspected NH was evaluated after 6-9 months. Results. - The prevalence of NH was 67% (32% of them unsuspected). A measured hypoglycaemia at bedtime (22-24 h) had a sensitivity of 37% to predict NH (OR = 2.37, P = 0.001), while a single measurement ≤ 4 mmol/l at 3 a.m. had a sensitivity of 43% (OR = 4.60, P < 0.001). NH were not associated with morning hyperglycaemias but with morning hypoglycaemias (OR = 3.95, P < 0.001). After 6-9 months, clinical suspicions of NH decreased from 60% to 14% (P < 0.001). Conclusion. - NH were highly prevalent and often undetected. An SMBG at bedtime that detects hypoglycaemia has a sensitivity almost equal to that of a 3 a.m. measurement and should be preferred because it is easier to perform. The Somogyi phenomenon was not observed. CGMS is useful to reduce the risk of NH in 75% of patients.
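As an aside for readers unfamiliar with the reported statistics: the sensitivity and odds ratio both derive from a 2x2 table of bedtime test result versus nocturnal outcome. The Python sketch below uses hypothetical counts chosen only to approximate the published figures (the study's raw table is not given in the abstract).

```python
# Sketch: sensitivity and odds ratio of a bedtime hypoglycaemia as a
# predictor of nocturnal hypoglycaemia (NH). The 2x2 counts are
# hypothetical, chosen only to approximate the reported values.
tp = 22  # bedtime hypo AND nocturnal hypo
fn = 37  # no bedtime hypo, but nocturnal hypo occurred
fp = 6   # bedtime hypo, no nocturnal hypo
tn = 23  # neither

sensitivity = tp / (tp + fn)        # P(bedtime hypo | NH) ~ 0.37
odds_ratio = (tp * tn) / (fp * fn)  # cross-product ratio ~ 2.3

print(f"sensitivity = {sensitivity:.2f}, OR = {odds_ratio:.2f}")
```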
Abstract:
The expansion of a recovering population - whether re-introduced or spontaneously returning - is shaped by (i) biological (intrinsic) factors such as the land tenure system or dispersal, (ii) the distribution and availability of resources (e.g. prey), (iii) habitat and landscape features, and (iv) human attitudes and activities. In order to develop efficient conservation and recovery strategies, we need to understand all these factors, predict the potential distribution, and explore ways to reach it. An increased number of lynx in the north-western Swiss Alps in the nineties led to a new controversy about the return of this cat. When the large carnivores were given legal protection in many European countries, most organizations and individuals promoting their protection did not foresee the consequences. Management plans describing how to handle conflicts with large predators are needed to find a balance between "overabundance" and extinction. Wildlife and conservation biologists need to evaluate the various threats confronting populations so that adequate management decisions can be taken. I developed a GIS probability model for the lynx, based on habitat information and radio-telemetry data from the Swiss Jura Mountains, in order to predict the potential distribution of the lynx in this mountain range, which is presently only partly occupied. Three of the 18 variables tested for each square kilometre (describing land use, vegetation, and topography) qualified to predict the probability of lynx presence. The resulting map was evaluated with data from dispersing subadult lynx. Young lynx that were not able to establish home ranges in what was identified as good lynx habitat did not survive their first year of independence, whereas the only one that died in good lynx habitat was illegally killed. Radio-telemetry fixes are often used as input data to calibrate habitat models. Radio-telemetry is the only way to gather accurate and unbiased data on habitat use by elusive larger terrestrial mammals. However, it is time-consuming and expensive, and can therefore only be applied in limited areas. Habitat models extrapolated over large areas can in turn be problematic, as habitat characteristics and availability may change from one area to the other. I analysed the predictive power of Ecological Niche Factor Analysis (ENFA) in Switzerland with the lynx as focal species. According to my results, the optimal sampling strategy to predict species distribution in an Alpine area lacking available data would be to pool presence cells from contrasted regions (Jura Mountains, Alps), whereas in regions with a low ecological variance (Jura Mountains), only local presence cells should be used for the calibration of the model. Dispersal influences the dynamics and persistence of populations and the distribution and abundance of species, and gives communities and ecosystems their characteristic texture in space and time. Between 1988 and 2001, the spatio-temporal behaviour of subadult Eurasian lynx in two re-introduced populations in Switzerland was studied, based on 39 juvenile lynx, 24 of which were radio-tagged, to understand the factors influencing dispersal. Subadults become independent from their mothers at the age of 8-11 months. No sex bias was detected in either the dispersal rate or the distance moved. Lynx are conservative dispersers compared to bears and wolves, and settled within or close to known lynx occurrences.
Dispersal distances reached in the high-density lynx population - shorter than those reported in other Eurasian lynx studies - are limited by habitat restrictions hindering connections with neighbouring metapopulations. I postulated that high lynx density would lead to an expansion of the population and validated my predictions with data from the north-western Swiss Alps, where a strong increase in lynx abundance took place around 1995. The general hypothesis that high population density will foster the expansion of the population was not confirmed. This has consequences for the re-introduction and recovery of carnivores in a fragmented landscape. Establishing a strong source population in one place might not be an optimal strategy. Rather, population nuclei should be founded in several neighbouring patches. Exchange between established neighbouring subpopulations will later take place, as adult lynx show a higher propensity to cross barriers than subadults. To estimate the potential population size of the lynx in the Jura Mountains and to assess possible corridors between this population and adjacent areas, I adapted a habitat probability model for lynx distribution in the Jura Mountains with new environmental data and extrapolated it over the entire mountain range. The model predicts a breeding population of 74-101 individuals, or 51-79 individuals when continuous habitat patches < 50 km2 are disregarded. The Jura Mountains could one day be part of a metapopulation, as potential corridors exist to the adjoining areas (Alps, Vosges Mountains, and Black Forest). Monitoring of population size, spatial expansion, and genetics in the Jura Mountains must be continued, as the status of the population is still critical. ENFA was used to predict the potential distribution of lynx in the Alps. The resulting model divided the Alps into 37 suitable habitat patches ranging from 50 to 18,711 km2, covering a total area of about 93,600 km2. When using the range of lynx densities found in field studies in Switzerland, the Alps could host a population of 961 to 1,827 residents. The results of the cost-distance analysis revealed that all patches were within reach of dispersing lynx, as the connection costs were within the range of dispersal costs of radio-tagged subadult lynx moving through unfavourable habitat. Thus, the whole of the Alps could one day be considered a metapopulation. But experience suggests that only few dispersers will cross unsuitable areas and barriers. This low migration rate may seldom allow the spontaneous foundation of new populations in unsettled areas. As an alternative to natural dispersal, the artificial transfer of individuals across barriers should be considered. Wildlife biologists can play a crucial role in developing adaptive management experiments to help managers learn by trial. The case of the lynx in Switzerland is a good example of fruitful cooperation between wildlife biologists, managers, decision makers and politicians in an adaptive management process. This cooperation resulted in a Lynx Management Plan, implemented in 2000 and updated in 2004, which gives the cantons directives on how to handle lynx-related problems. This plan has already been put into practice, for example with regard to the translocation of lynx into unoccupied areas.
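The potential-population figures above are essentially density-times-area arithmetic over the suitable habitat patches. Below is a minimal Python sketch of that calculation; the patch list is hypothetical and the density bounds are assumptions back-derived to be roughly consistent with the reported totals (37 ENFA patches, ~93,600 km2, 961-1,827 residents), not the thesis' actual inputs.

```python
# Sketch: potential lynx population from habitat area and density.
# Patch areas and density bounds are assumptions for illustration.
patches_km2 = [18711, 9300, 6200, 4100, 850, 120, 50]  # hypothetical
density_low, density_high = 1.0, 2.0  # residents per 100 km2 (assumed)

viable = [a for a in patches_km2 if a >= 50]  # drop sub-threshold patches
area = sum(viable)
print(f"{area} km2 could host "
      f"{area / 100 * density_low:.0f}-{area / 100 * density_high:.0f} residents")
```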
Abstract:
The hydrogeological properties and responses of a productive aquifer in northeastern Switzerland are investigated. For this purpose, 3D crosshole electrical resistivity tomography (ERT) is used to define the main lithological structures within the aquifer (through static inversion) and to monitor water infiltration from an adjacent river. During precipitation events and subsequent river flooding, the river water resistivity increases. As a consequence, the electrical characteristics of the infiltrating water can be used as a natural tracer to delineate preferential flow paths and flow velocities. The focus here is primarily on the experimental installation, the data-collection strategy, and the structural characterization of the site, together with a brief overview of the ERT monitoring results. The monitoring system comprises 18 boreholes, each equipped with 10 electrodes straddling the entire thickness of the gravel aquifer. A multi-channel resistivity system, programmed to cycle through various four-point electrode configurations of the 180 electrodes in a rolling sequence, allows approximately 15,500 apparent resistivity values to be measured every 7 h on a continuous basis. The 3D static ERT inversion of data acquired under stable hydrological conditions provides a base model for future time-lapse inversion studies and a means to investigate the resolving capability of our acquisition scheme. In particular, it enables definition of the main lithological structures within the aquifer. The final ERT static model delineates a relatively high-resistivity, low-porosity, intermediate-depth layer throughout the investigated aquifer volume that is consistent with results from well logging and from seismic and radar tomography models. The next step will be to define and implement an appropriate time-lapse ERT inversion scheme using the river water as a natural tracer. The main challenge will be to separate the superposed time-varying effects of water table height, temperature, and salinity variations associated with the infiltrating water.
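To give a concrete sense of the acquisition bookkeeping, the Python sketch below enumerates four-point (A, B, M, N) configurations for an 18-borehole x 10-electrode array. The actual rolling sequence used at the site is not described in the abstract, so this simple cross-borehole dipole-dipole enumeration is only an assumed illustration.

```python
# Sketch: enumerating four-point (A, B, M, N) configurations for a
# crosshole ERT array of 18 boreholes x 10 electrodes. This scheme is
# an assumption for illustration; the site's actual rolling sequence
# (~15,500 apparent-resistivity values per 7 h cycle) is richer.
from itertools import combinations

N_BOREHOLES, N_PER_HOLE = 18, 10

def quadripoles():
    """Current dipole (A, B) in one borehole, potential dipole (M, N)
    at the same depths in another borehole."""
    for b_cur, b_pot in combinations(range(N_BOREHOLES), 2):
        for z in range(N_PER_HOLE - 1):
            yield ((b_cur, z), (b_cur, z + 1), (b_pot, z), (b_pot, z + 1))

schedule = list(quadripoles())
print(len(schedule), "configurations")  # 153 borehole pairs x 9 = 1377
```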
Abstract:
There is a sustained controversy in the literature about the role and utility of self-monitoring of blood glucose (SMBG) in type 2 diabetes. The study results in this field do not provide truly useful clues for integrating SMBG into the follow-up of the individual patient, because they are based on a misconception of SMBG: it is studied as if it were a medical treatment whose effect on glycaemic control can be isolated. However, SMBG has no such intrinsic effect. It gains its purpose only as an inseparable component of a comprehensive and structured educational strategy. To be appropriate, this strategy cannot be based solely on the health care professionals' view of diabetes. Rather, it has to be tailored to the individual patient's needs through an ongoing process of shared reflection with the patient.
Abstract:
Plants are essential for human society: our daily food, construction materials and sustainable energy are derived from plant biomass. Yet, despite this importance, the multiple developmental aspects of plants are still poorly understood and represent a major challenge for science. With the emergence of high-throughput devices for genome sequencing and high-resolution imaging, data have never been so easy to collect, generating huge amounts of information. Computational analysis is one way to integrate those data and to reduce the apparent complexity towards an appropriate scale of abstraction, with the aim of eventually providing new answers and directing further research. This is the motivation behind this thesis work, i.e. the application of descriptive and predictive analytics, combined with computational modeling, to problems that revolve around morphogenesis at the subcellular and organ scale. One of the goals of this thesis is to elucidate how the auxin-brassinosteroid phytohormone interaction determines cell growth in the root apical meristem of Arabidopsis thaliana (Arabidopsis), the reference plant model for molecular studies. The pertinent information about signaling protein relationships was obtained from the literature to reconstruct the entire hormonal crosstalk. Due to a lack of quantitative information, we employed a qualitative modeling formalism.
This work allowed us to confirm the synergistic effect of the hormonal crosstalk on cell elongation, to explain some of our paradoxical mutant phenotypes, and to predict a novel interaction between the BREVIS RADIX (BRX) protein and the transcription factor MONOPTEROS (MP), which turned out to be critical for the maintenance of the root meristem. On the same subcellular scale, another study in the monocot model Brachypodium distachyon (Brachypodium) revealed an alternative wiring of the auxin-ethylene crosstalk as compared to Arabidopsis. In the latter, increasing interference with auxin biosynthesis results in progressively shorter roots. By contrast, a hypomorphic Brachypodium mutant in an enzyme of the auxin biosynthesis pathway, isolated in this study, displayed a dramatically longer seminal root. Our morphometric analysis confirmed that more anisotropic cells (thinner and longer) are principally responsible for the mutant root phenotype. Further characterization pointed towards an inverted regulatory logic in the relation between ethylene signaling and auxin biosynthesis in Brachypodium as compared to Arabidopsis, which explains the phenotypic discrepancy. Finally, the morphometric analysis of hypocotyl secondary growth applied in this study was performed with the image-processing pipeline of our quantitative histology method. During its secondary growth, the hypocotyl reorganizes its primary bilateral symmetry into a radial symmetry of highly specialized tissues comprising several thousand cells, starting from a few dozen. At such a scale, however, observation is only possible in thin cross-sections, severely hampering a comprehensive analysis of the morphodynamics involved. Our quantitative histology strategy overcomes this limitation. We acquired hypocotyl cross-sections from tiled high-resolution images and extracted their information content using custom high-throughput image processing and segmentation. Coupled with an automated cell-type recognition algorithm, this allows precise quantitative characterization of vascular development and reveals developmental patterns that were not evident from visual inspection, for example the constant spacing of the phloem poles. Further analyses indicated a change in the growth anisotropy of cambial and phloem cells, which appeared in phase with the expansion of the xylem. Combining genetic tools and computational modeling, we showed that the reorientation of the growth-anisotropy axis of peripheral tissue layers only occurs when the growth rate of the central tissue is higher than that of the periphery. This was confirmed by calculating the ratio of xylem to phloem growth rates throughout secondary growth: high ratios are indeed observed, concomitant with the homogenization of cambium anisotropy. These results suggest a self-organization mechanism, promoted by a gradient of division in the cambium, that generates a pattern of mechanical constraints. This, in turn, reorients the growth anisotropy of peripheral tissues to sustain secondary growth.
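For readers unfamiliar with qualitative (logical) modeling of signaling networks, the toy Boolean network below illustrates the general idea in Python. Its update rules are hypothetical placeholders, not the published Arabidopsis network; BRX and MP appear only because the abstract names them.

```python
# Sketch: a toy Boolean network in the spirit of the qualitative
# formalism used for the auxin-brassinosteroid crosstalk. The rules
# below are hypothetical placeholders, NOT the published network.
rules = {
    "BR_signal":  lambda s: s["BR"],
    "AUX_signal": lambda s: s["auxin"],
    "BRX":        lambda s: s["AUX_signal"] and s["BR_signal"],
    "MP":         lambda s: s["AUX_signal"] and s["BRX"],
    "growth":     lambda s: s["MP"] or (s["BR_signal"] and s["AUX_signal"]),
}

def simulate(inputs, steps=10):
    """Synchronously update all nodes until a fixed point is reached,
    keeping the hormone inputs clamped to their given values."""
    state = {**{k: False for k in rules}, **inputs}
    for _ in range(steps):
        nxt = {k: bool(f(state)) for k, f in rules.items()}
        nxt.update(inputs)
        if nxt == state:
            break
        state = nxt
    return state

for auxin, br in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    out = simulate({"auxin": bool(auxin), "BR": bool(br)})
    print(auxin, br, "-> growth:", out["growth"])  # True only for (1, 1)
```

In this toy version, growth requires both hormones, mimicking the synergy the thesis reports; probing mutants would amount to clamping individual nodes to False.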
Abstract:
Western governments have spent substantial amounts of money to facilitate the integration of information and communication technologies (ICT) into education, hoping to find an economical solution to the thorny equation that can be summarized by the famous formula "do more and better with less". Despite these efforts, and notwithstanding the undeniable improvement of the infrastructure and of the quality of service, this goal is far from being reached. Although we think it illusory to expect technology, all by itself, to solve problems of educational quality, we firmly take the view that it can contribute to improving learning conditions and feed the pedagogical reflection that every teacher should conduct before teaching. In this framework, and convinced that open and distance learning offers considerable advantages provided we think teaching "out of the box", we became interested in courseware development, positioned at the intersection of didactics, cognitive science, and computing. Hoping to propose a realistic and simple solution that could help develop, update, integrate and sustain courseware, we got involved in concrete projects. As we gained field experience, we noticed that (i) the quality of flexible and distance-learning modules is still very disappointing, among other reasons because the added value that technology can bring is, in our view, not exploited as it could or should be, and (ii) to succeed, a project must not only bring a useful answer to a real need but also be managed efficiently and be "championed". With the aim of proposing a project-management approach suited to flexible and distance learning, we first examined the characteristics of this type of project. We then analyzed existing project methodologies in the hope of being able to use one of them, or an adequate combination of those closest to our needs. Empirically, and by successive iterations, we then defined a pragmatic project-management approach and contributed to building decision-support "cards" that facilitate its implementation. We describe some of its actors, insisting particularly on the pedagogical engineer, viewed as an orchestra conductor, whom we consider one of the key success factors of our approach. Finally, we validated our approach a posteriori by reviewing the course of four flexible and distance-learning projects in which we participated and which are representative of projects encountered in the university environment. In conclusion, we believe that the implementation of our approach, together with the availability of computerized decision-support cards, constitutes an important asset and should notably make it easier to measure the real impacts of technologies on (i) the evolution of teaching practices, (ii) the organization, and (iii) the quality of teaching. Our approach can also serve as a springboard for establishing a quality-assurance process specific to flexible and distance learning. Further research on the real flexibilization of learning and on what technologies bring to learners could then be conducted, based on metrics that remain to be defined.
Abstract:
Ignoring irrelevant visual information aids efficient interaction with task environments. We studied how people, after practice, start to ignore the irrelevant aspects of stimuli. For this we focused on how information reduction transfers to rarely practised and novel stimuli. In Experiment 1, we compared competing mathematical models on how people cease to fixate on irrelevant parts of stimuli. Information reduction occurred at the same rate for frequent, infrequent, and novel stimuli. Once acquired with some stimuli, it was applied to all. In Experiment 2, simplification of task processing also occurred in a once-for-all manner when spatial regularities were ruled out so that people could not rely on learning which screen position is irrelevant. Apparently, changes in eye movements were an effect of a once-for-all strategy change rather than a cause of it. Overall, the results suggest that participants incidentally acquired knowledge about regularities in the task material and then decided to voluntarily apply it for efficient task processing. Such decisions should be incorporated into accounts of information reduction and other theories of strategy change in skill acquisition.
Abstract:
OBJECTIVE: This study aimed to survey current practices in European epilepsy monitoring units (EMUs), with emphasis on safety issues. METHODS: A 37-item questionnaire investigating the characteristics and organization of EMUs, including measures for the prevention and management of seizure-related serious adverse events (SAEs), was distributed to all identified European EMUs plus one located in Israel (N=150). RESULTS: Forty-eight (32%) EMUs, located in 18 countries, completed the questionnaire. EMUs have 1-2 beds in 43% of cases, 3-4 in 34%, and 5-6 in 19%; they have 1-2 staff physicians in 32% of cases, 3-4 in 34%, and 5-6 in 19%. Personnel operating in EMUs include epileptologists (in 69% of EMUs), clinical neurophysiologists trained in epilepsy (46%), child neurologists (35%), neurology and clinical neurophysiology residents (46% and 8%, respectively), and neurologists not trained in epilepsy (27%). In 20% of EMUs, observation of patients is only intermittent or limited to daytime; observation is primarily carried out by neurophysiology technicians and/or nurses (in 71% of EMUs) or by patients' relatives (in 40% of EMUs). Automatic detection systems are used for seizures in 15% of EMUs, for body movements in 8%, for oxygen desaturation in 33%, and for ECG abnormalities in 17%. Protocols for the management of acute seizures are lacking in 27% of EMUs, for status epilepticus in 21%, and for postictal psychoses in 87%. Injury prevention consists of bed protections in 96% of EMUs, whereas antisuffocation pillows are employed in 21%, and environmental protections in monitoring rooms and in bathrooms are implemented in 38% and 25% of EMUs, respectively. The most common SAEs were status epilepticus (reported by 79% of EMUs), injuries (73%), and postictal psychoses (67%). CONCLUSIONS: All EMUs have faced different types of SAEs. Wide variation in practice patterns and the lack of protocols and of precautions to ensure patients' safety might promote the occurrence and severity of SAEs. Our findings highlight the need for standardized, shared protocols for the effective and safe management of patients in EMUs.
Abstract:
BACKGROUND: The need to contextualise wastewater-based figures about illicit drug consumption by comparing them with other indicators has been stressed by numerous studies. The objective of the present study was to further investigate the possibility of combining wastewater data with conventional statistics, in order to assess the reliability of the former method and obtain a more balanced picture of illicit drug consumption in the investigated area. METHODS: Wastewater samples were collected between October 2013 and July 2014 in the metropolitan area of Lausanne (226,000 inhabitants), Switzerland. Loads of methadone, its metabolite 2-ethylidene-1,5-dimethyl-3,3-diphenylpyrrolidine (EDDP), the exclusive metabolite of heroin, 6-monoacetylmorphine (6-MAM), and morphine were used to estimate the amounts of methadone and heroin consumed. RESULTS: Methadone consumption estimated from EDDP was in agreement with expectations. Heroin estimates based on 6-MAM loads were inconsistent. Estimates obtained from morphine loads, combined with prescription/sales data, were in agreement with figures derived from syringe distribution data and general population surveys. CONCLUSIONS: The results obtained for methadone allowed us to assess the reliability of the selected sampling strategy, supporting its ability to capture the consumption of a small cohort (i.e., 743 patients). Using morphine as a marker, in combination with prescription/sales data, estimates in accordance with other indicators of heroin use were obtained. Combining different sources of data strengthened the results and suggested that the different indicators (i.e., administration route, average dosage and number of consumers) combine to depict a realistic representation of the phenomenon in the investigated area. Heroin consumption was estimated at approximately 13 g/day (118 g/day at street level).
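The back-calculation behind such estimates is a short formula: divide the measured metabolite load by its excretion fraction and rescale by the parent-to-metabolite molar-mass ratio. Below is a minimal Python sketch; the load and excretion fraction are assumed values for illustration (real studies take these parameters from the pharmacokinetic literature), not figures from this study.

```python
# Sketch: back-calculating parent-drug consumption from a measured
# metabolite load in wastewater. Parameter values are assumed for
# illustration only.
def consumption_g_per_day(metabolite_load_g, excretion_fraction,
                          mw_parent, mw_metabolite):
    """Estimated parent drug consumed (g/day) in the catchment."""
    return metabolite_load_g / excretion_fraction * (mw_parent / mw_metabolite)

eddp_load = 20.0  # g/day measured at the treatment plant (assumed)
methadone = consumption_g_per_day(
    eddp_load,
    excretion_fraction=0.25,  # fraction of dose excreted as EDDP (assumed)
    mw_parent=309.4,          # methadone, g/mol
    mw_metabolite=277.4,      # EDDP, g/mol
)
print(f"~{methadone:.0f} g/day methadone consumed")
```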
Abstract:
After incidentally learning about a hidden regularity, participants can either continue to solve the task as instructed or, alternatively, apply a shortcut. Past research suggests that the amount of conflict implied by adopting a shortcut biases the decision for vs. against continuing instruction-coherent task processing. We explored whether this decision might transfer from one incidental learning task to the next. Theories that conceptualize strategy change in incidental learning as a learning-plus-decision phenomenon suggest that high demands to adhere to instruction-coherent task processing in Task 1 will impede shortcut usage in Task 2, whereas low control demands will foster it. We sequentially applied two established incidental learning tasks differing in stimuli, responses and hidden regularity (the alphabet verification task followed by the serial reaction task, SRT). While some participants experienced a complete redundancy in the task material of the alphabet verification task (low demands to adhere to instructions), for others the redundancy was only partial, so that shortcut application would have led to errors (high demands to follow instructions). The low-control-demand condition showed the strongest usage of the fixed and repeating sequence of responses in the SRT. The transfer results are in line with the learning-plus-decision view of strategy change in incidental learning, rather than with resource theories of self-control.
Abstract:
The Rare Cancer Network (RCN) was formed in the early 1990s to create a global network that could pool knowledge and resources in the study of rare malignancies whose infrequency prevents their investigation in prospective clinical trials. To date, the RCN has initiated 74 studies resulting in 46 peer-reviewed publications. The First International Symposium of the Rare Cancer Network took place in Nice in March 2014. Status updates and proposals for new studies were heard for fifteen topics. Ongoing studies continue for cardiac sarcomas, thyroid cancers, glomus tumors, and adult medulloblastomas. New proposals were presented at the symposium for primary hepatic lymphoma, solitary fibrous tumors, Rosai-Dorfman disease, tumors of the ampulla of Vater, salivary gland tumors, anorectal melanoma, nuclear protein in testis (NUT) midline carcinoma, pulmonary lymphoepithelioma-like carcinoma, adenoid cystic carcinoma of the trachea, osteosarcomas of the mandible, and extra-cranial hemangiopericytoma. This manuscript presents the abstracts of those proposals and updates on ongoing studies, as well as a brief summary of the vision and future of the RCN.
Abstract:
The variability observed in drug exposure has a direct impact on the overall response to a drug. The largest part of the variability between dose and drug response resides in the pharmacokinetic phase, i.e. in the dose-concentration relationship. Among the possibilities offered to clinicians, Therapeutic Drug Monitoring (TDM; the monitoring of drug concentration measurements) is one of the useful tools to guide pharmacotherapy. TDM aims at optimizing treatments by individualizing dosage regimens based on blood drug concentration measurements. Bayesian calculations, relying on a population pharmacokinetic approach, currently represent the gold-standard TDM strategy. However, this approach requires expertise and computational assistance, which limits its wide implementation in routine patient care. The overall objective of this thesis was to implement robust tools to bring Bayesian TDM to clinicians in modern routine patient care. To that end, the aims were (i) to elaborate an efficient and ergonomic computer tool for Bayesian TDM, EzeCHieL; (ii) to provide algorithms for Bayesian forecasting of drug concentrations and for software validation, relying on population pharmacokinetics; and (iii) to address relevant issues encountered in clinical practice, with a focus on neonates and drug adherence. First, the current state of existing software was reviewed, which allowed us to establish specifications for the development of EzeCHieL. Then, in close collaboration with software engineers, the fully integrated EzeCHieL software was developed. EzeCHieL provides population-based predictions and Bayesian forecasting through an easy-to-use interface. It makes it possible to assess how expected an observed concentration is compared to the whole population (via percentiles), to assess the suitability of the predicted concentration relative to the target concentration, and to provide dosing adjustments. It thus allows both a priori and a posteriori Bayesian individualization of drug dosing. Implementing Bayesian methods requires characterizing drug disposition and quantifying variability through a population approach. Population pharmacokinetic analyses were performed and Bayesian estimators were provided for candidate drugs in the populations of interest: anti-infective drugs administered to neonates (gentamicin and imipenem). The developed models were implemented in EzeCHieL and also served as a validation tool, EzeCHieL concentration predictions being compared against predictions from the reference software (NONMEM®). The models used need to be adequate and reliable; extrapolation from adults or children to neonates, for instance, is not possible. This work therefore proposes models for neonates based on the concept of developmental pharmacokinetics. Patient adherence is also an important concern for model development and for a successful outcome of pharmacotherapy. A last study assesses the impact of routine patient adherence measurement on model definition and TDM interpretation. In conclusion, our results offer solutions to assist clinicians in interpreting blood drug concentrations and to improve the appropriateness of drug dosing in routine clinical practice.
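As background on what "Bayesian forecasting" computes: given a population model and one or more measured concentrations, a maximum a posteriori (MAP) estimate balances the fit to the observation against the population prior. The Python sketch below shows this for a one-compartment IV-bolus model; every parameter value is assumed for illustration and none of it reflects EzeCHieL's actual algorithms or models.

```python
# Sketch: MAP Bayesian estimation of individual clearance for a
# one-compartment IV-bolus model, the kind of calculation behind
# Bayesian TDM. All values below are assumed for illustration.
import math
from scipy.optimize import minimize_scalar

CL_pop, omega_cl = 5.0, 0.3  # L/h, between-subject SD on log scale (assumed)
V = 50.0                     # L, volume fixed for simplicity (assumed)
sigma = 0.15                 # proportional residual error (assumed)
dose, t_obs, c_obs = 500.0, 8.0, 3.2  # mg, h, mg/L: one observed sample

def neg_log_posterior(log_cl):
    cl = math.exp(log_cl)
    c_pred = dose / V * math.exp(-cl / V * t_obs)
    resid = ((c_obs - c_pred) / (sigma * c_pred)) ** 2   # likelihood term
    prior = ((log_cl - math.log(CL_pop)) / omega_cl) ** 2  # prior term
    return 0.5 * (resid + prior)

fit = minimize_scalar(neg_log_posterior, bounds=(-2, 4), method="bounded")
cl_map = math.exp(fit.x)
print(f"MAP clearance ~ {cl_map:.2f} L/h")
# The MAP individual parameters then drive dose adjustment toward the
# target concentration window.
```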
Abstract:
This report synthesizes the findings of 11 country reports on policy learning in labour market and social policies that were conducted as part of WP5 of the INSPIRES project, funded by the 7th Framework Programme of the EU Commission. Notably, this report puts forward objectives of policy learning, discusses tools, processes and institutions of policy learning, and presents the impacts of various tools and structures of the policy-learning infrastructure on the actual policy learning process. The report defines three objectives of policy learning: evaluation and assessment of policy effectiveness, vision building and planning, and consensus building. In the 11 countries under consideration, the tools and processes of the policy-learning infrastructure can be classified into three broad groups: public bodies, expert councils, and parties, interest groups and the private sector. Finally, we develop four recommendations for policy learning. Firstly, learning processes should keep the balance between centralisation and plurality. Secondly, learning processes should be kept stable beyond the usual political business cycles. Thirdly, policy-learning tools and infrastructures should be sufficiently independent from political influence or bias. Fourthly, policy-learning tools and infrastructures should balance effectiveness evaluation with vision building.
Abstract:
This review presents the evolution of steroid analytical techniques, including gas chromatography coupled to mass spectrometry (GC-MS), immunoassay (IA) and targeted liquid chromatography coupled to mass spectrometry (LC-MS), and it evaluates the potential of extended steroid profiling by a metabolomics-based approach, namely steroidomics. Steroids regulate essential biological functions, including growth and reproduction, and perturbations of steroid homeostasis can generate serious physiological issues; therefore, specific and sensitive methods have been developed to measure steroid concentrations. GC-MS, measuring several steroids simultaneously, was historically the first standard method of analysis. Steroids were then quantified by immunoassay, allowing a higher throughput; however, major drawbacks included the measurement of a single compound instead of a panel and cross-reactivity. Targeted LC-MS methods with selected reaction monitoring (SRM) were then introduced for quantifying a small steroid subset without the problems of cross-reactivity. The next step was the integration of metabolomic approaches in the context of steroid analysis. As metabolomics aims to identify and quantify all the metabolites (i.e., the metabolome) in a specific system, appropriate strategies were proposed for discovering new biomarkers. Steroidomics, defined as the untargeted analysis of the steroid content of a sample, has been implemented in several fields, including doping analysis, clinical studies, and in vivo or in vitro toxicology assays. This review discusses the current analytical methods for assessing steroid changes and compares them to steroidomics. Steroids, their pathways, their implications in diseases and the biological matrices in which they are analysed are first described. The different analytical strategies are then presented, with a focus on their ability to obtain relevant information on the steroid pattern. The future technical requirements for improving steroid analysis are also presented.
Abstract:
The detection of testosterone abuse in sports is routinely achieved through the 'steroidal module' of the Athlete Biological Passport, by GC-MS(/MS) quantification of selected endogenous anabolic androgenic steroids (EAAS) in athletes' urine. To overcome some limitations of the urinary steroid profile, such as the presence of confounding factors (ethnicity, enzyme polymorphism, bacterial contamination, and ethanol), ultra-high performance liquid chromatography (UHPLC) measurements of blood concentrations of testosterone, its major metabolites, and its precursors could represent an interesting and complementary strategy. In this work, two UHPLC-MS/MS methods were developed for the quantification of testosterone and related compounds in human serum, including major progestogens, corticoids, and estrogens. The validated methods were then used for the analysis of serum samples collected from 19 healthy male volunteers after oral and transdermal testosterone administration. Results from unsupervised multiway analysis allowed variations of the target analytes to be assessed simultaneously over a 96-h period. Apart from alterations of concentration values due to the circadian rhythm, which concern mainly corticosteroids, DHEA, and progesterone, significant variations linked to oral and transdermal testosterone administration were observed for testosterone, DHT, and androstenedione. As a second step of analysis, the longitudinal monitoring of these biomarkers using intra-individual thresholds showed, in comparison to urine, significant improvements in the detection of testosterone administration, especially for volunteers with the del/del genotype of the phase II enzyme UGT2B17, who are not sensitive to the main urinary marker, the T/E ratio. A substantial extension of the detection window after transdermal testosterone administration was also observed in the serum matrix. The longitudinal follow-up proposed in this study represents a first example of a 'blood steroid profile' in doping control analysis, which could be proposed in the future as a complement to the 'urinary module' to improve steroid abuse detection capabilities.
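For intuition, a longitudinal intra-individual threshold can be as simple as flagging a new measurement that falls outside a subject's own historical range. The Python sketch below uses a running mean plus or minus z standard deviations; the real Athlete Biological Passport employs a more elaborate adaptive Bayesian model, so this rule and the example values are only a stand-in.

```python
# Sketch: a simplified intra-individual threshold check for a
# longitudinal steroid marker. The mean +/- z*SD rule below is a
# stand-in for the Passport's adaptive Bayesian model; the values
# are hypothetical.
import statistics

def flag(history, new_value, z=3.0):
    """Return True if new_value falls outside the individual's
    expected range built from prior measurements."""
    if len(history) < 3:  # too few points: no individual range yet
        return False
    mu = statistics.mean(history)
    sd = statistics.stdev(history)
    return abs(new_value - mu) > z * sd

baseline = [18.2, 17.5, 19.0, 18.4]  # hypothetical serum T, nmol/L
print(flag(baseline, 18.8))  # False: within the individual range
print(flag(baseline, 26.0))  # True: suspicious excursion
```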