890 results for phylogeography, consensus approach, ensemble modeling, Pleistocene, ENM, ecological niche modeling
Abstract:
Globalization and new information technologies mean that organizations have to face world-wide competition in rapidly transforming, unpredictable environments, and thus the ability to constantly generate novel and improved products, services and processes has become quintessential for organizational success. Performance in turbulent environments is, above all, influenced by the organization's capability for renewal. Renewal capability consists of the ability of the organization to replicate, adapt, develop and change its assets, capabilities and strategies. An organization with a high renewal capability can sustain its current success factors while at the same time building new strengths for the future. This capability means not only that the organization is able to respond to today's challenges and keep up with the changes in its environment, but also that it can act as a forerunner by creating innovations at both the tactical and strategic levels of operation, and thereby change the rules of the market. However, even though it is widely agreed that the dynamic capability for continuous learning, development and renewal is a major source of competitive advantage, there is no widely shared view on how organizational renewal capability should be defined, and the field is characterized by a plethora of concepts and definitions. Furthermore, there is a lack of methods for systematically assessing organizational renewal capability. The dissertation aims to bridge these gaps in the existing research by constructing an integrative theoretical framework for organizational renewal capability and by presenting a method for modeling and measuring this capability. The viability of the measurement tool is demonstrated in several contexts, and the framework is also applied to assess renewal in inter-organizational networks.
In this dissertation, organizational renewal capability is examined by drawing on three complementary theoretical perspectives: knowledge management, strategic management and intellectual capital. The knowledge management perspective considers knowledge as inherently social and activity-based, and focuses on the organizational processes associated with its application and development. Within this framework, organizational renewal capability is understood as the capacity for flexible knowledge integration and creation. The strategic management perspective, on the other hand, approaches knowledge in organizations from the standpoint of its implications for the creation of competitive advantage. In this approach, organizational renewal is framed as a dynamic capability of firms. The intellectual capital perspective is focused on exploring how intangible assets can be measured, reported and communicated. From this vantage point, renewal capability is understood as the dynamic dimension of intellectual capital, which consists of the capability to maintain, modify and create knowledge assets. Each of the perspectives contributes significantly to the understanding of organizational renewal capability, and the integrative approach presented in this dissertation contributes to the individual perspectives as well as to the understanding of organizational renewal capability as a whole.
Abstract:
We present the global phylogeography of the black sea urchin Arbacia lixula, an amphi-Atlantic echinoid with the potential to strongly impact shallow rocky ecosystems. Sequences of the mitochondrial cytochrome c oxidase gene of 604 specimens from 24 localities were obtained, covering most of the distribution area of the species, including the Mediterranean and both shores of the Atlantic. Genetic diversity measures, phylogeographic patterns, demographic parameters and population differentiation were analysed. We found high haplotype diversity but relatively low nucleotide diversity, with 176 haplotypes grouped within three haplogroups: one is shared between Eastern Atlantic (including Mediterranean) and Brazilian populations, the second is found in the Eastern Atlantic and the Mediterranean, and the third is exclusive to Brazil. Significant genetic differentiation was found between the Brazilian, Eastern Atlantic and Mediterranean regions, but no differentiation was found among Mediterranean sub-basins or among Eastern Atlantic sub-regions. The star-shaped topology of the haplotype network and the unimodal mismatch distributions of the Mediterranean and Eastern Atlantic samples suggest that these populations have undergone very recent demographic expansions. These expansions can be dated to 94-205 kya in the Mediterranean and 31-67 kya in the Eastern Atlantic. In contrast, Brazilian populations did not show any signature of population expansion. Our results indicate that all populations of A. lixula constitute a single species. The Brazilian populations probably diverged from an Eastern Atlantic stock. The present-day genetic structure of the species in the Eastern Atlantic and the Mediterranean is shaped by very recent demographic processes. Our results support the view (backed by the lack of a fossil record) that A. lixula is a recent thermophilous colonizer which spread throughout the Mediterranean during a warm period of the Pleistocene, probably during the last interglacial.
Implications for the possible future impact of A. lixula on shallow Mediterranean ecosystems in the context of global warming trends must be considered.
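Expansion dates like those above are typically obtained from the mismatch-distribution parameter tau via the sudden-expansion relation tau = 2ut. As a hedged illustration (not the authors' code; the tau value, per-site mutation rate and fragment length below are hypothetical placeholders, not values from this study):

```python
# Hedged illustration (not the authors' code) of how mismatch-distribution
# expansion dates are obtained: under the sudden-expansion model, tau = 2*u*t,
# where u is the mutation rate per sequence per year and t the expansion time.
# The tau, per-site rate and fragment length below are hypothetical placeholders.

def expansion_time_years(tau, mu_per_site_per_year, seq_length_bp):
    """Invert tau = 2*u*t for t, with u = per-site rate x sequence length."""
    u = mu_per_site_per_year * seq_length_bp
    return tau / (2.0 * u)

# Hypothetical COI example: tau = 3.0, 1.5e-8 substitutions/site/year, 600 bp.
t = expansion_time_years(tau=3.0, mu_per_site_per_year=1.5e-8, seq_length_bp=600)
print(f"{t / 1000:.0f} kya")  # -> 167 kya
```

The date ranges reported in such studies then come from propagating the confidence interval of tau and the uncertainty in the mutation rate through this same inversion.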
Abstract:
Over the past two decades, a growing body of phylogeographic work has substantially improved our understanding of African biogeography, in particular the role played by Pleistocene pluvial-drought cycles on terrestrial vertebrates. However, little is still known about the evolutionary history of semi-aquatic animals, which faced the tremendous challenges imposed by the unpredictable availability of water resources. In this study, we investigate the Late Pleistocene history of the common hippopotamus (Hippopotamus amphibius), using mitochondrial and nuclear DNA sequence variation and range-wide sampling. We documented a global demographic and spatial expansion approximately 0.1-0.3 Myr ago, most likely associated with an episode of massive drainage overflow. These events presumably enabled historical continent-wide gene flow among hippopotamus populations, and hence no clear continental-scale genetic structuring remains. Nevertheless, present-day hippopotamus populations are genetically disconnected, probably as a result of mid-Holocene aridification and contemporary anthropogenic pressures. This unique pattern contrasts with the biogeographic paradigms established for savannah-adapted ungulate mammals and should be further investigated in other water-associated taxa. Our study has important consequences for the conservation of the hippo, an emblematic but threatened species that requires specific protection to curtail its long-term decline.
Abstract:
Maximum entropy modeling (Maxent) is a widely used algorithm for predicting species distributions across space and time. Properly assessing the uncertainty in such predictions is non-trivial and requires validation with independent datasets. Notably, model complexity (the number of model parameters) remains a major concern in relation to overfitting and, hence, the transferability of Maxent models. An emerging approach is to validate the cross-temporal transferability of model predictions using paleoecological data. In this study, we assess the effect of model complexity on the performance of Maxent projections across time, using two European plant species (Alnus glutinosa (L.) Gaertn. and Corylus avellana L.) with an extensive late Quaternary fossil record in Spain as a case study. We fitted 110 models with different levels of complexity under present-day conditions and tested model performance using AUC (area under the receiver operating characteristic curve) and AICc (corrected Akaike Information Criterion) through the standard procedure of randomly partitioning current occurrence data. We then compared these results to an independent validation by projecting the models to mid-Holocene (6,000 years before present) climatic conditions in Spain to assess their ability to predict fossil pollen presence-absence and abundance. We find that calibrating Maxent models with default settings results in overly complex models. While model performance increased with model complexity when predicting current distributions, it was highest at intermediate complexity when predicting mid-Holocene distributions. Hence, models of intermediate complexity offered the best trade-off for predicting species distributions across time. Reliable temporal model transferability is especially relevant for forecasting species distributions under future climate change.
Consequently, species-specific model tuning should be used to find the best modeling settings to control for complexity, notably with paleoecological data to independently validate model projections. For cross-temporal projections of species distributions for which paleoecological data are not available, models of intermediate complexity should be selected.
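The AICc-based complexity control described above can be sketched as follows. This is an assumed illustration, not the study's code: the log-likelihoods, parameter counts and sample size are invented for the example.

```python
# Sketch of AICc-based complexity control (an assumed illustration, not the
# study's code): AICc = -2*lnL + 2k + 2k(k+1)/(n - k - 1), where k is the
# number of model parameters and n the number of occurrence records.
import math

def aicc(log_likelihood, k, n):
    if n - k - 1 <= 0:
        return math.inf  # AICc is undefined when k approaches the sample size
    return -2.0 * log_likelihood + 2.0 * k + (2.0 * k * (k + 1)) / (n - k - 1)

# Hypothetical candidates: (log-likelihood, number of parameters).
# The complex model fits best but is penalized for its parameter count.
candidates = {"simple": (-120.0, 3), "intermediate": (-105.0, 8), "complex": (-101.0, 40)}
n = 60  # hypothetical number of presence records
scores = {name: aicc(ll, k, n) for name, (ll, k) in candidates.items()}
best = min(scores, key=scores.get)
print(best)  # the intermediate-complexity model wins on AICc
```

The small-sample correction term 2k(k+1)/(n-k-1) is what makes highly parameterized models uncompetitive when occurrence records are few, mirroring the trade-off the abstract describes.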
Abstract:
Formation of nanosized droplets/bubbles from a metastable bulk phase is connected to many unresolved scientific questions. We analyze the properties and stability of multicomponent droplets and bubbles in the canonical ensemble, and compare with single-component systems. The bubbles/droplets are described on the mesoscopic level by square gradient theory. Furthermore, we compare the results to a capillary model which gives a macroscopic description. Remarkably, the solutions of the square gradient model, representing bubbles and droplets, are accurately reproduced by the capillary model except in the vicinity of the spinodals. The solutions of the square gradient model form closed loops, which shows the inherent symmetry and connected nature of bubbles and droplets. A thermodynamic stability analysis is carried out, where the second variation of the square gradient description is compared to the eigenvalues of the Hessian matrix in the capillary description. The analysis shows that it is impossible to stabilize arbitrarily small bubbles or droplets in closed systems and gives insight into metastable regions close to the minimum bubble/droplet radii. Despite the large difference in complexity, the square gradient and the capillary model predict the same finite threshold sizes and very similar stability limits for bubbles and droplets, both for single-component and two-component systems.
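The macroscopic capillary description compared against above conventionally rests on the Young-Laplace equation. As a sketch of the standard textbook relations (not reproduced from the thesis itself):

```latex
% Young-Laplace relation for a spherical droplet/bubble of radius r
% with surface tension sigma:
\Delta p \;=\; p_{\mathrm{in}} - p_{\mathrm{out}} \;=\; \frac{2\sigma}{r},
\qquad
r^{*} \;=\; \frac{2\sigma}{\Delta p},
\qquad
W^{*} \;=\; \frac{16\pi\,\sigma^{3}}{3\,(\Delta p)^{2}},
% where r* is the critical (threshold) radius at a given driving
% pressure difference and W* the classical work of formation.
```

The existence of a minimum threshold radius r* in this macroscopic picture is consistent with the abstract's finding that arbitrarily small bubbles or droplets cannot be stabilized in closed systems.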
Abstract:
Sardinia is the second largest island in the Mediterranean and, together with Corsica and nearby mainland areas, one of the top biodiversity hotspots in the region. The origin of Sardinia traces back to the opening of the western Mediterranean in the late Oligocene. This geological event and the subsequent Messinian Salinity Crisis and Pleistocene glacial cycles have had a major impact on local biodiversity. The Dysdera woodlouse hunter spiders are one of the most diverse ground-dwelling groups in the Mediterranean. Here we describe the first two species of this genus endemic to Sardinia: Dysdera jana sp. n. and Dysdera shardana sp. n. The two species show contrasting allopatric distributions: D. jana sp. n. is a narrow endemic, while D. shardana sp. n. is distributed throughout most of the island. A multi-gene DNA sequence phylogenetic analysis based on mitochondrial and nuclear genes supports the close relationship of the new species to the type species of the genus, Dysdera erythrina. Age estimates reject an Oligocene origin of the new Dysdera species and identify the Messinian Salinity Crisis as the most plausible period for the split between the Sardinian endemics and their closest relatives. Phylogeographic analysis reveals deep genetic divergences and population structure in Dysdera shardana sp. n., suggesting that restrictions to gene flow, probably due to environmental factors, could explain local speciation events. Taxonomy, phylogeny, DNA sequencing, Mediterranean biogeography, phylogeography
Abstract:
This study examines the possibilities for financial modeling in a business unit of the forest industry. The objective is to design and build a financial model for the business unit that makes it possible to analyse and forecast its profit. The study follows the constructive research approach. The theoretical framework examines the shaping of existing information, focusing on the needs of information refinement, the requirements imposed by decision-making, and modeling. Second, the theory presents requirements for information from the perspective of organizational control. The empirical data are collected through participant observation, drawing on informal discussions, information systems and accounting documents. The results show that forecasting operating profit with the model is difficult because of the large number of underlying variables. Consequently, the model must be built so that it examines operating profit at as detailed a level as possible. In testing, the accuracy of the model proved to be better the more detailed the level at which forecasting was done. Testing also showed that the model is useful for short-term business control. In this way it also lays the foundation for long-term forecasting.
Abstract:
1. Species distribution models (SDMs) have become a standard tool in ecology and applied conservation biology. Modelling rare and threatened species is particularly important for conservation purposes. However, modelling rare species is difficult because the combination of few occurrences and many predictor variables easily leads to model overfitting. A new strategy using ensembles of small models was recently developed in an attempt to overcome this limitation of rare species modelling, and has so far been tested successfully for only a single species. Here, we aim to test the approach more comprehensively on a large number of species, including a transferability assessment. 2. For each species, numerous small (here bivariate) models were calibrated, evaluated and averaged into an ensemble weighted by AUC scores. These 'ensembles of small models' (ESMs) were compared to standard SDMs using three commonly used modelling techniques (GLM, GBM, Maxent) and their ensemble prediction. We tested 107 rare and under-sampled plant species of conservation concern in Switzerland. 3. We show that ESMs performed significantly better than standard SDMs. The rarer the species, the more pronounced the effect. ESMs were also superior to standard SDMs and their ensemble when they were independently evaluated using a transferability assessment. 4. By averaging simple small models into an ensemble, ESMs avoid overfitting without losing explanatory power through reducing the number of predictor variables. They further improve the reliability of species distribution models, especially for rare species, and thus help to overcome the limitations of modelling rare species.
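The AUC-weighted averaging step at the heart of ESMs (point 2 above) can be sketched as follows. This is an assumed minimal illustration, not the authors' implementation; in particular, the rescaling max(AUC - 0.5, 0) is one common weighting choice (it zeroes out uninformative models), not necessarily the exact scheme used in the study.

```python
# Minimal sketch of the 'ensemble of small models' averaging step (an assumed
# illustration, not the authors' implementation): each bivariate model's
# habitat-suitability prediction is averaged with an AUC-derived weight.

def auc(scores, labels):
    """Rank-based AUC: probability that a random presence outscores a random absence."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def esm_prediction(model_preds, model_aucs):
    """AUC-weighted average of per-model predictions; weight = max(AUC - 0.5, 0)."""
    weights = [max(a - 0.5, 0.0) for a in model_aucs]
    total = sum(weights)
    if total == 0.0:
        return [sum(p) / len(p) for p in zip(*model_preds)]  # fall back to a plain mean
    return [sum(w * p for w, p in zip(weights, preds)) / total
            for preds in zip(*model_preds)]

# Hypothetical suitability predictions of three bivariate models at two sites;
# the third model is uninformative (AUC = 0.5) and receives zero weight.
preds = [[0.9, 0.2], [0.6, 0.4], [0.5, 0.5]]
aucs = [0.8, 0.7, 0.5]
print(esm_prediction(preds, aucs))
```

Because each base model uses only two predictors, no single model can overfit the few occurrences, while the weighted ensemble still draws on the full predictor set.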
Abstract:
"How old is this fingermark?" This question is relatively often raised in trials, when suspects admit that they left their fingermarks at a crime scene but allege that the contact occurred at a time different from that of the crime and for legitimate reasons. However, no answer can currently be given to this question, because no fingermark dating methodology has yet been validated and accepted by the forensic community as a whole. Nevertheless, the review of past American cases conducted in this research showed that experts have nonetheless given testimony in court about the age of fingermarks, even though such testimony is mostly based on subjective and poorly documented parameters. It was relatively easy to access fully described American cases, which explains the choice of these examples; fingermark dating issues, however, are encountered worldwide, and the current lack of consensus among the answers given highlights the need for research on the subject. The present work therefore aims at studying the possibility of developing an objective fingermark dating method. As the questions surrounding the development of such a procedure are not new, different attempts have already been described in the literature. This research reviews them critically and highlights that most of the reported methodologies suffer from limitations preventing their use in practice.
Nevertheless, some approaches based on the evolution of intrinsic compounds detected in fingermark residue over time appear promising. Thus, an exhaustive review of the literature was conducted in order to identify the compounds available in fingermark residue and the analytical techniques capable of analysing them. It was chosen to concentrate on sebaceous compounds analysed using gas chromatography coupled with mass spectrometry (GC/MS) or Fourier transform infrared spectroscopy (FTIR). GC/MS analyses were conducted in order to characterize the initial variability of target lipids among fresh fingermarks of the same donor (intra-variability) and between fingermarks of different donors (inter-variability). As a result, many molecules were identified and quantified for the first time in fingermark residue. Furthermore, it was determined that the intra-variability of the fingermark residue was significantly lower than the inter-variability, but that both kinds of variability could be reduced using different statistical pre-treatments inspired by the field of drug profiling. It was also possible to propose an objective donor classification model allowing donors to be grouped into two main classes based on the initial lipid composition of their fingermarks. These classes correspond to what are relatively subjectively called "good" or "bad" donors. The potential of such a model is high for the fingermark research field, as it allows the selection of representative donors based on compounds of interest. Using GC/MS and FTIR, an in-depth study was conducted of the effects of different influence factors on the initial composition and aging of target lipid molecules found in fingermark residue. It was determined that univariate and multivariate models could be built to describe the aging of target compounds (transformed into aging parameters through pre-processing), but that some influence factors affected these models more than others.
In fact, the donor, the substrate and the application of enhancement techniques seemed to hinder the construction of reproducible models. The other factors tested (deposition moment, pressure, temperature and illumination) also affected the residue and its aging, but models combining different values of these factors still proved robust in well-defined situations. Furthermore, test fingermarks were analysed with GC/MS in order to be dated using some of the generated models. Correct estimations were obtained for more than 60% of the dated test fingermarks, and up to 100% when the storage conditions were known. These results are promising, but further research should be conducted to evaluate whether these models could be used under uncontrolled casework conditions. From a more fundamental perspective, a pilot study was also conducted on the use of infrared spectroscopy combined with chemical imaging (FTIR-CI) in order to gain information about fingermark composition and aging. More precisely, its ability to highlight influence factors and aging effects over large fingermark areas was investigated, and this information was then compared with that given by individual FTIR spectra. It was concluded that while FTIR-CI is a powerful tool, its use for studying natural fingermark residue for forensic purposes has to be carefully considered: in this study, the technique did not yield more information on residue distribution than traditional FTIR spectra, and it also suffers from major drawbacks, such as long analysis and processing times, particularly when large fingermark areas need to be covered. Finally, the results obtained in this research allowed the proposition and discussion of a formal and pragmatic framework for approaching fingermark dating questions. This framework identifies which type of information the scientist is currently able to bring to investigators and/or the courts.
Furthermore, the proposed framework also describes the different iterative development steps that research should follow in order to achieve the validation of an objective fingermark dating methodology whose capacities and limits are well known and properly documented.
Abstract:
Despite moderate improvements in the outcome of glioblastoma after first-line treatment with chemoradiation, recent clinical trials have failed to improve the prognosis of recurrent glioblastoma. In the absence of a standard of care, we aimed to investigate institutional treatment strategies to identify similarities and differences in the pattern of care for recurrent glioblastoma. We investigated re-treatment criteria and therapeutic pathways for recurrent glioblastoma at eight neuro-oncology centres in Switzerland, each with an established multidisciplinary tumour-board conference. Decision algorithms, differences and consensus were analysed using the objective consensus methodology. A total of 16 different treatment recommendations were identified based on combinations of eight different decision criteria. The set of criteria implemented, as well as the set of treatments offered, differed between centres. For specific situations, up to six different treatment recommendations were provided by the eight centres. The only wide-ranging consensus identified was to offer best supportive care to unfit patients. A majority recommendation was identified for non-operable large early recurrence with unmethylated MGMT promoter status in fit patients: here, bevacizumab was offered. In fit patients with late recurrent, non-operable, MGMT promoter-methylated glioblastoma, temozolomide was recommended by most. No other majority recommendations were present. In the absence of strong evidence, we identified few consensus recommendations in the treatment of recurrent glioblastoma. This contrasts with the limited availability of single drugs and treatment modalities. Clinical situations of greatest heterogeneity may be suitable to address in clinical trials, and second-opinion referrals are likely to yield diverging recommendations.
Abstract:
The effective notch stress approach for the fatigue strength assessment of welded structures, as included in the Fatigue Design Recommendations of the IIW, requires the numerical analysis of the elastic notch stress at the weld toe and weld root, which is fictitiously rounded with a radius of 1 mm. The goal of this thesis work was to consider alternative meshing strategies when using the effective notch stress approach to assess the fatigue strength of load-carrying partial-penetration fillet-welded cruciform joints. In order to establish guidelines for modeling the joint and evaluating the results, various two-dimensional (2D) finite element analyses were carried out by systematically varying the thickness of the plates, the weld throat thickness, the degree of bending, and the shape and location of the modeled effective notch. To extend the scope of this work, studies were also carried out on the influence of
Abstract:
The objective of this study is to show that bone strains due to dynamic mechanical loading during physical activity can be analysed using the flexible multibody simulation approach. Strains within the bone tissue play a major role in bone (re)modeling. Previous studies have shown that dynamic loading appears to be more important for bone (re)modeling than static loading. The finite element method has previously been used to assess bone strains. However, it may be limited to static analysis because of the expensive computation required for dynamic analysis, especially for a biomechanical system consisting of several bodies. Further, in vivo implementation of strain gauges on bone surfaces has been used to quantify the mechanical loading environment of the skeleton. However, in vivo strain measurement requires invasive methodology, which is challenging and limited to certain superficial bone regions only, such as the anterior surface of the tibia. In this study, an alternative numerical approach to analysing in vivo strains, based on the flexible multibody simulation approach, is proposed. To investigate the reliability of the proposed approach, three three-dimensional musculoskeletal models, in which the right tibia is assumed to be flexible, are used as demonstration examples. The models are employed in a forward dynamics simulation to predict the tibial strains during a level walking exercise. The flexible tibia model is developed using the actual geometry of the subject's tibia, obtained from three-dimensional reconstruction of magnetic resonance images. Inverse dynamics simulation based on motion capture data obtained from walking at a constant velocity is used to calculate the desired contraction trajectory for each muscle.
In the forward dynamics simulation, a proportional-derivative servo controller is used to calculate each muscle force required to reproduce the motion, based on the desired muscle contraction trajectory obtained from the inverse dynamics simulation. Experimental measurements are used to verify the models and to check their accuracy in replicating the realistic mechanical loading environment measured in the walking test. The strains predicted by the models are consistent with in vivo strain measurements reported in the literature. In conclusion, the non-invasive flexible multibody simulation approach may be used as a surrogate for experimental bone strain measurement, and thus be of use in detailed strain estimation of bones in different applications. Consequently, the information obtained from the present approach might be useful in clinical applications, including optimizing implant design and devising exercises to prevent bone fragility, accelerate fracture healing and reduce osteoporotic bone loss.
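The proportional-derivative servo control law referred to above can be illustrated minimally. This is an assumed sketch, not the thesis code: the gains and states are hypothetical, and a real muscle controller would also account for activation and contraction dynamics rather than commanding force directly.

```python
# Minimal sketch of a proportional-derivative servo controller (an assumed
# illustration, not the thesis code): the commanded muscle force is
# proportional to the tracking error of the desired contraction trajectory
# and to the error of its rate. Gains and states below are hypothetical.

def pd_force(desired, actual, desired_rate, actual_rate, kp, kd):
    """PD control law: force = kp * position error + kd * velocity error."""
    return kp * (desired - actual) + kd * (desired_rate - actual_rate)

# One hypothetical control step: the muscle lags its desired contraction.
f = pd_force(desired=0.10, actual=0.08, desired_rate=0.5, actual_rate=0.4,
             kp=2000.0, kd=50.0)
print(round(f, 6))  # kp*0.02 + kd*0.1 -> 45.0
```

In the simulation described above, a law of this form is evaluated for every muscle at every time step, with the desired trajectory supplied by the preceding inverse dynamics run.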
Resumo:
Temporary streams are water courses that undergo the recurrent cessation of flow or the complete drying of their channel. The structure and composition of biological communities in temporary stream reaches depend strongly on the temporal changes of the aquatic habitats determined by the hydrological conditions. Therefore, the structural and functional characteristics of the aquatic fauna cannot be used to assess the ecological quality of a temporary stream reach without taking into account the controls imposed by the hydrological regime. This paper develops methods for analysing the aquatic regimes of temporary streams, based on the definition of six aquatic states that summarize the transient sets of mesohabitats occurring on a given reach at a particular moment, depending on the hydrological conditions: Hyperrheic, Eurheic, Oligorheic, Arheic, Hyporheic and Edaphic. When the hydrological conditions lead to a change in the aquatic state, the structure and composition of the aquatic community change according to the new set of available habitats. We used water discharge records from gauging stations, or simulations with rainfall-runoff models, to infer the temporal patterns of occurrence of these states in the Aquatic States Frequency Graph we developed. The visual analysis of this graph is complemented by two metrics that describe the permanence of flow and the seasonal predictability of zero-flow periods. Finally, a classification of temporary streams into four aquatic regimes, defined in terms of their influence on the development of aquatic life, is updated from existing classifications, with stream aquatic regimes defined as Permanent, Temporary-pools, Temporary-dry and Episodic.
While aquatic regimes describe the long-term overall variability of the hydrological conditions of a river section, and have been used for many years by hydrologists and ecologists, aquatic states describe the availability of mesohabitats in given periods, which determines the presence of different biotic assemblages. This novel concept links hydrological and ecological conditions in a unique way. All these methods were implemented with data from eight temporary streams around the Mediterranean within the MIRAGE project. Their application was a precondition for assessing the ecological quality of these streams.
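Metrics of the two kinds described above, flow permanence and seasonal predictability of zero-flow periods, can be sketched from a daily discharge record. The formulas below are illustrative assumptions for the sketch; the metrics actually used in the paper are defined there.

```python
# Hedged sketch of two simple flow-regime metrics computed from daily data.
# Both definitions are illustrative, not the paper's exact formulations.

def flow_permanence(discharge):
    """Fraction of days in the record with non-zero flow."""
    return sum(1 for q in discharge if q > 0) / len(discharge)

def dry_season_predictability(dry_days_per_month):
    """Share of all zero-flow days falling in the driest six contiguous months.

    `dry_days_per_month` is a list of 12 counts of zero-flow days, one per
    calendar month (January..December); contiguity wraps around the year.
    A value near 1 means dry periods are seasonally concentrated (predictable).
    """
    total_dry = sum(dry_days_per_month)
    if total_dry == 0:
        return 1.0  # perennial flow: trivially predictable
    best = 0
    for start in range(12):
        window = sum(dry_days_per_month[(start + i) % 12] for i in range(6))
        best = max(best, window)
    return best / total_dry
```

A stream whose dry days all fall in one summer window scores 1.0; a stream with dry spells scattered through the year scores closer to 0.5, matching the intuition that episodic regimes are the least predictable.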
Resumo:
Objectives: In some countries, electronic cigarettes containing nicotine (e-cigarettes) are treated as ordinary consumer products, with no specific regulation. In others (such as Switzerland), the sale of nicotine-containing e-cigarettes is prohibited, despite strong demand from many smokers who wish to obtain them. Given the lack of scientific data on the efficacy and long-term safety of these products, tobacco-control specialists are divided on the question of their regulation. In order to reach a consensus among these experts that could be passed on to the health authorities, we carried out a nationwide expert-opinion study. Method: We used a Delphi methodology, with electronic questionnaires, to synthesize the opinion of Swiss experts on the question of the electronic cigarette. Participants: 40 Swiss experts representing the whole of Switzerland. Measures: We measured the degree of consensus among the experts on recommendations concerning the regulation, sale and use of nicotine-containing e-cigarettes, as well as their general opinion of the product. New recommendations and statements were formulated taking into account the participants' responses and comments. Results: The experts reached a consensus that nicotine-containing e-cigarettes should be accessible in Switzerland, but only under specific conditions. Sales should be restricted to adults, subject to quality standards, a maximum nicotine concentration, and a list of authorized ingredients. Advertising should be restricted, and the use of e-cigarettes should be banned in public places.
Conclusions: These recommendations bring together three principles: 1) the reality principle, since the product is already available on the market; 2) the prevention principle, since e-cigarettes provide current smokers with an alternative to tobacco; and 3) the precautionary principle, to protect minors and non-smokers, given that the long-term effects are not yet known. The Swiss authorities should put in place specific legislation to authorize nicotine-containing e-cigarettes.
Resumo:
To date, the scientific community has reached a consensus on the function of most biological and physiological phenomena, with the notable exception of sleep, whose function remains undetermined and, indeed, mysterious. To further our understanding of sleep function(s), we first focused on the level of complexity at which sleep-like phenomena can be observed; this led to the development of an in vitro model. The second approach was to understand the molecular and cellular pathways regulating sleep and wakefulness, using both our in vitro and in vivo models. The third approach (ongoing) is to look across evolution to determine when sleep and wakefulness first appeared. (1) To address the question of whether sleep is a cellular property, and how this property is linked to the functioning of the entire brain, we developed an in vitro model of sleep using dissociated primary cortical cultures, aiming to reproduce the major characteristics of sleep and wakefulness in vitro. We have shown that mature cortical cultures display spontaneous electrical activity similar to sleep. When these cultures are stimulated with waking neurotransmitters, they show a tonic firing activity similar to wakefulness, but return spontaneously to the "sleep-like" state 24 h after stimulation. We have also shown that transcriptional, electrophysiological, and metabolic correlates of sleep and wakefulness can be reliably detected in dissociated cortical cultures. (2) To further understand at which molecular and cellular levels the changes between sleep and wakefulness occur, we used a pharmacological and systematic gene transcription approach in vitro and discovered a major role played by the Erk pathway. Indeed, pharmacological inhibition of this pathway in living animals decreased sleep by 2 hours per day and consolidated both sleep and wakefulness by reducing their fragmentation. (3) Finally, we evaluated the presence of sleep in one of the most primitive species with a neural network, establishing Hydra as a model organism.
We hypothesized that sleep as a cellular (neuronal) property may have arisen with the appearance of the most primitive nervous system. We were able to show that Hydra have periodic rest phases amounting to up to 5 hours per day. In conclusion, our work established an in vitro model to study sleep, identified one of the major signaling pathways regulating vigilance states, and strongly suggests that sleep is a cellular property highly conserved at the molecular level during evolution. -- To date, the scientific community has agreed on the function of most physiological processes, with the exception of sleep. Indeed, the function of sleep remains a mystery, and no consensus has been reached about it. To better understand the function(s) of sleep, (1) we first focused on the level of complexity at which a sleep-like state can be observed, and thus developed an in vitro model of sleep; (2) we dissected the molecular and cellular mechanisms that could regulate sleep; and (3) we asked whether a sleep state can be found in Hydra, the most primitive animal with a nervous system. (1) To determine at what level of complexity a sleep or wake state appears, we developed a model of sleep using dissociated cortical cells, attempting to reproduce the correlates of sleep and wakefulness in vitro. To this end, we developed cultures that show the electrophysiological signs of sleep, then, when chemically stimulated, switch to a state close to wakefulness and return to a sleep-like state 24 hours after stimulation. Our model is not perfect, but we showed that the electrophysiological, transcriptional and metabolic correlates of sleep can be obtained in dissociated cortical cells.
(2) To better understand what happens at the molecular and cellular level during the different vigilance states, we used this in vitro model to dissect the molecular signaling pathways, pharmacologically blocking the major ones. We identified the Erk1/2 pathway as playing a major role in the regulation of sleep and in the transcription of genes that correlate with the sleep-wake cycle. Indeed, pharmacological inhibition of this pathway in the mouse decreases daily sleep by 2 hours and consolidates both wakefulness and sleep by reducing their fragmentation. (3) Finally, we looked for the presence of sleep in Hydra. To this end, we recorded the behaviour of Hydra over 24-48 h and show that sleep-like periods of inactivity are present in this primitive species. Taken together, this work indicates that sleep is a cellular property, present in any animal with a nervous system and regulated by a phylogenetically conserved signaling pathway.
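Quantifying rest phases from an activity recording, as in the Hydra behavioural analysis above, amounts to detecting sustained runs of low movement. The sketch below is a generic illustration of that idea; the threshold, bin size, and minimum-bout criterion are assumptions, not the study's parameters.

```python
# Hedged sketch: detect sleep-like rest bouts in a binned activity trace.
# A rest bout is a run of at least `min_bout_bins` consecutive bins whose
# activity value falls below `threshold`. All parameters are illustrative.

def rest_bouts(activity, threshold=0.1, min_bout_bins=5):
    """Return (start, end) index pairs of rest bouts (end is exclusive)."""
    bouts, start = [], None
    # Append a sentinel above threshold so a trailing bout is closed.
    for i, a in enumerate(activity + [threshold + 1.0]):
        if a < threshold:
            if start is None:
                start = i          # a quiescent run begins
        else:
            if start is not None and i - start >= min_bout_bins:
                bouts.append((start, i))
            start = None
    return bouts
```

Summing bout lengths and multiplying by the bin duration would give the total daily rest, the kind of quantity behind the "up to 5 hours per day" figure reported above.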