224 results for Acceptance models
at Université de Lausanne, Switzerland
Abstract:
Our consumption of groundwater, in particular as drinking water or for irrigation, has increased considerably over the years. Many problems have consequently emerged, ranging from the prospection of new resources to the remediation of polluted aquifers. Independently of the hydrogeological problem considered, the main challenge remains the characterization of the subsurface properties. A stochastic approach is then necessary to represent this uncertainty, by considering multiple geological scenarios and generating a large number of geostatistical realizations. We then face the main limitation of these approaches, namely the computational cost of simulating complex flow processes for each of these realizations. In the first part of the thesis, this problem is investigated in the context of uncertainty propagation, where an ensemble of realizations is identified as representing the subsurface properties. To propagate this uncertainty to the quantity of interest while limiting the computational cost, current methods rely on approximate flow models. This allows the identification of a subset of realizations representing the variability of the initial ensemble. The complex flow model is then evaluated only for this subset, and inference is made on the basis of these complex responses. Our objective is to improve the performance of this approach by using all the available information. To this end, the subset of approximate and exact responses is used to construct an error model, which then serves to correct the remaining approximate responses and to predict the response of the complex model. This method maximizes the use of the available information without any perceptible increase in computation time, and the uncertainty propagation becomes more accurate and more robust. The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between the approximate and complex flow models. In the second part of the thesis, this methodology is formalized mathematically by introducing a regression model between the functional responses. As this problem is ill-posed, its dimensionality must be reduced. In this respect, the novelty of the presented work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows the quality of the error model to be diagnosed in this functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid, and the results show that the error model allows a strong reduction of the computation time while correctly estimating the uncertainty. Moreover, for each approximate response, a prediction of the complex response is provided by the error model. The concept of a functional error model is therefore relevant for uncertainty propagation, but also for Bayesian inference problems. Markov chain Monte Carlo (MCMC) methods are the algorithms most commonly used to generate geostatistical realizations in agreement with the observations.
However, these methods suffer from a very low acceptance rate for high-dimensional problems, resulting in a large number of wasted flow simulations. A two-step approach, "two-stage MCMC", was introduced to avoid unnecessary simulations of the complex model through a preliminary evaluation of each proposal. In the third part of the thesis, the approximate flow model coupled with an error model serves as the preliminary evaluation for the two-stage MCMC. We demonstrate an increase in the acceptance rate by a factor of 1.5 to 3 compared with a classical MCMC implementation. One question remains open: how to choose the size of the training set and how to identify the realizations that optimize the construction of the error model. This requires an iterative strategy so that, with each new flow simulation, the error model is improved by incorporating the new information. This is developed in the fourth part of the thesis, where the methodology is applied to a saline intrusion problem in a coastal aquifer. -- Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we are facing many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations in each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational costs and leads to an increase in accuracy and robustness of the uncertainty propagation. The strategy explored in the first chapter consists in learning from a subset of realizations the relationship between proxy and exact curves.
In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem by a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty. The individual correction of the proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only in the context of uncertainty propagation, but also, and maybe even more so, to perform Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy is coupled to an error model to provide an approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of three with respect to one-stage MCMC results. An open question remains: how do we choose the size of the learning set and identify the realizations that optimize the construction of the error model? This requires devising an iterative strategy to construct the error model, such that, as new flow simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply this methodology to a problem of saline intrusion in a coastal aquifer.
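The functional error model described above can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the thesis implementation: ordinary PCA on discretized curves stands in for FPCA, a plain linear regression links proxy and exact component scores, and the curves, sizes and names (`proxy_all`, `exact_sub`, `train_idx`) are entirely synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

# Hypothetical inputs: each row is one realization's response curve sampled
# on a common time grid.
# proxy_all : (n_realizations, n_times) approximate (proxy) flow responses
# exact_sub : (n_train, n_times) exact responses, known only for a training subset
# train_idx : indices of the training subset within the full ensemble

def build_error_model(proxy_all, exact_sub, train_idx, n_components=5):
    """Learn a mapping from proxy curves to exact curves in a reduced
    (principal component) space, then correct every proxy curve."""
    proxy_train = proxy_all[train_idx]

    # Discretized stand-in for FPCA: PCA on the sampled curves.
    pca_proxy = PCA(n_components=n_components).fit(proxy_all)
    pca_exact = PCA(n_components=n_components).fit(exact_sub)

    # Regression between proxy scores and exact scores on the training subset.
    reg = LinearRegression().fit(
        pca_proxy.transform(proxy_train),
        pca_exact.transform(exact_sub),
    )

    # Predict "expected" exact curves for the whole ensemble.
    scores_pred = reg.predict(pca_proxy.transform(proxy_all))
    return pca_exact.inverse_transform(scores_pred)

# Purely illustrative synthetic curves: a biased, noisy proxy of an exact response.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
exact_all = np.array([np.exp(-t / s) for s in rng.uniform(0.2, 1.0, 200)])
proxy_all = exact_all * 0.8 + 0.05 * rng.standard_normal(exact_all.shape)
train_idx = rng.choice(200, size=30, replace=False)
corrected = build_error_model(proxy_all, exact_all[train_idx], train_idx)
print(np.abs(corrected - exact_all).mean(), np.abs(proxy_all - exact_all).mean())
```

The same corrected curves could then feed an uncertainty-propagation estimate or the preliminary screening stage of a two-stage MCMC sampler.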
Abstract:
Abiotic factors are considered strong drivers of species distribution and assemblages. Yet these spatial patterns are also influenced by biotic interactions. Accounting for competitors or facilitators may improve both the fit and the predictive power of species distribution models (SDMs). We investigated the influence of a dominant species, Empetrum nigrum ssp. hermaphroditum, on the distribution of 34 subordinate species in the tundra of northern Norway. We related SDM parameters of those subordinate species to their functional traits and their co-occurrence patterns with E. hermaphroditum across three spatial scales. By combining both approaches, we sought to understand whether these species may be limited by competitive interactions and/or benefit from habitat conditions created by the dominant species. The model fit and predictive power increased for most species when the frequency of occurrence of E. hermaphroditum was included in the SDMs as a predictor. The largest increase was found for species that 1) co-occur most of the time with E. hermaphroditum, both at the large (i.e. 750 m) and the small (i.e. 2 m) spatial scale, or co-occur with E. hermaphroditum at the large scale but not at the small scale, and 2) have particularly low or high leaf dry matter content (LDMC). Species that do not co-occur with E. hermaphroditum at the smallest scale are generally palatable herbaceous species with low LDMC, thus showing a weak ability to tolerate the resource depletion that is directly or indirectly induced by E. hermaphroditum. Species with high LDMC, showing a better aptitude to cope with resource depletion and grazing, are often found in the proximity of E. hermaphroditum. Our results are consistent with previous findings that both competition and facilitation structure plant distribution and assemblages in the Arctic tundra. The functional and co-occurrence approaches used were complementary and provided a deeper understanding of the observed patterns by refining the pool of potential direct and indirect ecological effects of E. hermaphroditum on the distribution of subordinate species. Our correlative study would benefit from being complemented by experimental approaches.
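The kind of comparison described above, a model for a subordinate species fitted with and without the dominant species as a predictor, can be sketched as follows. The data, predictor names and coefficients are invented for illustration, a logistic regression stands in for the GLM, and this is not the authors' actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical data: rows are plots; abiotic holds environmental predictors,
# dominant_freq is the local frequency of the dominant species, and y is the
# presence/absence of a subordinate species.
rng = np.random.default_rng(1)
n = 500
abiotic = rng.standard_normal((n, 3))           # e.g. temperature, moisture, pH
dominant_freq = rng.uniform(0.0, 1.0, n)        # frequency of the dominant species
logit = 0.8 * abiotic[:, 0] - 0.5 * abiotic[:, 1] + 1.5 * (dominant_freq - 0.5)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X_abio = abiotic
X_full = np.column_stack([abiotic, dominant_freq])
Xa_tr, Xa_te, Xf_tr, Xf_te, y_tr, y_te = train_test_split(
    X_abio, X_full, y, test_size=0.3, random_state=0)

# Logistic SDM without vs. with the dominant-species predictor.
m_abio = LogisticRegression(max_iter=1000).fit(Xa_tr, y_tr)
m_full = LogisticRegression(max_iter=1000).fit(Xf_tr, y_tr)

print("AUC, abiotic only:", roc_auc_score(y_te, m_abio.predict_proba(Xa_te)[:, 1]))
print("AUC, + dominant:  ", roc_auc_score(y_te, m_full.predict_proba(Xf_te)[:, 1]))
```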
Abstract:
BACKGROUND: Even if a large proportion of physiotherapists work in the private sector worldwide, very little is known of the organizations within which they practice. Such knowledge is important to help understand contexts of practice and how they influence the quality of services and patient outcomes. The purpose of this study was to: 1) describe characteristics of organizations where physiotherapists practice in the private sector, and 2) explore the existence of a taxonomy of organizational models. METHODS: This was a cross-sectional quantitative survey of 236 randomly selected physiotherapists. Participants completed a purpose-designed questionnaire online or by telephone, covering organizational vision, resources, structures and practices. Organizational characteristics were analyzed descriptively, while organizational models were identified by multiple correspondence analyses. RESULTS: Most organizations were for-profit (93.2%), located in urban areas (91.5%), and within buildings containing multiple businesses/organizations (76.7%). The majority included multiple providers (89.8%) from diverse professions, mainly physiotherapy assistants (68.7%), massage therapists (67.3%) and osteopaths (50.2%). Four organizational models were identified: 1) solo practice, 2) middle-scale multiprovider, 3) large-scale multiprovider and 4) mixed. CONCLUSIONS: The results of this study provide a detailed description of the organizations where physiotherapists practice and highlight the importance of human resources in differentiating organizational models. Further research examining the influence of these organizational characteristics and models on outcomes such as physiotherapists' professional practices and patient outcomes is needed.
Abstract:
Among the largest resources for biological sequence data is the large amount of expressed sequence tags (ESTs) available in public and proprietary databases. ESTs provide information on transcripts, but for technical reasons they often contain sequencing errors. Therefore, when analyzing EST sequences computationally, such errors must be taken into account. Earlier attempts to model error-prone coding regions have shown good performance in detecting and predicting such regions while correcting sequencing errors using codon usage frequencies. In the research presented here, we improve the detection of translation start and stop sites by integrating a more complex mRNA model with error correction based on codon usage bias into one hidden Markov model (HMM), thus generalizing this error correction approach to more complex HMMs. We show that our method maintains the performance in detecting coding sequences.
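The HMM-based detection idea can be illustrated with a toy two-state model (non-coding vs. coding) decoded by the Viterbi algorithm. This is only a sketch: the states, transition and emission probabilities and the input sequence below are made up, and the study's actual model integrates a full mRNA structure and codon-usage-based error correction rather than per-nucleotide emissions.

```python
import numpy as np

# Toy two-state HMM decoded with Viterbi. All probabilities are hypothetical.
states = ["noncoding", "coding"]
alphabet = "ACGT"
start_p = np.log([0.5, 0.5])
trans_p = np.log([[0.95, 0.05],    # noncoding -> noncoding / coding
                  [0.10, 0.90]])   # coding    -> noncoding / coding
emit_p = np.log([[0.30, 0.20, 0.20, 0.30],   # noncoding emissions for A, C, G, T
                 [0.20, 0.30, 0.30, 0.20]])  # coding regions assumed slightly GC-richer

def viterbi(seq):
    """Most likely state path for a nucleotide sequence."""
    idx = [alphabet.index(c) for c in seq]
    n, k = len(idx), len(states)
    dp = np.full((n, k), -np.inf)          # best log-probability ending in each state
    back = np.zeros((n, k), dtype=int)     # backpointers
    dp[0] = start_p + emit_p[:, idx[0]]
    for t in range(1, n):
        for s in range(k):
            scores = dp[t - 1] + trans_p[:, s]
            back[t, s] = int(np.argmax(scores))
            dp[t, s] = scores[back[t, s]] + emit_p[s, idx[t]]
    path = [int(np.argmax(dp[-1]))]
    for t in range(n - 1, 0, -1):          # trace the best path backwards
        path.append(back[t, path[-1]])
    return [states[s] for s in reversed(path)]

print(viterbi("ATATATGCGCGCGCATAT"))
```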
Abstract:
A large fraction of genome variation between individuals is comprised of submicroscopic copy number variation of genomic DNA segments. We assessed the relative contribution of structural changes and gene dosage alterations to phenotypic outcomes with mouse models of Smith-Magenis and Potocki-Lupski syndromes. We phenotyped mice with 1n (Deletion/+), 2n (+/+), 3n (Duplication/+), and balanced 2n compound heterozygous (Deletion/Duplication) copies of the same region. Parallel to the observations made in humans, such variation in gene copy number was sufficient to generate phenotypic consequences: in a number of cases diametrically opposing phenotypes were associated with gain versus loss of gene content. Surprisingly, some neurobehavioral traits were not rescued by restoration of the normal gene copy number. Transcriptome profiling showed that the transcriptional changes had a highly significant propensity to map to the engineered interval in the five assessed tissues. A statistically significant overrepresentation of the genes mapping to the entire length of the engineered chromosome was also found in the top-ranked differentially expressed genes in the mice containing rearranged chromosomes, regardless of the nature of the rearrangement, an observation robust across different cell lineages of the central nervous system. Our data indicate that a structural change at a given position of the human genome may affect not only locus and adjacent gene expression but also "genome regulation." Furthermore, structural change can cause the same perturbation in particular pathways regardless of gene dosage. Thus, the presence of a genomic structural change, as well as gene dosage imbalance, contributes to the ultimate phenotype.
Abstract:
Actualistic models of divergent and convergent margins are reviewed and applied to the history of the western Alps. The Tethyan rifting history and geometry are analyzed: the northern European margin is considered as an upper plate whereas the southern Apulian margin is a lower plate; the Breche basin is regarded as the former break-away trough; the internal Brianconnais domain represents the northern rift shoulder, whilst the more external domains are regarded as the infill of a complex rim basin locally affected by important extension (Valaisan and Vocontian troughs). The Schistes lustres and ophiolites of the Tsate nappe are compared to an accretionary prism: the imbrication of the elements of this nappe is regarded as a direct consequence of accretionary processes already active in the early Cretaceous; the Gets/Simme complex could originate from a more internal part of the accretionary prism. Some eclogitic basements represent the former Apulian margin substratum (Sesia); others (Mont-Rose) are interpreted as the former edge of the European margin. The history of the closing Tethyan domain is analyzed, and the remaining problems concerning the kinematics, the presence/absence of a volcanic arc and the eoalpine metamorphism are discussed.
Abstract:
Difficult tracheal intubation assessment is an important research topic in anesthesia, as failed intubations are an important cause of mortality in anesthetic practice. The modified Mallampati score is widely used, alone or in conjunction with other criteria, to predict the difficulty of intubation. This work presents an automatic method to assess the modified Mallampati score from an image of a patient with the mouth wide open. For this purpose we propose an active appearance model (AAM) based method and use linear support vector machines (SVM) to select a subset of relevant features obtained using the AAM. This feature selection step proves to be essential, as it drastically improves the classification performance, which is obtained using an SVM with an RBF kernel and majority voting. We test our method on images of 100 patients undergoing elective surgery, achieve 97.9% accuracy in the leave-one-out cross-validation test, and thereby provide a key element of an automatic difficult intubation assessment system.
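The classification stage described above, ranking features with a linear SVM and then classifying the selected features with an RBF-kernel SVM under leave-one-out cross-validation, might look roughly like the sketch below. Synthetic features stand in for the AAM shape/appearance parameters, the selection rule (weight magnitude, top 10) is an assumption, and the majority-voting step is omitted.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, LinearSVC

# Stand-in for AAM-derived feature vectors: one row per patient image,
# with a binary "difficult intubation" label.
X, y = make_classification(n_samples=100, n_features=60, n_informative=8,
                           random_state=0)

# Step 1: rank features by the magnitude of linear-SVM weights and keep the
# strongest ones (the selection step reported as essential).
lin = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=5000)).fit(X, y)
weights = np.abs(lin.named_steps["linearsvc"].coef_).ravel()
top = np.argsort(weights)[::-1][:10]

# Step 2: classify the selected features with an RBF-kernel SVM and estimate
# accuracy by leave-one-out cross-validation.
rbf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
acc = cross_val_score(rbf, X[:, top], y, cv=LeaveOneOut()).mean()
print(f"LOO accuracy on selected features: {acc:.3f}")
```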
Abstract:
1. Model-based approaches have been used increasingly in conservation biology over recent years. Species presence data used for predictive species distribution modelling are abundant in natural history collections, whereas reliable absence data are sparse, most notably for vagrant species such as butterflies and snakes. As predictive methods such as generalized linear models (GLM) require absence data, various strategies have been proposed to select pseudo-absence data. However, only a few studies exist that compare different approaches to generating these pseudo-absence data. 2. Natural history collection data are usually available for long periods of time (decades or even centuries), thus allowing historical considerations. However, this historical dimension has rarely been assessed in studies of species distribution, although there is great potential for understanding current patterns, i.e. the past is the key to the present. 3. We used GLM to model the distributions of three 'target' butterfly species, Melitaea didyma, Coenonympha tullia and Maculinea teleius, in Switzerland. We developed and compared four strategies for defining pools of pseudo-absence data and applied them to natural history collection data from the last 10, 30 and 100 years. Pools included: (i) sites without target species records; (ii) sites where butterfly species other than the target species were present; (iii) sites without butterfly species but with habitat characteristics similar to those required by the target species; and (iv) a combination of the second and third strategies. Models were evaluated and compared by the total deviance explained, the maximized Kappa and the area under the curve (AUC). 4. Among the four strategies, model performance was best for strategy 3. Contrary to expectations, strategy 2 resulted in even lower model performance compared with models with pseudo-absence data simulated totally at random (strategy 1). 5. Independent of the strategy, model performance was enhanced when sites with historical species presence data were not considered as pseudo-absence data. Therefore, the combination of strategy 3 with species records from the last 100 years achieved the highest model performance. 6. Synthesis and applications. The protection of suitable habitat for species survival or reintroduction in rapidly changing landscapes is a high priority among conservationists. Model-based approaches offer planning authorities the possibility of delimiting priority areas for species detection or habitat protection. The performance of these models can be enhanced by fitting them with pseudo-absence data relying on large archives of natural history collection species presence data rather than using randomly sampled pseudo-absence data.
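The mechanics of feeding different pseudo-absence pools into a GLM can be sketched as follows, contrasting strategy 1 (random sites without target records) with a crude version of strategy 3 (sites without any butterfly record but with suitable habitat). The site table, habitat filter, pool sizes and in-sample AUC scoring are all invented for illustration and say nothing about which strategy actually performs better.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical site table standing in for natural history collection data.
rng = np.random.default_rng(3)
n = 1000
temp = rng.normal(size=n)
moisture = rng.normal(size=n)
p_target = 1.0 / (1.0 + np.exp(-(1.5 * temp - 1.0 * moisture - 1.5)))
sites = pd.DataFrame({
    "temp": temp,
    "moisture": moisture,
    "target_recorded": rng.binomial(1, p_target),     # records of the target species
    "other_butterflies": rng.binomial(1, 0.4, n),     # any other butterfly recorded
    "habitat_suitable": (temp > 0).astype(int),       # crude habitat filter
})

presences = sites[sites.target_recorded == 1]
candidates = sites[sites.target_recorded == 0]

# Strategy 1: pseudo-absences drawn at random from sites without target records.
pool1 = candidates.sample(n=len(presences), random_state=0)
# Strategy 3 (sketch): sites without any butterfly record but with suitable habitat.
pool3 = candidates[(candidates.other_butterflies == 0)
                   & (candidates.habitat_suitable == 1)]

def fit_glm(presence_df, absence_df):
    """Logistic GLM (binomial, logit link) on presences vs. pseudo-absences."""
    X = pd.concat([presence_df, absence_df])[["temp", "moisture"]]
    y = np.r_[np.ones(len(presence_df)), np.zeros(len(absence_df))]
    model = LogisticRegression(max_iter=1000).fit(X, y)
    return roc_auc_score(y, model.predict_proba(X)[:, 1])  # in-sample AUC, illustration only

print("strategy 1 AUC:", fit_glm(presences, pool1))
print("strategy 3 AUC:", fit_glm(presences, pool3))
```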
Abstract:
Background Alzheimer's disease (AD) is the leading form of dementia worldwide. The Aβ-peptide is believed to be the major pathogenic compound of the disease. For several years it has been hypothesized that Aβ impacts the Wnt signaling cascade, and activation of this signaling pathway has therefore been proposed to rescue the neurotoxic effect of Aβ. Findings Expression of human Aβ42 in the Drosophila nervous system leads to a drastically shortened life span. We found that the action of Aβ42 specifically in the glutamatergic motoneurons is responsible for the reduced survival. However, we find that the morphology of the glutamatergic larval neuromuscular junctions, which are widely used as a model for mammalian central nervous system synapses, is not affected by Aβ42 expression. We furthermore demonstrate that genetic activation of the Wnt signal transduction pathway in the nervous system is not able to rescue the shortened life span or a rough eye phenotype in Drosophila. Conclusions Our data confirm that the life span is a useful readout of Aβ42-induced neurotoxicity in Drosophila; the neuromuscular junction, however, does not seem to be an appropriate model to study AD in flies. Additionally, our results challenge the hypothesis that Wnt signaling might be implicated in Aβ42 toxicity and might serve as a drug target against AD.
Abstract:
BACKGROUND: Zebrafish is a clinically-relevant model of heart regeneration. Unlike mammals, it has a remarkable heart repair capacity after injury, and promises novel translational applications. Amputation and cryoinjury models are key research tools for understanding injury response and regeneration in vivo. An understanding of the transcriptional responses following injury is needed to identify key players of heart tissue repair, as well as potential targets for boosting this property in humans. RESULTS: We investigated amputation and cryoinjury in vivo models of heart damage in the zebrafish through unbiased, integrative analyses of independent molecular datasets. To detect genes with potential biological roles, we derived computational prediction models with microarray data from heart amputation experiments. We focused on a top-ranked set of genes highly activated in the early post-injury stage, whose activity was further verified in independent microarray datasets. Next, we performed independent validations of expression responses with qPCR in a cryoinjury model. Across in vivo models, the top candidates showed highly concordant responses at 1 and 3 days post-injury, which highlights the predictive power of our analysis strategies and the possible biological relevance of these genes. Top candidates are significantly involved in cell fate specification and differentiation, and include heart failure markers such as periostin, as well as potential new targets for heart regeneration. For example, ptgis and ca2 were overexpressed, while usp2a, a regulator of the p53 pathway, was down-regulated in our in vivo models. Interestingly, a high activity of ptgis and ca2 has been previously observed in failing hearts from rats and humans. CONCLUSIONS: We identified genes with potential critical roles in the response to cardiac damage in the zebrafish. Their transcriptional activities are reproducible in different in vivo models of cardiac injury.
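The idea of a "top-ranked set of genes highly activated in the early post-injury stage" can be given a flavor with a simple differential-expression ranking (log fold-change plus a Welch t-test) on made-up expression matrices. This is only a generic candidate-ranking sketch, not the prediction-model pipeline or microarray data used in the study; all gene names and values are synthetic.

```python
import numpy as np
from scipy import stats

# Hypothetical microarray-style matrices: genes x samples, log2 intensities
# for control hearts and for hearts at an early post-injury time point.
rng = np.random.default_rng(4)
genes = [f"gene_{i}" for i in range(5000)]
control = rng.normal(8.0, 1.0, size=(5000, 6))
injured = rng.normal(8.0, 1.0, size=(5000, 6))
injured[:50] += 2.0   # pretend the first 50 genes are activated after injury

# Rank genes by early activation: log2 fold-change, with a Welch t-test
# p-value as a secondary filter.
log_fc = injured.mean(axis=1) - control.mean(axis=1)
pvals = stats.ttest_ind(injured, control, axis=1, equal_var=False).pvalue
order = np.argsort(-log_fc)
top = [(genes[i], round(float(log_fc[i]), 2), float(pvals[i]))
       for i in order[:10] if pvals[i] < 0.01]
print(top)
```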
Abstract:
Isolated cytostatic lung perfusion (ILP) is an attractive technique allowing delivery of a high-dose of cytostatic agents to the lungs while limiting systemic toxicity. In developing a rat model of ILP, we have analysed the effect of the route of tumour cell injection on the source of tumour vessels. Pulmonary sarcomas were established by injecting a sarcoma cell suspension either by the intravenous (i.v.) route or directly into the lung parenchyma. Ink perfusion through either pulmonary artery (PA) or bronchial arteries (BA) was performed and the characteristics of the tumour deposits defined. i.v. and direct injection methods induced pulmonary sarcoma nodules, with similar histological features. The intraparenchymal injection of tumour cells resulted in more reliable and reproducible tumour growth and was associated with a longer survival of the animals. i.v. injected tumours developed a PA-derived vascular tree whereas directly injected tumours developed a BA-derived vasculature.
Abstract:
The authors investigated the dimensionality of the French version of the Rosenberg Self-Esteem Scale (RSES; Rosenberg, 1965) using confirmatory factor analysis. We tested models of 1 or 2 factors. Results suggest the RSES is a 1-dimensional scale with 3 highly correlated items. Comparison with the Revised NEO-Personality Inventory (NEO-PI-R; Costa, McCrae, & Rolland, 1998) demonstrated that Neuroticism correlated strongly and Extraversion and Conscientiousness moderately with the RSES. Depression accounted for 47% of the variance of the RSES. Other NEO-PI-R facets were also moderately related with self-esteem.