74 results for all substring common subsequence problem


Relevance: 30.00%

Abstract:

Introduction: In children with cystic fibrosis (CF), low immunoglobulin G (IgG) levels have been reported to be associated with significantly less severe lung disease. However, decreased IgG can be a sign of common variable immunodeficiency (CVID) and affect clinical outcome. The aim of this study was to analyze clinical and serological data of patients with low IgG levels in routine blood tests at annual assessment, particularly their antibody response to polysaccharide antigens. Method: Retrospective chart review of demographic data of CF patients followed at the pediatric CF clinic throughout 2009. Clinical parameters (genotype, pancreatic sufficiency, FEV1), presence of Pseudomonas aeruginosa (PA) and number of exacerbations per year were correlated with immunoglobulin and vaccination antibody levels (antibodies to pneumococcal serotypes 14, 19, 23, 1, 5 and 7F measured by enzyme-linked immunosorbent assay). Results: 4 out of 60 patients (6.7%) had low IgG levels for age. Ages ranged from 1 year 8 months to 11 years; 2 were boys and 2 girls. Three patients were delF508 homozygotes, one a compound heterozygote delF508/G542X. All were pancreatic insufficient. FEV1 ranged from 74 to 108%. One patient never had PA colonization, 2 had intermittent PA colonization and one was chronically infected. After conjugate vaccination, all patients had protective antibodies against serotypes 14, 19 and 23F. For serotypes not included in the vaccine, only one patient had protective titers for 1 out of 3 serotypes. None of the patients had received unconjugated pneumococcal vaccine. There was no significant clinical difference in FEV1, PA colonization or number of exacerbations according to IgG and vaccination antibody levels. Conclusion: Cystic fibrosis patients with low immunoglobulin levels have a normal antibody response to protein antigens. However, despite recurrent infections, there seems to be a delayed or deficient antibody response to polysaccharide antigens. Prospective studies are needed to evaluate the development of polysaccharide antibody responses in CF patients in order to monitor for CVID. With early detection of CF by newborn screening programs, long-term follow-up could be started early in childhood.

Relevance: 30.00%

Abstract:

At a time when disciplined inference and decision making under uncertainty represent common aims for participants in legal proceedings, the scientific community is remarkably heterogeneous in its attitudes as to how these goals ought to be achieved. Probability and decision theory exert a considerable influence, and we think rightly so, but they go against a mainstream of thinking that does not embrace, or is not aware of, the 'normative' character of this body of theory. It is normative, in the sense understood in this article, in that it prescribes particular properties, typically (logical) coherence, to which reasoning and decision making ought to conform. Disregarding these properties can result in diverging views, which are occasionally used as an argument against the theory or as a pretext for not following it. Typical examples are objections according to which people, both in everyday life and at various levels of the judicial process, find the theory difficult to understand and to apply. A further objection is that the theory does not reflect how people actually behave. This article aims to point out in what sense these objections misinterpret the analytical framework in its normative perspective. Through examples borrowed mostly from forensic science contexts, it is argued that so-called intuitive scientific attitudes are particularly liable to such misconceptions. These attitudes are contrasted with a statement of the actual liberties and constraints of probability and decision theory, and with the view according to which this theory is normative.

Relevance: 30.00%

Abstract:

The pharmaceutical industry has been facing several challenges during the last years, and the optimization of its drug discovery pipeline is believed to be the only viable solution. High-throughput techniques actively contribute to this optimization, especially when complemented by computational approaches aiming at rationalizing the enormous amount of information that they can produce. In silico techniques, such as virtual screening or rational drug design, are now routinely used to guide drug discovery. Both rely heavily on the prediction of the molecular interaction (docking) occurring between drug-like molecules and a therapeutically relevant target. Several software packages are available to this end, but despite the very promising picture drawn in most benchmarks, they still hold several hidden weaknesses. As pointed out in several recent reviews, the docking problem is far from being solved, and there is now a need for methods able to identify binding modes with high accuracy, which is essential to reliably compute the binding free energy of the ligand. This quantity is directly linked to its affinity and can be related to its biological activity. Accurate docking algorithms are thus critical for both the discovery and the rational optimization of new drugs. In this thesis, a new docking software aiming at this goal is presented: EADock. It uses a hybrid evolutionary algorithm with two fitness functions, in combination with a sophisticated management of the diversity. EADock is interfaced with the CHARMM package for energy calculations and coordinate handling. A validation was carried out on 37 crystallized protein-ligand complexes featuring 11 different proteins. The search space was defined as a sphere of 15 Å around the center of mass of the ligand position in the crystal structure, and, in contrast to other benchmarks, our algorithm was fed with optimized ligand positions up to 10 Å root mean square deviation (RMSD) from the crystal structure. This validation illustrates the efficiency of our sampling heuristic, as correct binding modes, defined by an RMSD to the crystal structure lower than 2 Å, were identified and ranked first for 68% of the complexes. The success rate increases to 78% when considering the five best-ranked clusters, and to 92% when all clusters present in the last generation are taken into account. Most failures in this benchmark could be explained by the presence of crystal contacts in the experimental structure. EADock has been used to understand molecular interactions involved in the regulation of the Na,K-ATPase and in the activation of the nuclear hormone peroxisome proliferator-activated receptor α (PPARα). It also helped to understand the action of common pollutants (phthalates) on PPARγ, and the impact of biotransformations of the anticancer drug Imatinib (Gleevec®) on its binding mode to the Bcr-Abl tyrosine kinase. Finally, a fragment-based rational drug design approach using EADock was developed, and led to the successful design of new peptidic ligands for the α5β1 integrin and for the human PPARα. In both cases, the designed peptides presented activities comparable to those of well-established ligands such as the anticancer drug Cilengitide and Wy14,643, respectively.
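As a rough illustration of the success criterion used in the validation above (a binding mode counts as correct when its RMSD to the crystal structure is below 2 Å), the following minimal Python sketch computes the heavy-atom RMSD between poses and checks whether a correct pose appears among the best-ranked ones. It is a toy example with fabricated coordinates, not EADock or CHARMM code, and the function names are illustrative.

import numpy as np

def rmsd(pose, reference):
    # Root mean square deviation between two (N, 3) coordinate arrays, in Å.
    diff = pose - reference
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

def benchmark_success(ranked_poses, crystal, threshold=2.0, top=1):
    # True if any of the `top` best-ranked poses lies within `threshold` Å RMSD.
    return any(rmsd(p, crystal) < threshold for p in ranked_poses[:top])

# Tiny fabricated example: a 5-atom ligand, one near-native and one shifted pose.
crystal = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [1.5, 1.5, 0.0],
                    [0.0, 1.5, 0.0], [0.75, 0.75, 1.0]])
near_native = crystal + 0.3                        # uniform 0.3 Å shift, RMSD ~0.52 Å
shifted = crystal + np.array([4.0, 0.0, 0.0])      # displaced pose, RMSD 4.0 Å

print(round(rmsd(near_native, crystal), 2))                        # 0.52
print(benchmark_success([near_native, shifted], crystal, top=1))   # True
print(benchmark_success([shifted, near_native], crystal, top=1))   # False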

Relevance: 30.00%

Abstract:

Intracellular bacteria are common causes of community-acquired pneumonia; they grow poorly or not at all on standard culture media and do not respond to beta-lactam antibiotic therapy. Apart from well-established agents of pneumonia such as Legionella pneumophila, Mycoplasma pneumoniae, Chlamydia pneumoniae, Chlamydia psittaci and Coxiella burnetii, some emerging pathogens have recently been recognized, mainly Parachlamydia acanthamoebae and Simkania negevensis, two Chlamydia-related bacteria. Most of them cause benign and self-limited infections. However, they may cause severe pneumonia in some cases (e.g., Legionnaires' disease) and may cause outbreaks that represent a public health problem deserving prompt recognition and appropriate therapy. Although extrapulmonary manifestations are often present, no clinical features allow these agents to be distinguished from classical bacterial agents of pneumonia such as Streptococcus pneumoniae. Thus, specific molecular diagnostic tools are very helpful for early recognition of the offending bacteria, whereas serology often only allows retrospective or late diagnosis. Macrolides remain the best empirical treatment for intracellular respiratory pathogens, although some observational studies suggest that quinolones may be superior for the treatment of legionellosis.

Relevance: 30.00%

Abstract:

BACKGROUND: Chronic kidney disease is associated with cardiovascular disease. We tested for evidence of a shared genetic basis to these traits. STUDY DESIGN: We conducted 2 targeted analyses. First, we examined whether known single-nucleotide polymorphisms (SNPs) underpinning kidney traits were associated with a series of vascular phenotypes. Additionally, we tested whether vascular SNPs were associated with markers of kidney damage. Significance was set to 1.5×10^-4 (0.05/325 tests). SETTING & PARTICIPANTS: Vascular outcomes were analyzed in participants from the AortaGen (20,634), CARDIoGRAM (86,995), CHARGE Eye (15,358), CHARGE IMT (31,181), ICBP (69,395), and NeuroCHARGE (12,385) consortia. Tests for kidney outcomes were conducted in up to 67,093 participants from the CKDGen consortium. PREDICTOR: We used 19 kidney SNPs and 64 vascular SNPs. OUTCOMES & MEASUREMENTS: Vascular outcomes tested were blood pressure, coronary artery disease, carotid intima-media thickness, pulse wave velocity, retinal venular caliber, and brain white matter lesions. Kidney outcomes were estimated glomerular filtration rate and albuminuria. RESULTS: In general, we found that kidney disease variants were not associated with vascular phenotypes (127 of 133 tests were nonsignificant). The one exception was rs653178 near SH2B3 (SH2B adaptor protein 3), which showed direction-consistent association with systolic (P = 9.3×10^-10) and diastolic (P = 1.6×10^-14) blood pressure and coronary artery disease (P = 2.2×10^-6), all previously reported. Similarly, the 64 SNPs associated with vascular phenotypes were not associated with kidney phenotypes (187 of 192 tests were nonsignificant), with the exception of 2 highly correlated SNPs at the SH2B3 locus (P = 1.06×10^-7 and P = 7.05×10^-8). LIMITATIONS: The combined effect size of the SNPs for kidney and vascular outcomes may be too low to detect shared genetic associations. CONCLUSIONS: Overall, although we confirmed one locus (SH2B3) as associated with both kidney and cardiovascular disease, our primary findings suggest that there is little overlap between kidney and cardiovascular disease risk variants in the overall population. The reciprocal risks of kidney and cardiovascular disease may not be genetically mediated, but rather a function of the disease milieu itself.
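For readers tallying the multiple-testing correction above: the 133 tests of kidney SNPs against vascular outcomes plus the 192 tests of vascular SNPs against kidney outcomes give 325 tests in total, hence the Bonferroni threshold of 0.05/325 ≈ 1.5×10^-4. The short Python sketch below merely reproduces that arithmetic; the helper function is illustrative and not part of the study's pipeline.

# Bonferroni-corrected significance threshold for the cross-trait lookups above;
# the per-direction test counts (133 and 192) are those reported in the abstract.
kidney_snp_vascular_tests = 133     # 19 kidney SNPs x vascular phenotypes
vascular_snp_kidney_tests = 192     # 64 vascular SNPs x kidney phenotypes
total_tests = kidney_snp_vascular_tests + vascular_snp_kidney_tests   # 325

alpha = 0.05
threshold = alpha / total_tests     # 0.05 / 325 ~= 1.5e-4

def is_significant(p_value, thr=threshold):
    # Flag an association as study-wide significant under Bonferroni correction.
    return p_value < thr

print(f"{total_tests} tests, threshold = {threshold:.2e}")
print(is_significant(9.3e-10))      # rs653178 vs systolic blood pressure: True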

Relevance: 30.00%

Abstract:

Combinatorial optimization involves finding an optimal solution in a finite set of options; many everyday-life problems are of this kind. However, the number of options grows exponentially with the size of the problem, such that an exhaustive search for the best solution is practically infeasible beyond a certain problem size. When efficient algorithms are not available, a practical approach to obtaining an approximate solution to the problem at hand is to start with an educated guess and gradually refine it until we have a good-enough solution. Roughly speaking, this is how local search heuristics work. These stochastic algorithms navigate the problem search space by iteratively turning the current solution into new candidate solutions, guiding the search towards better solutions. The search performance therefore depends on structural aspects of the search space, which in turn depend on the move operator being used to modify solutions. A common way to characterize the search space of a problem is through the study of its fitness landscape, a mathematical object comprising the space of all possible solutions, their value with respect to the optimization objective, and a neighborhood relationship defined by the move operator. The landscape metaphor is used to explain the search dynamics as a sort of potential function; the concept is indeed similar to that of potential energy surfaces in physical chemistry. Borrowing ideas from that field, we propose to extend to combinatorial landscapes the notion of the inherent network formed by energy minima in energy landscapes. In our case, the energy minima are the local optima of the combinatorial problem, and we explore several definitions of the network edges. First, we perform an exhaustive sampling of the local optima basins of attraction and define weighted transitions between basins by accounting for all the possible ways of crossing the frontier between basins via one random move. Then, we reduce the computational burden by only counting the chances of escaping a given basin via random kick moves that start at the local optimum. Finally, we approximate network edges from the search trajectories of simple search heuristics, mining the frequency and inter-arrival time with which a heuristic visits local optima. Through these methodologies, we build a weighted directed graph that provides a synthetic view of the whole landscape and that we can characterize using the tools of complex network science. We argue that this network characterization can advance our understanding of the structural and dynamical properties of hard combinatorial landscapes. We apply our approach to prototypical problems such as the Quadratic Assignment Problem, the NK model of rugged landscapes, and the Permutation Flow-shop Scheduling Problem. We show that some network metrics can differentiate problem classes, correlate with problem non-linearity, and predict problem hardness as measured from the performance of trajectory-based local search heuristics.
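The following minimal Python sketch illustrates, on a toy bit-string landscape, the escape-edge construction described above: local optima are found by best-improvement hill climbing from every starting solution, and edge weights estimate the probability that a random k-bit kick from one optimum leads into the basin of another. The landscape, kick size and sample counts are illustrative assumptions, not those used in the thesis.

import random
from collections import defaultdict

random.seed(1)
N = 10                              # bit-string length (kept tiny for exhaustive sampling)

# Random interaction table: each bit's contribution depends on itself and its right
# neighbour, a crude stand-in for an NK-style rugged landscape.
table = {(i, a, b): random.random() for i in range(N) for a in (0, 1) for b in (0, 1)}

def fitness(s):
    return sum(table[(i, s[i], s[(i + 1) % N])] for i in range(N))

def neighbours(s):
    for i in range(N):
        yield s[:i] + (1 - s[i],) + s[i + 1:]

def hill_climb(s):
    # Best-improvement local search; returns the local optimum reached from s.
    while True:
        best = max(neighbours(s), key=fitness)
        if fitness(best) <= fitness(s):
            return s
        s = best

def kick(s, k=2):
    # Random perturbation flipping k distinct bits.
    idx = set(random.sample(range(N), k))
    return tuple(1 - b if i in idx else b for i, b in enumerate(s))

# Exhaustive basin sampling: map every solution to the local optimum of its basin.
basin_of = {}
for bits in range(2 ** N):
    s = tuple((bits >> i) & 1 for i in range(N))
    basin_of[s] = hill_climb(s)
optima = set(basin_of.values())

# Escape edges: from each optimum, sample kicks and record which basin they fall into.
samples = 200
edges = defaultdict(lambda: defaultdict(int))
for o in optima:
    for _ in range(samples):
        edges[o][hill_climb(kick(o))] += 1

print(len(optima), "local optima")
for o in sorted(optima, key=fitness, reverse=True):
    weights = {t: c / samples for t, c in edges[o].items() if t != o}
    print(round(fitness(o), 3), sorted(weights.values(), reverse=True)[:3])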

Relevance: 30.00%

Abstract:

General Introduction: These three chapters, while fairly independent of each other, study economic situations in incomplete contract settings. They are the product both of the academic freedom my advisors granted me, and in this sense they reflect my personal interests, and of their interested feedback. The content of each chapter can be summarized as follows.

Chapter 1: Inefficient durable-goods monopolies. In this chapter we study the efficiency of an infinite-horizon durable-goods monopoly model with a finite number of buyers. We find that, while all pure-strategy Markov Perfect Equilibria (MPE) are efficient, there also exist previously unstudied inefficient MPE in which high-valuation buyers randomize their purchase decision while trying to benefit from the low prices offered once a critical mass has purchased. Real-time delay, an unusual monopoly distortion, is the result of this attrition behavior. We conclude that neither technological constraints nor concern for reputation are necessary to explain inefficiency in monopolized durable-goods markets.

Chapter 2: Downstream mergers and producer's capacity choice: why bake a larger pie when getting a smaller slice? In this chapter we study the effect of downstream horizontal mergers on the upstream producer's capacity choice. Contrary to conventional wisdom, we find a non-monotonic relationship: horizontal mergers induce a higher upstream capacity if the cost of capacity is low, and a lower upstream capacity if this cost is high. We explain this result by decomposing the total effect into two competing effects: a change in hold-up and a change in bargaining erosion.

Chapter 3: Contract bargaining with multiple agents. In this chapter we study a bargaining game between a principal and N agents when the utility of each agent depends on all agents' trades with the principal. We show, using the Potential, that equilibrium payoffs coincide with the Shapley value of the underlying coalitional game with an appropriately defined characteristic function, which under common assumptions coincides with the principal's equilibrium profit in the offer game. Since the problem accounts for differences in information and agents' conjectures, the outcome can be either efficient (e.g. public contracting) or inefficient (e.g. passive beliefs).
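As a reminder of the solution concept invoked in Chapter 3, the minimal Python sketch below computes the Shapley value of a toy three-player coalitional game by averaging marginal contributions over all arrival orders; the characteristic function is purely illustrative and is not the one defined in the thesis.

from itertools import permutations

players = ("A", "B", "C")

# Worth of each coalition (toy superadditive example, not the thesis's game).
worth = {
    frozenset(): 0,
    frozenset("A"): 1, frozenset("B"): 1, frozenset("C"): 2,
    frozenset("AB"): 3, frozenset("AC"): 4, frozenset("BC"): 4,
    frozenset("ABC"): 7,
}

def shapley_values(players, v):
    # Average each player's marginal contribution over all arrival orders.
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += v[coalition | {p}] - v[coalition]
            coalition = coalition | {p}
    return {p: total / len(orders) for p, total in phi.items()}

values = shapley_values(players, worth)
print(values)                                            # {'A': 2.0, 'B': 2.0, 'C': 3.0}
print(sum(values.values()) == worth[frozenset("ABC")])   # efficiency: True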

Relevance: 30.00%

Abstract:

The hydrological and biogeochemical processes that operate in catchments influence the ecological quality of freshwater systems through the delivery of fine sediment, nutrients and organic matter. Most models that seek to characterise the delivery of diffuse pollutants from land to water are reductionist. The multitude of processes that are parameterised in such models to ensure generic applicability makes them complex and difficult to test on available data. Here, we outline an alternative, data-driven, inverse approach. We apply SCIMAP, a parsimonious risk-based model with an explicit treatment of hydrological connectivity. We take a Bayesian approach to the inverse problem of determining the risk that must be assigned to different land uses in a catchment in order to explain the spatial patterns of measured in-stream nutrient concentrations. We apply the model to identify the key sources of nitrogen (N) and phosphorus (P) diffuse pollution risk in eleven UK catchments covering a range of landscapes. The model results show that: 1) some land uses generate a consistently high or low risk of diffuse nutrient pollution; 2) the risks associated with different land uses vary both between catchments and between nutrients; and 3) the dominant sources of P and N risk in a catchment are often a function of the spatial configuration of land uses. Taken on a case-by-case basis, this type of inverse approach may be used to help prioritise the focus of interventions to reduce diffuse pollution risk for freshwater ecosystems.
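A minimal sketch of the kind of Bayesian inversion described above (illustrative only, not the SCIMAP implementation): given land-use fractions in a set of subcatchments and measured in-stream concentrations, a random-walk Metropolis sampler explores the per-land-use risk weights that explain the observations. The land-use categories, the linear mixing model and the noise level are assumptions made for the example.

import numpy as np

rng = np.random.default_rng(0)

# Fraction of each land use (arable, grassland, woodland) in 8 subcatchments.
land_use = rng.dirichlet(alpha=[2.0, 2.0, 2.0], size=8)
true_risk = np.array([0.8, 0.4, 0.05])                 # hidden per-land-use risk
observed = land_use @ true_risk + rng.normal(0, 0.02, size=8)

def log_posterior(risk, sigma=0.02):
    # Uniform prior on [0, 1] risk weights, Gaussian likelihood for the observations.
    if np.any(risk < 0) or np.any(risk > 1):
        return -np.inf
    residual = observed - land_use @ risk
    return -0.5 * np.sum((residual / sigma) ** 2)

# Random-walk Metropolis over the three risk weights.
current = np.full(3, 0.5)
current_lp = log_posterior(current)
samples = []
for _ in range(20000):
    proposal = current + rng.normal(0, 0.05, size=3)
    proposal_lp = log_posterior(proposal)
    if np.log(rng.uniform()) < proposal_lp - current_lp:
        current, current_lp = proposal, proposal_lp
    samples.append(current)

posterior = np.array(samples[5000:])                    # discard burn-in
print("posterior mean risk per land use:", posterior.mean(axis=0).round(2))
print("risk used to simulate the data:  ", true_risk)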

Relevance: 30.00%

Abstract:

Water is often considered to be an ordinary substance since it is transparent, odourless, tasteless and very common in nature. As a matter of fact, it can be argued that it is the most remarkable of all substances. Without water, life on Earth would not exist. Water is the major component of cells, typically forming 70 to 95% of cellular mass, and it provides an environment for innumerable organisms to live in, since it covers 75% of the Earth's surface. Water is a simple molecule made of two hydrogen atoms and one oxygen atom, H2O. The small size of the molecule stands in contrast with its unique physical and chemical properties. Among those, the fact that, at the triple point, liquid water is denser than ice is especially remarkable. Despite its special importance in the life sciences, water is systematically removed from biological specimens investigated by electron microscopy. This is because the high vacuum of the electron microscope requires that the biological specimen be observed in dry conditions. For 50 years the science of electron microscopy has addressed this problem, resulting in numerous preparation techniques presently in routine use. Typically these techniques consist in fixing the sample (chemically or by freezing) and replacing its water by a plastic which is transformed into a rigid block by polymerisation. The block is then cut into thin sections (c. 50 nm) with an ultramicrotome at room temperature. Usually, these techniques introduce several artefacts, most of them due to water removal. In order to avoid these artefacts, the specimen can be frozen, cut and observed at low temperature. However, liquid water crystallizes into ice upon freezing, thus causing severe damage. Ideally, liquid water is solidified into a vitreous state. Vitrification consists in solidifying water so rapidly that ice crystals have no time to form. A breakthrough took place when the vitrification of pure water was discovered. Since this discovery, the thin-film vitrification method has been used with success for the observation of biological suspensions of small particles. Our work was to extend the method to bulk biological samples, which have to be vitrified, cryosectioned into vitreous sections and observed in a cryo-electron microscope. This technique is called cryo-electron microscopy of vitreous sections (CEMOVIS). It is now believed to be the best way to preserve the ultrastructure of biological tissues and cells very close to the native state for electron microscopic observation. Recently, CEMOVIS has become a practical method achieving excellent results. It has, however, some severe limitations, the most important of them certainly being due to cutting artefacts. These artefacts are the consequence of the nature of the vitreous material and of the fact that vitreous sections cannot be floated on a liquid, as is the case for plastic sections cut at room temperature. The aim of the present work has been to improve our understanding of the cutting process and of cutting artefacts, and thus to find optimal conditions to minimise or prevent these artefacts. An improved model of the cutting process and redefinitions of cutting artefacts are proposed. Results obtained with CEMOVIS under these conditions are presented and compared with results obtained with conventional methods.

Relevance: 30.00%

Abstract:

Sleep is a complex behavior, both in its manifestation and in its regulation, that is common to almost all animal species studied thus far. Sleep is not a unitary behavior and has many different aspects, each of which is tightly regulated and influenced by both genetic and environmental factors. Despite its essential role for performance, health, and well-being, the genetic mechanisms underlying this complex behavior remain poorly understood. One important aspect of sleep concerns its homeostatic regulation, which ensures that levels of sleep need are kept within a range that still allows optimal functioning during wakefulness. Uncovering the genetic pathways underlying the homeostatic aspect of sleep is of particular importance because it could lead to insights concerning sleep's still elusive function, and it is therefore a main focus of current sleep research. In this chapter, we first give a definition of sleep homeostasis and describe the molecular genetics techniques that are used to examine it. We then provide a conceptual discussion of the problem of assessing a sleep homeostatic phenotype in various animal models. We finally highlight some of the studies, with a focus on clock genes and adenosine signaling molecules.

Relevance: 30.00%

Abstract:

Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we are facing many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations in each realization.

In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods make use of approximate flow simulation to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational cost and leads to an increase in accuracy and robustness of the uncertainty propagation.

The strategy explored in the first chapter consists in learning from a subset of realizations the relationship between proxy and exact curves. In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty. The individual correction of the proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications.

The concept of a functional error model is useful not only in the context of uncertainty propagation, but also, and maybe even more so, to perform Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy is coupled to an error model to provide an approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of three with respect to one-stage MCMC results. An open question remains: how do we choose the size of the learning set and identify the realizations that optimize the construction of the error model? This requires devising an iterative strategy to construct the error model, such that, as new flow simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply this methodology to a problem of saline intrusion in a coastal aquifer.
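A minimal sketch of the functional error model idea described above (illustrative, not the thesis code): proxy and exact responses are curves, principal component analysis of a small training set reduces their dimensionality, a linear regression maps proxy scores to exact scores, and the fitted model corrects the remaining proxy curves. The curve family, ensemble size and number of components are assumptions made for the example.

import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0, 1, 100)                          # time axis of the response curves

def exact_curve(a, b):
    # "Exact" breakthrough-like curve for a realization with parameters (a, b).
    return 1.0 / (1.0 + np.exp(-(t - a) / b))

def proxy_curve(a, b):
    # Cheaper proxy: biased and slightly distorted version of the exact curve.
    return 1.0 / (1.0 + np.exp(-(t - a - 0.05) / (1.3 * b))) + 0.02 * np.sin(8 * t)

# Ensemble of 200 realizations; only the first 30 are run with the exact model.
params = np.column_stack([rng.uniform(0.3, 0.7, 200), rng.uniform(0.03, 0.1, 200)])
proxy = np.array([proxy_curve(a, b) for a, b in params])
exact = np.array([exact_curve(a, b) for a, b in params])
train, test = np.arange(30), np.arange(30, 200)

def pca_fit(X, k):
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def scores(X, mean, comps):
    return (X - mean) @ comps.T

k = 3
p_mean, p_comps = pca_fit(proxy[train], k)
e_mean, e_comps = pca_fit(exact[train], k)

# Least-squares regression (with intercept) from proxy scores to exact scores.
A = np.column_stack([scores(proxy[train], p_mean, p_comps), np.ones(len(train))])
coef, *_ = np.linalg.lstsq(A, scores(exact[train], e_mean, e_comps), rcond=None)

# Correct the proxy curves of the realizations that were never run exactly.
A_test = np.column_stack([scores(proxy[test], p_mean, p_comps), np.ones(len(test))])
corrected = A_test @ coef @ e_comps + e_mean

print("mean abs error, proxy vs exact:    ", float(np.abs(proxy[test] - exact[test]).mean()))
print("mean abs error, corrected vs exact:", float(np.abs(corrected - exact[test]).mean()))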

Relevance: 30.00%

Abstract:

The number of qualitative research methods has grown substantially over the last twenty years, both in the social sciences and, more recently, in the health sciences. This growth came with questions about the quality criteria needed to evaluate such work, and numerous guidelines have been published. These guidelines contain many discrepancies, however, both in their vocabulary and in their construction, and many expert evaluators decry the absence of consensual and reliable evaluation tools. The authors present the results of an evaluation of 58 existing guidelines in 4 major health science fields (medicine and epidemiology; nursing and health education; social sciences and public health; psychology/psychiatry, research methods and organization) by expert users (article reviewers, experts allocating funds, editors, etc.). The results yield a toolbox of 12 consensual criteria with the definitions given by expert users. They also indicate in which disciplinary fields each type of criterion is considered more or less essential. Nevertheless, the authors highlight the limits of the criteria's comparability as soon as one focuses on their specific definitions. They conclude that each criterion in the toolbox must be explicated in order to reach a broader consensus and to identify definitions that are consensual across all the fields examined and easy to operationalize.

Relevance: 30.00%

Abstract:

The prescription of inappropriate medication (PIM) is a common public health problem. Mainly through the associated adverse drug events (ADE), it results in major morbidity and mortality, as well as increased healthcare utilization. The systematic review of prescribed medications has long appeared to be a solution for limiting PIM and the ADEs associated with such prescriptions. With this aim, the list of STOPP-START criteria, available since 2008, has proved attractive in its design as well as logical and easy to use. The initial version has just been updated and improved. After detailing all the improvements made to the 2008 version, we present the result of its adaptation into French by a group of French-speaking experts from Belgium, Canada, France, and Switzerland.

Relevance: 30.00%

Abstract:

Sleep problems among detainees are common, and appropriate evaluation and treatment remain challenging in correctional settings. However, this is not primarily a problem of resources; rather, it is to a great extent an issue of adequate training. Correctional health professionals need appropriate education regarding insomnia evaluation and management. Guidelines should be based on the principle of equivalence of care and should take into account all evidence from research in the community and in correctional settings. Educational material from outside prisons exists and should be made available to detainees and health professionals (Falloon et al., 2011; Sateia & Nowell, 2004). Priority should be given to changes in prison conditions and to nonpharmacological treatment. There is no evidence-based justification for replacing benzodiazepine (BZD) prescriptions with antipsychotics or antidepressants; in correctional settings, prescriptions of antipsychotics and antidepressants for sleep problems can increase risk owing to polypharmacy and a higher risk of suicide. Correctional physicians should monitor and document their evaluation and treatment practices concerning insomnia complaints to promote safe, evidence-based treatment.