Abstract:
Protein-coding genes evolve at different rates, and the influence of different parameters, from gene size to expression level, has been studied extensively. While in yeast gene expression level is the major causal factor of gene evolutionary rate, the situation is more complex in animals. Here we investigate these relations further, taking into account gene expression in different organs as well as indirect correlations between parameters. We used RNA-seq data from two large datasets, covering 22 mouse tissues and 27 human tissues. Over all tissues, evolutionary rate correlates only weakly with the level and breadth of expression. The strongest explanatory factors of purifying selection are GC content, expression in many developmental stages, and expression in brain tissues. While the main component of evolutionary rate is purifying selection, we also find tissue-specific patterns for sites under neutral evolution and for positive selection. We observe fast evolution of genes expressed in testis, but also in other tissues, notably liver; this is explained by weak purifying selection rather than by positive selection.
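Disentangling direct from indirect correlations of the kind discussed above is typically done with partial correlations: the association between two variables after a shared driver has been regressed out. The sketch below is a minimal illustration on synthetic data; the variable names echo the abstract (GC content, expression, evolutionary rate), but the numbers are invented and are not the study's.

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation of x and y after linearly regressing out z from both."""
    Z = np.column_stack([np.ones_like(z), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
gc = rng.normal(size=500)                 # shared driver (e.g. GC content)
expr = 0.8 * gc + rng.normal(size=500)    # "expression": driven by gc
rate = -0.8 * gc + rng.normal(size=500)   # "evolutionary rate": driven by gc

raw = np.corrcoef(expr, rate)[0, 1]       # strong negative raw correlation
partial = partial_corr(expr, rate, gc)    # near zero once gc is removed
```

Here the strong raw correlation between expression and rate vanishes once the shared driver is removed, which is the signature of an indirect correlation.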
Abstract:
Supplier selection procedures and the monitoring, evaluation, and measurement of supplier performance have grown rapidly in recent years. But how many companies know the added value produced by their own suppliers? And how can supplier added value be evaluated and measured within a company, and to what extent is this relevant? This thesis provides a tool and an answer to these and certain other related problems. The theoretical part of the thesis presents both how supplier added value arises and some possible methods for evaluating and measuring added value from the perspective of supplier assessment and measurement. In the empirical part, an alternative solution to the research problem is presented through a case company. Based on the thesis, it may arguably be said that evaluating and measuring supplier added value, particularly in a situation where the total number of suppliers and the cutting of the costs they incur are under consideration, is work worth investing in.
Abstract:
Organ transplantation remains the best available treatment for many forms of end-stage organ disease, with over 100,000 solid organ transplantations (SOT) performed worldwide every year.
Although the available immunosuppressive (IS) drugs are efficient in controlling acute immune activation and graft rejection, off-target side effects as well as long-term graft and patient survival remain a challenge in the clinic. Hence, innovative therapeutic approaches are needed to improve long-term outcome across immunological barriers. Based on extensive experimental data obtained over the last decade, it is tempting to consider immunotherapy using Treg, the natural suppressors of overt inflammatory responses, to promote transplantation tolerance. The first hurdle for the therapeutic use of Treg is their insufficient numbers in non-manipulated individuals, in particular when facing strong immune activation and expanding alloreactive effector cells. Because of the limitations associated with current protocols aiming at ex-vivo expansion or in vitro induction of Treg, the aim of the first part of this thesis was to determine the efficacy of direct in vivo expansion of Treg using the IL-2/JES6-1 immune complex (IL2c). We found that whilst IL2c-mediated Treg expansion alone prolonged the survival of fully MHC-mismatched skin grafts, its combination with short-term CD40L-CD40 co-stimulation blockade (anti-CD154/MR1), administered at the time of transplantation to inhibit T-cell activation, achieved robust long-term tolerance. This study also highlighted the importance of combining Treg-based therapies with the appropriate co-stimulation blockade, as a combination of IL2c and CD28-B7.1/2 co-stimulation blockade (CTLA-4 Ig) resulted only in slight prolongation of graft survival but not tolerance. The translation of tolerance induction therapies modelled in rodents into non-human primates or into clinical trials has seldom been successful. One main reason is the presence of pre-existing memory T and B cells due to acquired immunity in humans versus laboratory animals.
Hence, we tested whether IL2c+MR1 could promote graft survival in pre-sensitized mice. We found that in the presence of alloreactive memory T and B cells, IL2c+MR1 combination therapy could prolong MHC-mismatched skin graft survival in immunocompetent mice, but tolerance was lost compared to naïve recipients. The addition of anti-LFA-1 treatment, which prevents the trafficking of memory T cells, worked synergistically to further enhance graft survival significantly. However, late rejection mediated by activated/memory B cells and persistent donor-specific alloantibodies still occurred. Immunotherapeutic strategies targeting the activation of T cells are the cornerstone of current immunosuppressive management after SOT. Therefore, in the next part of this thesis we investigated paracaspase MALT1-dependent T-cell receptor signalling as a novel immunosuppressive strategy to control alloreactive T cells in transplantation. We observed that although inhibition of MALT1-dependent downstream T-cell signalling led to tolerance of minor H-mismatched skin grafts, it was not sufficient to regulate alloresponses against MHC mismatches and only prolonged graft survival. Furthermore, we investigated the potential of more selectively targeting the protease activity of MALT1. Constitutive inhibition of MALT1 protease activity in Malt1-ki mice was detrimental to tolerance induction, as it diminished Treg function and increased Th1 alloreactivity. However, when using a small peptide inhibitor of MALT1 proteolytic activity in vitro, we observed an attenuation of alloreactive T cells and sparing of the pre-existing Treg pool. This indicates that further investigation of the role of MALT1 signalling in the field of transplantation is required. Collectively, the findings of this thesis provide immunological mechanisms underlying novel therapeutic strategies for the promotion of tolerance in SOT.
Moreover, we highlight the importance of testing tolerance induction therapies in more physiological models with pre-existing alloreactive memory T and B cells.
Abstract:
International and intensifying competition has made companies' business environments increasingly complex and risk-prone. As companies focus ever more deeply on their own core competencies, the importance of managing external resources has grown. Deepening the division of labour aims at better flexibility, time management, and cost efficiency. The tasks and responsibilities of the procurement function are also shifting in a more proactive and risk-exposed direction. A modern procurement function must be able to identify, analyse, and manage risks arising from increasingly fragmented sources and in increasingly fragmented forms. Proactive procurement participates in the company's strategic planning and risk management. Procurement risks can be classified into the following ten risk classes: disruption risks, availability risks, price risks, inventory and schedule risks, technology risks, risks of confidential information leakage, quality risks, configuration risks, opportunism risks, and dependency risks. In this study, procurement risk management is examined from the perspective of sourcing strategy selection. One element of a sourcing strategy is the choice of supplier relationship and of the number of suppliers. Depending on the situation, better risk management can be pursued either through a collaborative strategy or through traditional competitive tendering. Collaborative strategies comprise partnerships and alliances of varying depth, in which the supplier relationship is close and confidential, genuinely benefiting both parties. From a risk-management perspective, a collaborative strategy is best suited to situations where there is long experience with the suppliers and the value produced by the purchased item is significant. A competitive-tendering strategy, i.e. sourcing from several suppliers, requires greater resources from the buying organization to be used effectively than a collaborative strategy does. Competitive tendering is often best suited to cases where the purchased items are standard and alternative sources of supply are abundant.
In addition, using several suppliers protects against interruptions in the material flow and increases knowledge of the supply market. The empirical part of the study examines how suitable different sourcing strategies are perceived to be as risk-management methods in the Finnish IVD industry. It also identifies the procurement risk classes perceived as most significant in the Finnish IVD industry, determines at which organizational level procurement risk management is mainly carried out, and establishes which methods are mainly used to analyse procurement risks.
Abstract:
"Live High-Train Low" (LHTL) training can alter the oxidative status of athletes. This study compared prooxidant/antioxidant balance responses following two LHTL protocols of the same duration and at the same living altitude of 2250 m in either normobaric (NH) or hypobaric (HH) hypoxia. Twenty-four well-trained triathletes underwent two 18-day LHTL protocols in a cross-over and randomized manner: living altitude (PIO2 = 111.9 ± 0.6 vs. 111.6 ± 0.6 mmHg in NH and HH, respectively), "natural" training altitude (~1000-1100 m), and training loads were precisely matched between the two protocols. Plasma levels of oxidative stress markers [advanced oxidation protein products (AOPP) and nitrotyrosine] and antioxidant markers [ferric-reducing antioxidant power (FRAP), superoxide dismutase (SOD) and catalase], NO metabolism end-products (NOx) and uric acid (UA) were determined before (Pre) and after (Post) the LHTL. Cumulative hypoxic exposure was lower during the NH (229 ± 6 h) than the HH (310 ± 4 h; P<0.01) protocol. Following the LHTL, the concentration of AOPP decreased (-27%; P<0.01) and nitrotyrosine increased (+67%; P<0.05) in HH only. FRAP decreased (-27%; P<0.05) after the NH protocol, whereas SOD and UA increased only following the HH protocol (SOD: +54%; P<0.01; UA: +15%; P<0.01). Catalase activity increased in the NH protocol only (+20%; P<0.05). These data suggest that 18 days of LHTL performed in either NH or HH differentially affect the oxidative status of athletes. The higher oxidative stress levels following the HH LHTL might be explained by the higher overall hypoxic dose and by different physiological responses between NH and HH.
Abstract:
From 6 to 8 November 1982, one of the most catastrophic flash-flood events on record struck the Eastern Pyrenees, affecting Andorra and also France and Spain, with rainfall accumulations exceeding 400 mm in 24 h, 44 fatalities and widespread damage. This paper aims to exhaustively document this heavy precipitation event and examines mesoscale simulations performed with the French Meso-NH non-hydrostatic atmospheric model. Large-scale simulations show the slowly evolving synoptic environment favourable for the development of a deep Atlantic cyclone, which induced a strong southerly flow over the Eastern Pyrenees. From the evolution of the synoptic pattern, four distinct phases have been identified during the event. The mesoscale analysis presents the second and third phases as the most intense in terms of rainfall accumulations and highlights the interaction of the moist and conditionally unstable flows with the mountains. The presence of a SW low-level jet (30 m s-1) around 1500 m also played a crucial role in focusing the precipitation over the exposed south slopes of the Eastern Pyrenees. Backward trajectories based on Eulerian on-line passive tracers indicate that orographic uplift was the main forcing mechanism that triggered and maintained the precipitating systems over the Pyrenees for more than 30 h. The moisture of the feeding flow came mainly from the Atlantic Ocean (7-9 g kg-1), and the role of the Mediterranean as a local moisture source was very limited (2-3 g kg-1) due to the high initial water vapour content of the parcels and their rapid passage over the basin along the Spanish Mediterranean coast (less than 12 h).
Abstract:
Substances emitted into the atmosphere by human activities in urban and industrial areas cause environmental problems such as air quality degradation, respiratory diseases, climate change, global warming, and stratospheric ozone depletion. Volatile organic compounds (VOCs) are major air pollutants, emitted largely by industry, transportation and households. Many VOCs are toxic, and some are considered to be carcinogenic, mutagenic, or teratogenic. A wide spectrum of VOCs is readily oxidized photocatalytically. Photocatalytic oxidation (PCO) over titanium dioxide may present a potential alternative to air treatment strategies currently in use, such as adsorption and thermal treatment, due to its advantageous activity under ambient conditions, although higher but still mild temperatures may also be applied. The objective of the present research was to disclose routes of chemical reactions, and to estimate the kinetics and the sensitivity of gas-phase PCO to reaction conditions, with respect to air pollutants containing heteroatoms in their molecules. Deactivation of the photocatalyst and restoration of its activity were also considered, to assess the practical feasibility of applying PCO to the treatment of air polluted with VOCs. UV-irradiated titanium dioxide was selected as a photocatalyst for its chemical inertness, non-toxic character and low cost. In the present work, Degussa P25 TiO2 photocatalyst was mostly used; platinized TiO2 was also examined in transient studies.
Experimental research into the PCO of the following VOCs was undertaken: methyl tert-butyl ether (MTBE), as the basic oxygenated motor fuel additive and thus a major non-biodegradable pollutant of groundwater; tert-butyl alcohol (TBA), as the primary product of MTBE hydrolysis and PCO; ethyl mercaptan (ethanethiol), as one of the reduced-sulphur pungent air pollutants of the pulp-and-paper industry; and methylamine (MA) and dimethylamine (DMA), as amino compounds often emitted by various industries. The PCO of the VOCs was studied in continuous-flow mode. The PCO of MTBE and TBA was also studied in transient mode, in which carbon dioxide, water, and acetone were identified as the main gas-phase products. The volatile products of thermal catalytic oxidation (TCO) of MTBE included 2-methyl-1-propene (2-MP), carbon monoxide, carbon dioxide and water; TBA decomposed to 2-MP and water. Continuous PCO of TBA proceeded faster in humid air than in dry air. MTBE oxidation, however, was less sensitive to humidity. The TiO2 catalyst was stable during continuous PCO of MTBE and TBA above 373 K, but gradually lost activity below 373 K; the catalyst could be regenerated by UV irradiation in the absence of gas-phase VOCs. Sulphur dioxide, carbon monoxide, carbon dioxide and water were identified as the ultimate products of PCO of ethanethiol, with acetic acid as an oxidation by-product. The limits of ethanethiol concentration and temperature at which the reactor performance remained stable indefinitely were established. The apparent reaction kinetics appeared to be independent of the reaction temperature within the studied limits, 373 to 453 K. The catalyst was completely and irreversibly deactivated by ethanethiol TCO. Volatile PCO products of MA included ammonia, nitrogen dioxide, nitrous oxide, carbon dioxide and water. Formamide was observed among the DMA PCO products, together with products similar to those of MA.
TCO of both substances resulted in the formation of ammonia, hydrogen cyanide, carbon monoxide, carbon dioxide and water. No deactivation of the photocatalyst was observed during multiple long-run experiments at the concentrations and temperatures used in the study. PCO of MA was also studied in the aqueous phase. Maximum efficiency was achieved in alkaline media, where MA exhibits high fugacity. Two mechanisms of aqueous PCO, decomposition to formate and ammonia, and oxidation of organic nitrogen directly to nitrite, lead ultimately to carbon dioxide, water, ammonia and nitrate; formate and nitrite were observed as intermediates. Part of the ammonia formed in the reaction was oxidized to nitrite and nitrate. This finding helped in better understanding the gas-phase PCO pathways. The PCO kinetic data for the VOCs fitted well to the monomolecular Langmuir-Hinshelwood (L-H) model, whereas the TCO kinetic behaviour matched a first-order process for the volatile amines and the L-H model for the others. It should be noted that both the L-H and first-order equations were only fits to the data, not mechanistic descriptions of the reaction kinetics. The dependence of the kinetic constants on temperature was established in the form of an Arrhenius equation.
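The monomolecular Langmuir-Hinshelwood model mentioned above has the form r = kKC/(1 + KC), and the Arrhenius equation k(T) = A·exp(-Ea/(RT)) describes the temperature dependence of the rate constant. Below is a minimal fitting sketch on synthetic, noise-free data; all parameter values are illustrative and are not taken from the thesis.

```python
import numpy as np

def lh_rate(C, k, K):
    """Monomolecular Langmuir-Hinshelwood rate: r = k*K*C / (1 + K*C)."""
    return k * K * C / (1.0 + K * C)

# synthetic, noise-free concentration/rate data (illustrative units and values)
C = np.linspace(0.1, 5.0, 30)
r = lh_rate(C, k=2.0, K=1.5)

# linearized form: 1/r = (1/(k*K)) * (1/C) + 1/k, so a straight-line fit
# of 1/r against 1/C recovers both constants
slope, intercept = np.polyfit(1.0 / C, 1.0 / r, 1)
k_fit = 1.0 / intercept        # recovers k ~ 2.0
K_fit = intercept / slope      # recovers K ~ 1.5

# Arrhenius dependence of the rate constant: k(T) = A * exp(-Ea / (R * T))
R = 8.314                      # gas constant, J mol^-1 K^-1
A, Ea = 1.0e5, 3.0e4           # illustrative pre-exponential factor and Ea (J/mol)
k_373 = A * np.exp(-Ea / (R * 373.0))
```

The linearization keeps the example dependency-free; in practice a direct nonlinear least-squares fit of the L-H form is usually preferred, since linearizing distorts the error structure of noisy data.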
Abstract:
Mountain regions worldwide are particularly sensitive to ongoing climate change. In the Swiss Alps specifically, temperature has increased twice as fast as in the rest of the Northern Hemisphere. Water temperature closely follows the annual air temperature cycle, severely impacting streams and freshwater ecosystems. In the last 20 years, brown trout (Salmo trutta L.) catch has declined by approximately 40-50% in many rivers in Switzerland. Increasing water temperature has been suggested as one of the most likely causes of this decline. Temperature has a direct effect on trout population dynamics through development and disease control, but can also impact dynamics indirectly via food-web interactions such as resource availability. We developed a spatially explicit modelling framework that allows spatial and temporal projections of trout biomass, using the Aare river catchment as a model system, in order to assess the spatial and seasonal patterns of trout biomass variation. Given that biomass varies seasonally depending on trout life-history stage, we developed seasonal biomass variation models for three periods of the year (autumn-winter, spring and summer). Because stream water temperature is a critical parameter for brown trout development, we first calibrated a model to predict water temperature as a function of air temperature, so that climate change scenarios could be applied later. We then built a model of trout biomass variation by linking water temperature to trout biomass measurements collected by electro-fishing at 21 stations from 2009 to 2011. The different modelling components of our framework had overall good predictive ability, and we could show a seasonal effect of water temperature on trout biomass variation.
Our statistical framework uses a minimal set of input variables, which makes it easily transferable to other study areas or fish species, but it could be improved by including effects of the biotic environment and the evolution of demographic parameters over time. Nevertheless, our framework remains informative for spatially highlighting where potential changes in water temperature could affect trout biomass. (C) 2015 Elsevier B.V. All rights reserved.
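The abstract does not state the functional form used to calibrate water temperature against air temperature; a common choice in stream temperature studies is a bounded logistic (Mohseni-type) relation. The sketch below only illustrates that idea, with invented parameter values:

```python
import numpy as np

def water_temp(air_t, alpha=25.0, beta=12.0, gamma=0.25, mu=1.0):
    """Logistic air-to-water temperature relation:
    Tw = mu + (alpha - mu) / (1 + exp(gamma * (beta - Ta))).
    mu and alpha bound the water temperature, beta is the inflection
    point, gamma controls the steepness. All values are illustrative."""
    return mu + (alpha - mu) / (1.0 + np.exp(gamma * (beta - air_t)))

air = np.array([-5.0, 5.0, 12.0, 20.0, 30.0])   # air temperatures (deg C)
water = water_temp(air)                          # bounded, monotone response
```

Because the relation saturates at high and low air temperatures, it behaves more plausibly than a linear regression when extrapolating to climate-change scenarios, which is precisely the intended use described above.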
Abstract:
Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we are facing many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations in each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization.
Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational costs and leads to an increase in accuracy and robustness of the uncertainty propagation. The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between proxy and exact curves. In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty.
The individual correction of the proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only in the context of uncertainty propagation, but also, and perhaps even more so, for Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, in which the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of each proposal. In the third part of the thesis, a proxy is coupled to an error model to provide an approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of three with respect to one-stage MCMC results. An open question remains: how to choose the size of the learning set and identify the realizations that optimize the construction of the error model. This requires an iterative strategy in which, as new flow simulations are performed, the error model is improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply this methodology to a saline intrusion problem in a coastal aquifer.
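The two-stage scheme described in this abstract can be sketched in a few lines. The likelihood functions and the learned error correction below are illustrative placeholders (simple quadratic stand-ins), not the thesis's actual flow models; the point is only the structure of the cheap screening stage followed by the rare exact evaluation:

```python
import math
import random

random.seed(0)

# Illustrative stand-ins (assumptions, not the thesis's models): the
# "exact" likelihood represents a costly flow simulation, the proxy a
# cheap but biased approximation, and error_model a learned correction.
def exact_log_like(x):
    return -0.5 * x * x

def proxy_log_like(x):
    return -0.5 * x * x + 0.3 * x

def error_model(x):
    return -0.28 * x  # deliberately imperfect learned correction

def approx_log_like(x):
    return proxy_log_like(x) + error_model(x)

def two_stage_mcmc(n_steps, step=1.0):
    """Two-stage Metropolis: proposals are first screened with the cheap
    corrected proxy; only survivors trigger an exact evaluation."""
    x, exact_calls, chain = 0.0, 0, []
    for _ in range(n_steps):
        cand = x + random.uniform(-step, step)
        # Stage 1: cheap screening with the corrected proxy.
        a1 = min(1.0, math.exp(approx_log_like(cand) - approx_log_like(x)))
        if random.random() < a1:
            # Stage 2: exact evaluation; the ratio divides out the
            # stage-1 filter so the exact target is preserved.
            exact_calls += 1
            a2 = min(1.0, math.exp(exact_log_like(cand) - exact_log_like(x)
                                   + approx_log_like(x) - approx_log_like(cand)))
            if random.random() < a2:
                x = cand
        chain.append(x)
    return chain, exact_calls

chain, calls = two_stage_mcmc(2000)
print(len(chain), calls)  # the exact model is evaluated on a subset only
```

Because proposals rejected at stage 1 never reach the exact model, the number of expensive evaluations is smaller than the chain length, which is the source of the computational saving the abstract describes.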
Resumo:
This article introduces EsPal: a Web-accessible repository containing a comprehensive set of properties of Spanish words. EsPal is based on an extensible set of data sources, beginning with a 300 million token written database and a 460 million token subtitle database. Properties available include word frequency, orthographic structure and neighborhoods, phonological structure and neighborhoods, and subjective ratings such as imageability. Subword structure properties are also available in terms of bigrams and trigrams, bi-phones, and bi-syllables. Lemma and part-of-speech information and their corresponding frequencies are also indexed. The website enables users to either upload a set of words to receive their properties, or to receive a set of words matching constraints on the properties. The properties themselves are easily extensible and will be added over time as they become available. It is freely available from the following website: http://www.bcbl.eu/databases/espal
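As an illustration of the kind of subword indexing EsPal describes (bigram and trigram properties derived from a corpus), the sketch below counts character n-grams over a tiny invented token list; the function names and corpus are hypothetical and not EsPal's actual pipeline:

```python
from collections import Counter

def subword_ngrams(word, n):
    """Character n-grams of a word, e.g. bigrams of 'casa' -> ca, as, sa."""
    return [word[i:i + n] for i in range(len(word) - n + 1)]

def ngram_frequencies(corpus_tokens, n):
    """Count character n-grams over a token list, so each n-gram is
    weighted by how often its word occurs in the corpus."""
    counts = Counter()
    for token in corpus_tokens:
        counts.update(subword_ngrams(token.lower(), n))
    return counts

# Invented toy corpus for illustration.
tokens = ["casa", "cosa", "casa", "sal"]
bigrams = ngram_frequencies(tokens, 2)
print(bigrams["ca"])  # 'casa' occurs twice, so "ca" is counted twice
```

A real resource like EsPal additionally normalizes such counts into per-million frequencies and indexes them alongside lemma and part-of-speech information.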
Resumo:
Reporting is an integral part of a company's daily operations. The content and form of reporting vary with the organizational level, from monitoring daily operations to monthly results reporting. Reporting can be implemented through the operational systems, or, as an increasingly popular alternative nowadays, through centralized reporting. The acquisition of a new reporting system is often an investment that concerns the entire company. If the reporting system is meant to serve both operational reporting and management needs, it must adapt to many purposes. At the outset it is important to define the information needs and the goals of the project, without neglecting risk and project management or investment calculations. If reports are also delivered outside the company, possible regulations and information security aspects must be taken into account. The company's practices and processes should also be examined critically before the system acquisition; this may reveal new reporting targets, or practices may be reorganized to achieve the best way of working. When acquiring a reporting system, companies often turn to an external software vendor, who integrates and tailors the system to the company's own needs. A reporting system acquisition project does not end at deployment: from the start of the project, the by far longest phase of the system's life cycle, namely use and maintenance, must also be taken into account. A reporting system, like most other information systems, is never finished, since needs and practices change over time and users' awareness grows.
Resumo:
BACKGROUND: Evidence regarding the different treatment options for status epilepticus (SE) in adults is scarce. Large randomized trials cover only one treatment at the early stage and suggest the superiority of benzodiazepines over placebo, of intravenous lorazepam over intravenous diazepam or intravenous phenytoin alone, and of intramuscular midazolam over intravenous lorazepam. However, many patients will not be treated successfully with the first treatment step. A large randomized trial covering the treatment of established status (ESETT) has recently been funded by the NIH and will not start before 2015, with results expected in 2018; a trial on the treatment of refractory status with general anesthetics was terminated early due to insufficient recruitment. Therefore, a prospective multicenter observational registry was set up; this may help in clinical decision-making until results from randomized trials are available. METHODS/DESIGN: SENSE is a prospective, multicenter registry for patients treated for SE. The primary objective is to document patient characteristics, treatment modalities and in-house outcome of consecutive adults admitted for SE treatment in each of the participating centres, and to identify predictors of outcome. Pre-treatment, treatment-related and outcome variables are documented systematically. To allow for meaningful multivariate analysis in the patient subgroups with refractory SE, a cohort size of 1000 patients is targeted. DISCUSSION: The results of the study will provide information about the risks and benefits of specific treatment steps in different patient groups with SE at different points in time. Thus, it will support clinical decision-making and, furthermore, be helpful in the planning of treatment trials. TRIAL REGISTRATION: DRKS00000725.
Resumo:
There is a tendency to overlook the many fields that attracted Salvador Dalí, one of the most controversial figures of the 20th century. The Catalan artist was interested in painting, sculpture, engraving, opera, literature, advertising, dance, and even the theatre of life. Dalí was also a theoretician, constantly examining the processes of creation and knowledge: over the years he developed an imagery which, though changing, remained coherent, giving his body of work an unexpected unity.
Resumo:
The number of qualitative research methods has grown substantially over the last twenty years, both in the social sciences and, more recently, in the health sciences. This growth came with questions about the quality criteria needed to evaluate such work, and numerous guidelines were published. These guidelines, however, contain many discrepancies, both in their vocabulary and in their construction, and many expert evaluators decry the absence of consensual and reliable evaluation tools. The authors present the results of an evaluation of 58 existing guidelines in 4 major health science fields (medicine and epidemiology; nursing and health education; social sciences and public health; psychology/psychiatry, research methods and organization) by expert users (article reviewers, experts allocating funds, editors, etc.). The results propose a toolbox of 12 consensual criteria, with the definitions given by expert users, and indicate in which disciplinary fields each type of criterion is considered more or less essential. Nevertheless, the authors highlight the limited comparability of the criteria once their specific definitions are examined. They conclude that each criterion in the toolbox must be made explicit in order to reach a broader consensus and to identify definitions that are shared across all the fields examined and easily operational.
Resumo:
BACKGROUND: Habitual walking speed predicts many clinical conditions later in life, but it declines with age. However, which particular exercise intervention can minimize the age-related gait speed loss is unclear. PURPOSE: Our objective was to determine the effects of strength, power, coordination, and multimodal exercise training on healthy old adults' habitual and fast gait speed. METHODS: We performed a computerized systematic literature search in PubMed and Web of Knowledge from January 1984 up to December 2014. Search terms included 'resistance training', 'power training', 'coordination training', 'multimodal training', and 'gait speed' (outcome term). Inclusion criteria were articles available in full text, publication within the past 30 years, human species, journal articles, clinical trials, randomized controlled trials, English as publication language, and subject age ≥65 years. The methodological quality of all eligible intervention studies was assessed using the Physiotherapy Evidence Database (PEDro) scale. We computed weighted average standardized mean differences of the intervention-induced adaptations in gait speed using a random-effects model and tested for overall and individual intervention effects relative to no-exercise controls. RESULTS: A total of 42 studies (mean PEDro score of 5.0 ± 1.2) were included in the analyses (2495 healthy old adults; age 74.2 years [64.4-82.7]; body mass 69.9 ± 4.9 kg, height 1.64 ± 0.05 m, body mass index 26.4 ± 1.9 kg/m(2), and gait speed 1.22 ± 0.18 m/s). The search identified only one power training study, therefore the subsequent analyses focused only on the effects of resistance, coordination, and multimodal training on gait speed. The three types of intervention improved gait speed in the three experimental groups combined (n = 1297) by 0.10 m/s (±0.12) or 8.4 % (±9.7), with a large effect size (ES) of 0.84. 
Resistance (24 studies; n = 613; 0.11 m/s; 9.3 %; ES: 0.84), coordination (eight studies; n = 198; 0.09 m/s; 7.6 %; ES: 0.76), and multimodal training (19 studies; n = 486; 0.09 m/s; 8.4 %; ES: 0.86) each increased gait speed significantly and to a similar extent. CONCLUSIONS: Commonly used exercise interventions can functionally and clinically increase habitual and fast gait speed and help slow the loss of gait speed or delay its onset.
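The pooling step this abstract describes, weighted standardized mean differences under a random-effects model, is commonly implemented with the DerSimonian-Laird estimator. The sketch below shows that estimator; the effect sizes and variances are invented for illustration and are not the review's data:

```python
import math

def dersimonian_laird(effects, variances):
    """Pooled effect under the DerSimonian-Laird random-effects model."""
    w = [1.0 / v for v in variances]                 # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                    # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]   # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, tau2

# Hypothetical standardized mean differences and their variances.
effects = [0.5, 0.9, 1.1]
variances = [0.02, 0.04, 0.05]
pooled, se, tau2 = dersimonian_laird(effects, variances)
print(round(pooled, 2), round(se, 2))
```

When the observed heterogeneity Q does not exceed its degrees of freedom, tau2 is truncated at zero and the estimate reduces to the fixed-effect result; otherwise, as here, the between-study variance inflates each study's variance and flattens the weights.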