872 results for Self-sustainable Successful Models
Abstract:
Sustainable resource use is one of the most important environmental issues of our times. It is closely related to discussions on the 'peaking' of various natural resources serving as energy sources, agricultural nutrients, or metals indispensable in high-technology applications. Although the peaking theory remains controversial, it is commonly recognized that a more sustainable use of resources would alleviate the negative environmental impacts related to resource use. In this thesis, sustainable resource use is analysed from a practical standpoint, through several different case studies. Four of these case studies relate to resource metabolism in the Canton of Geneva in Switzerland: the aim was to model the evolution of chosen resource stocks and flows in the coming decades. The studied resources were copper (a bulk metal), phosphorus (a vital agricultural nutrient), and wood (a renewable resource). In addition, the case of lithium (a critical metal) was analysed briefly and qualitatively, from an electric mobility perspective. Beyond the Geneva case studies, this thesis includes a case study on the sustainability of space life support systems, i.e. systems whose aim is to provide the crew of a spacecraft with the necessary metabolic consumables over the course of a mission. Sustainability was again analysed from a resource use perspective. In this case study, the functioning of two different types of life support systems, ARES and BIORAT, was evaluated and compared; these systems represent, respectively, physico-chemical and biological life support systems. Space life support systems could in fact be used as a kind of 'laboratory of sustainability', given that they represent closed and relatively simple systems compared to complex and open terrestrial systems such as the Canton of Geneva. The analysis method chosen for the Geneva case studies was dynamic material flow analysis: dynamic material flow models were constructed for copper, phosphorus, and wood. Besides a baseline scenario, various alternative scenarios (notably involving increased recycling) were also examined. For the space life support systems, the methodology of material flow analysis was also employed, but as the available data on the dynamic behaviour of the systems was insufficient, only static simulations could be performed. The results of the case studies in the Canton of Geneva show the following. Were resource use to follow population growth, resource consumption would be multiplied by nearly 1.2 by 2030 and by 1.5 by 2080. A complete transition to electric mobility would be expected to increase copper consumption per capita only slightly (+5%), while the lithium demand in cars would increase 350-fold. Phosphorus imports could be decreased by recycling sewage sludge or human urine; however, the health and environmental impacts of these options have yet to be studied. Increasing wood production in the Canton would not significantly decrease the dependence on wood imports, as the Canton's production represents only 5% of total consumption. In the comparison of the space life support systems, BIORAT outperforms ARES in resource use but not in energy use. However, as the systems are dimensioned very differently, it remains questionable whether they can be compared outright.
In conclusion, the use of dynamic material flow analysis can provide useful information for policy makers and strategic decision-making; however, uncertainty in reference data greatly influences the precision of the results. Space life support systems constitute an extreme case of resource-using systems; nevertheless, it is not clear how their example could be of immediate use to terrestrial systems.
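This record describes dynamic material flow analysis only verbally. As a purely illustrative aid, the sketch below implements a minimal dynamic stock-and-flow model of the kind named above; the population growth rate, per-capita use, lifetime and recycling rate are invented placeholders, not the thesis's calibrated values.

```python
# Minimal dynamic stock-and-flow sketch: one material stock driven by
# population-scaled inflow and lifetime-based outflow, with recycling
# reducing net imports. All parameter values are invented placeholders.

def simulate(years=70, pop0=500_000, pop_growth=0.006,
             use_per_capita_kg=10.0, lifetime_years=30.0,
             recycling_rate=0.1):
    stock_t = 0.0          # in-use stock, tonnes
    history = []
    pop = float(pop0)
    for year in range(years):
        inflow = pop * use_per_capita_kg / 1000.0   # tonnes per year
        outflow = stock_t / lifetime_years          # simple leaching model
        stock_t += inflow - outflow
        net_import = inflow - recycling_rate * outflow
        history.append((year, stock_t, net_import))
        pop *= 1.0 + pop_growth
    return history

baseline = simulate()
recycled = simulate(recycling_rate=0.6)
print(f"in-use stock after 70 y: {baseline[-1][1]:,.0f} t")
print(f"net import in year 70: {baseline[-1][2]:,.0f} t (baseline) vs "
      f"{recycled[-1][2]:,.0f} t (60% recycling)")
```

Comparing the baseline against a high-recycling run mirrors the scenario logic of the abstract: recycling lowers net imports even though the in-use stock trajectory is unchanged.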
Abstract:
Métropolisation, morphologie urbaine et développement durable. Transformations urbaines et régulation de l'étalement : le cas de l'agglomération lausannoise [Metropolization, urban morphology and sustainable development. Urban transformations and the regulation of sprawl: the case of the Lausanne agglomeration].
This dissertation forms part of a strategic analysis aiming at specifying the links between knowledge, expertise and political decision. The fundamental hypothesis directing this study assumes that the urban dynamics that have characterized the past thirty years signify a transformation of the morphogenetic principle of agglomerations' spatial development, one that results in a worsening of their ecological balance and of city dwellers' quality of life. The environmental implications linked to urban changes, and particularly to changes in urban form, constitute an ever greater share of research into sustainable urban planning solutions. In this context, urban planning becomes a mode of action and an essential component of public policies aiming at local and global sustainable development. These patterns of spatial development indisputably emerge at the heart of environmental issues. If the concept of sustainable development provides us with a new understanding of territories and their transformations, by arguing in favor of the compact city model and its corollary, densification, its concretization remains at issue, especially in terms of urban planning and of the urban development strategies allowing the appropriate implementation of the solutions offered.
Thus, this study tries to answer a certain number of questions: what validity should be granted to the model of the dense city? Is densification an adequate answer? If so, under what terms? What are the sustainable alternatives to urban sprawl in terms of planning strategies? Should densification really be pursued, or should we simply try to contain dispersion? Our main objective is ultimately to determine the directions and urban contents of public policies aiming at regulating urban sprawl, to validate the feasibility of these principles and to define the conditions of their implementation in the case of one agglomeration. Once the Lausanne agglomeration had been chosen as the experimentation field, three complementary approaches proved essential to this study: 1. a theoretical approach aiming at defining an interdisciplinary conceptual framework for analysing the urban phenomenon in its relation to sustainable development, linking urban dynamics, urban form and sustainable development; 2. a methodological approach proposing simple and effective tools for analyzing and describing new urban morphologies for a better management of the urban environment and of urban planning practices; 3. a pragmatic approach aiming at deepening reflection on urban sprawl by switching from a descriptive approach to the consequences of the new urban dynamics to an operational approach aiming at identifying possible avenues of action respecting the principles of sustainable development. This analysis provided us with three major results, allowing us to define a strategy to curtail urban sprawl. First, if densification is accepted as a strategic objective of urban planning, the model of the dense city cannot be applied without taking other urban planning objectives into consideration. Densification alone does not suffice to reduce the ecological impact of the city and improve the quality of life of its dwellers. The search for a more sustainable urban form depends on a multitude of factors and synergy effects. Reducing the negative effects of urban sprawl requires the implementation of integrated and concerted urban policies, for example encouraging qualified densification as the outcome of a purposeful process, integrating and developing collective forms of transportation, and even more so the pedestrian scale, with urban planning, and systematically integrating diversity through the physical and social dimensions of the territory. Second, the future of such sprawling territories is not fixed. Our research on the ground revealed an evolution in modes of habitat related to ways of life, work organization and mobility that suggests the possibility of the return of a part of the population to city centers (the end of the rule of the model of the individual home). Thus, the diagnosis and the search for effective and sustainable solutions cannot be conceived of independently of the needs of the inhabitants and of the behavior of the actors behind the production of the built territory. In this perspective, any urban program must necessarily be based upon knowledge of the population's wishes. Third, the successful implementation of a global policy to control urban sprawl's negative effects is highly dependent on the adaptation of the property supply to the demand for new habitat models satisfying both the necessity of controlling urbanization costs (economic, social, environmental) and households' emerging aspirations. These results allowed us to define a strategy to curtail urban sprawl.
Its feasibility and conditions of implementation were tested on the territory of the Lausanne agglomeration.
Abstract:
OBJECTIVES: This study examines the relationship between self-perception of aging and vulnerability to adverse outcomes in adults aged 65-70 years, using data from a cohort of 1,422 participants in Lausanne, Switzerland. METHODS: A positive or negative score of perception of aging was established using the Attitudes Toward Own Aging subscale, comprising 5 items of the Philadelphia Geriatric Center Morale Scale. Falls, hospitalizations, and difficulties in basic and instrumental activities of daily living (ADL) collected in the first 3 years of follow-up were considered adverse outcomes. The relationships between perception and outcomes were evaluated using multiple logistic regression models adjusting for chronic medical conditions, depressive feelings, living arrangement, and socioeconomic characteristics. RESULTS: The strongest associations of self-perception of aging with outcomes were observed for basic and instrumental ADL. Associations with falls and hospitalizations were not consistent and could be explained by health characteristics. CONCLUSIONS: A negative self-perception of aging is an indicator of risk for future disability in ADL. Factors such as low economic status, living alone, multiple chronic medical conditions, and depressive feelings contribute to a negative self-perception of aging but do not explain the relationship with incident ADL disability.
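The METHODS passage above names multiple logistic regression with covariate adjustment; a minimal sketch of that estimation step follows, on simulated data with hypothetical column names (the cohort's actual variables are not given in this record).

```python
# Sketch of the adjustment described above: logistic regression of an
# adverse outcome on aging self-perception plus covariates.
# The data are simulated stand-ins; all column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1422
df = pd.DataFrame({
    "negative_perception": rng.integers(0, 2, n),
    "chronic_conditions": rng.poisson(1.5, n),
    "depressive_feelings": rng.integers(0, 2, n),
    "lives_alone": rng.integers(0, 2, n),
    "low_income": rng.integers(0, 2, n),
})
logit_p = -2 + 0.8 * df["negative_perception"] + 0.3 * df["chronic_conditions"]
df["adl_difficulty"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit(
    "adl_difficulty ~ negative_perception + chronic_conditions"
    " + depressive_feelings + lives_alone + low_income", data=df).fit()
print(np.exp(model.params).round(2))  # adjusted odds ratios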
Abstract:
The increasing interest aroused by more advanced forecasting techniques, together with the requirement for more accurate forecasts of tourism demand at the destination level due to the constant growth of world tourism, has led us to evaluate the forecasting performance of neural modelling relative to that of time series methods at a regional level. Seasonality and volatility are important features of tourism data, which makes it a particularly favourable context in which to compare the forecasting performance of linear models to that of nonlinear alternative approaches. Pre-processed official statistical data on overnight stays and tourist arrivals from all the different countries of origin to Catalonia from 2001 to 2009 is used in the study. When comparing the forecasting accuracy of the different techniques for different time horizons, autoregressive integrated moving average models outperform self-exciting threshold autoregressions and artificial neural network models, especially for shorter horizons. These results suggest that there is a trade-off between the degree of pre-processing and the accuracy of the forecasts obtained with neural networks, which are more suitable in the presence of nonlinearity in the data. In spite of the significant differences between countries, which can be explained by different patterns of consumer behaviour, we also find that forecasts of tourist arrivals are more accurate than forecasts of overnight stays.
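As an illustration of the model comparison described above, the sketch below pits a seasonal ARIMA against a small lag-based neural network on a simulated monthly series and compares their out-of-sample MAPE; the orders, architecture and data are placeholders, not those of the study.

```python
# Illustrative ARIMA-vs-ANN comparison on a simulated seasonal series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)
t = np.arange(120)
y = 100 + 0.3 * t + 15 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 3, 120)
train, test = y[:108], y[108:]

# Seasonal ARIMA benchmark (orders are placeholders)
arima = ARIMA(train, order=(1, 1, 1), seasonal_order=(1, 0, 0, 12)).fit()
arima_fc = arima.forecast(steps=12)

# Neural network benchmark: 12 lagged values as inputs, recursive forecast
lags = 12
X = np.array([y[i - lags:i] for i in range(lags, 108)])
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
ann.fit(X, train[lags:])
window = list(train[-lags:])
ann_fc = []
for _ in range(12):
    pred = ann.predict(np.array(window[-lags:]).reshape(1, -1))[0]
    ann_fc.append(pred)
    window.append(pred)

print("MAPE ARIMA:", mean_absolute_percentage_error(test, arima_fc))
print("MAPE ANN  :", mean_absolute_percentage_error(test, ann_fc))
```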
Abstract:
The current study aimed to explore the validity of an adaptation into French of the self-rated form of the Health of the Nation Outcome Scales for Children and Adolescents (F-HoNOSCA-SR) and to test its usefulness in routine clinical use. One hundred and twenty-nine patients admitted to two inpatient units were asked to participate in the study. One hundred and seven patients filled out the F-HoNOSCA-SR (for a subsample (N=17), on two occasions one week apart) and the Strengths and Difficulties Questionnaire (SDQ). In addition, clinicians completed the clinician-rated form of the HoNOSCA (HoNOSCA-CR, N=82). Reliability analyses (split-half coefficient, item response theory (IRT) models, and intraclass correlations (ICC) between the two occasions) showed that the F-HoNOSCA-SR provides reliable measures. The concurrent validity, assessed by correlating the F-HoNOSCA-SR and the SDQ, revealed good convergent validity of the instrument. Analyses of the relationship between the F-HoNOSCA-SR and the HoNOSCA-CR revealed weak but significant correlations. Comparing the F-HoNOSCA-SR and the HoNOSCA-CR with paired-sample t-tests revealed a higher score for the self-rated version. The F-HoNOSCA-SR thus provides reliable measures and captures complementary information when used together with the HoNOSCA-CR.
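For readers unfamiliar with the reliability statistics listed above, here is a minimal sketch of two of them, split-half reliability with the Spearman-Brown correction and the paired t-test between self- and clinician-rated totals, on simulated item scores (the item count and scoring scheme are assumptions, not the instrument's specification).

```python
# Split-half reliability (Spearman-Brown) and paired t-test on
# simulated scores; 13 items and 0-4 scoring are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
items = rng.integers(0, 5, size=(107, 13)).astype(float)

odd, even = items[:, 0::2].sum(axis=1), items[:, 1::2].sum(axis=1)
r_half = stats.pearsonr(odd, even)[0]
split_half = 2 * r_half / (1 + r_half)  # Spearman-Brown correction
print(f"split-half reliability: {split_half:.2f}")

self_rated = items.sum(axis=1)[:82]
clinician = self_rated - rng.normal(2, 4, 82)  # simulated lower CR totals
t, p = stats.ttest_rel(self_rated, clinician)
print(f"paired t-test: t = {t:.2f}, p = {p:.3f}")
```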
Abstract:
Accurate perception of the temporal order of sensory events is a prerequisite for numerous functions ranging from language comprehension to motor coordination. We investigated the spatio-temporal brain dynamics of auditory temporal order judgment (aTOJ) using electrical neuroimaging analyses of auditory evoked potentials (AEPs) recorded while participants completed a near-threshold task requiring spatial discrimination of left-right and right-left sound sequences. AEPs to sound pairs modulated topographically as a function of aTOJ accuracy over the 39-77 ms post-stimulus period, indicating the engagement of distinct configurations of brain networks during early auditory processing stages. Source estimations revealed that accurate and inaccurate performance were linked to activity in bilateral posterior sylvian regions (PSR). However, activity within left, but not right, PSR predicted behavioral performance, suggesting that left PSR activity during early encoding of pairs of auditory spatial stimuli is critical for the perception of their order of occurrence. Correlation analyses of source estimations further revealed that activity between left and right PSR was significantly correlated in the inaccurate but not the accurate condition, indicating that aTOJ accuracy depends on the functional decoupling between homotopic PSR areas. These results support a model of temporal order processing wherein behaviorally relevant temporal information, i.e. a temporal 'stamp', is extracted within the early stages of cortical processing within left PSR but critically modulated by inputs from right PSR. We discuss our results with regard to current models of temporal order processing, namely gating and latency mechanisms.
Abstract:
Introduction: Our institution (a university hospital) encourages physical activity for health through various popular sporting events in the city of Lausanne, the biggest of which is a road race of 2, 4, 10 and 20 km. Objective: To create an efficient and sustainable training program in preparation for the race for a group of motivated hospital employees without any prior experience of structured training, and to identify the benefits and limitations encountered. Methods: Subjects of various fitness levels were recruited by advertisement and agreed to undergo laboratory and field testing before a 12-week, three-sessions-per-week running program based on maximal aerobic speed (MAS, 30/30 s intervals), running technique exercises and endurance training. The interval session was the only supervised one. The goal was the 10 km for 11 subjects and the 20 km for 6 subjects. Results: A group of 17 subjects (7 male, 10 female), mean age 36.6±7.3 years, VO2max 44.0±5.5 ml/kg/min, field-test interval MAS 15.1±2.4 km/h, started the program. Two were lost to (skiing) injuries. Adherence to the interval sessions was excellent, although three weekly training sessions proved difficult for most subjects. Race performance was satisfying for all: 6/7 returning subjects improved their running time from the previous year, and of the first-time participants, 7/8 completed the race satisfyingly while one did not finish because of sinusitis. A repeat MAS field test was available for 6 subjects, who improved by 5.9% (p<0.01). Subjectively, all participants were very satisfied with their improvement, the interaction with colleagues from various professions, and their sense of achievement and confidence. Conclusions: Implementing a structured training program for recreational or non-athletes can be very successful in building self-confidence, improving the working environment inside a hospital facility, and, obviously, improving physical fitness and athletic performance. Above all, it should encourage health institutions to promote the health of their own employees through physical activity, which allows people to connect through sport. Subjects in this study now tend to encourage other employees to be more active and are eager for further advice and continued offers of physical activities, benefiting both themselves and the institution through better efficiency at work and the lower absenteeism common among more active people.
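As a worked example of the MAS-based 30/30 prescription mentioned in the Methods, the snippet below converts the reported mean field-test MAS into a per-repetition target distance; the 100% MAS intensity is an assumption for illustration, as actual prescriptions vary.

```python
# Worked example of a 30/30 interval target from maximal aerobic speed.
# The intensity fraction is illustrative, not the program's prescription.
mas_kmh = 15.1                       # mean field-test MAS reported above
intensity = 1.00                     # fraction of MAS for the 30 s work bout
speed_ms = mas_kmh * intensity * 1000 / 3600
rep_distance = speed_ms * 30         # metres covered in each 30 s effort
print(f"30 s work bout at {intensity:.0%} MAS: {rep_distance:.0f} m "
      f"({speed_ms:.2f} m/s)")
```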
Abstract:
The present study tests the relationships between three frequently used personality models as evaluated by the Temperament and Character Inventory-Revised (TCI-R), the Neuroticism Extraversion Openness Five Factor Inventory-Revised (NEO-FFI-R), and the Zuckerman-Kuhlman Personality Questionnaire-50-Cross-Cultural (ZKPQ-50-CC). The results were obtained with a sample of 928 volunteer subjects from the general population aged between 17 and 28 years. Frequency distributions and alpha reliabilities for the three instruments were acceptable. Correlational and factorial analyses showed that several scales in the three instruments share an appreciable amount of common variance. Five factors emerged from a principal components analysis. The first factor comprised A (Agreeableness), Co (Cooperativeness) and Agg-Host (Aggressiveness-Hostility), with secondary loadings on C (Conscientiousness) and SD (Self-Directedness) from other factors. The second factor was composed of N (Neuroticism), N-Anx (Neuroticism-Anxiety), HA (Harm Avoidance) and SD (Self-Directedness). The third factor comprised Sy (Sociability), E (Extraversion), RD (Reward Dependence), ImpSS (Impulsive Sensation Seeking) and NS (Novelty Seeking). The fourth factor comprised Ps (Persistence), Act (Activity), and C, whereas the fifth and last factor was composed of O (Openness) and ST (Self-Transcendence). Confirmatory factor analyses indicate that the scales in each model are highly interrelated and define the specified latent dimensions well. Similarities and differences between these three instruments are further discussed.
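The factor solution above comes from a principal components analysis of the scale scores; a minimal sketch of that step on simulated data is shown below, using the scale abbreviations from the abstract.

```python
# PCA of standardized personality scale scores, as described above.
# Scale names follow the abstract; the scores here are simulated.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

scales = ["A", "Co", "AggHost", "N", "NAnx", "HA", "SD",
          "Sy", "E", "RD", "ImpSS", "NS", "Ps", "Act", "C", "O", "ST"]
rng = np.random.default_rng(0)
scores = pd.DataFrame(rng.normal(size=(928, len(scales))), columns=scales)

pca = PCA(n_components=5)
pca.fit(StandardScaler().fit_transform(scores))
loadings = pd.DataFrame(pca.components_.T, index=scales,
                        columns=[f"F{i + 1}" for i in range(5)])
print(loadings.round(2))  # inspect which scales load on which factor
```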
Abstract:
AIMS: Many studies have suggested a close relationship between alcohol use disorder (AUD) and major depressive disorder (MDD). This study aimed to test whether the relationship between self-reported AUD and MDD was artificially strengthened by the diagnosis of MDD. This association was tested by comparing the relationships between alcohol use and AUD for depressive and non-depressive people. METHODS: As part of the Cohort Study on Substance Use Risk Factors, 4352 male Swiss alcohol users in their early twenties answered questions concerning their alcohol use, AUD and MDD at two time points. Generalized linear models for cross-sectional and longitudinal associations were calculated. RESULTS: Cross-sectionally, depressive participants reported a higher number of AUD symptoms (β = 0.743, P < 0.001) than non-depressive participants. Moreover, there was an interaction (β = -0.204, P = 0.001): the relationship between alcohol use and AUD was weaker for depressive participants than for non-depressive participants. Longitudinally, there were almost no significant relationships between MDD at baseline and AUD at follow-up, but the interaction was still significant (β = -0.249, P < 0.001). CONCLUSION: MDD thus appeared to be a confounding variable in the relationship between alcohol use and AUD, and self-reported measures of AUD seemed to be overestimated by depressive people. This result calls into question the accuracy of self-reported measures of substance use disorders. Furthermore, it adds to the emerging debate about the usefulness of substance use disorder as a concept when heavy substance use itself appears to be a sensitive and reliable indicator.
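The key statistical device above is the alcohol use × MDD interaction in a generalized linear model. The sketch below reproduces that specification on simulated data; the Poisson family and all variable names are assumptions, the family chosen because the outcome is a symptom count.

```python
# GLM with an interaction term, mirroring the specification above.
# Data are simulated; coefficients will not match the reported betas.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 4352
df = pd.DataFrame({"alcohol_use": rng.poisson(8, n),
                   "mdd": rng.integers(0, 2, n)})
lam = np.exp(-1 + 0.12 * df["alcohol_use"] + 0.7 * df["mdd"]
             - 0.05 * df["alcohol_use"] * df["mdd"])
df["aud_symptoms"] = rng.poisson(lam)

# '*' expands to both main effects plus the interaction term
model = smf.glm("aud_symptoms ~ alcohol_use * mdd", data=df,
                family=sm.families.Poisson()).fit()
print(model.params.round(3))  # a negative alcohol_use:mdd term means a
                              # weaker use-AUD slope under MDD, as reported
```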
Abstract:
Colonization is likely to be more successful for species with an ability to self-fertilize and thus to establish new populations as single individuals. As a result, self-compatibility should be common among colonizing species. This idea, labelled 'Baker's law', has been influential in discussions of sexual-system and mating-system evolution. However, its generality has been questioned, because models of the evolution of dispersal and the mating system predict an association between high dispersal rates and outcrossing rather than selfing, and because of many apparent counterexamples to the law. The contrasting predictions made by models invoking Baker's law versus those for the evolution of the mating system and dispersal urge a reassessment of how we should view both these traits. Here, I review the literature on the evolution of mating and dispersal in colonizing species, with a focus on conceptual issues. I argue for the importance of distinguishing between the selfing or outcrossing rate and a simple ability to self-fertilize, as well as for the need for a more nuanced consideration of dispersal. Colonizing species will be characterized by different phases in their life pattern: dispersal to new habitat, implying an ecological sieve on dispersal traits; establishment and a phase of growth following colonization, implying a sieve on reproductive traits; and a phase of demographic stasis at high density, during which new trait associations can evolve through local adaptation. This dynamic means that the sorting of mating-system and dispersal traits should change over time, making simple predictions difficult.
Abstract:
Natural killer (NK) cells use germline-encoded receptors to detect diseased host cells. Despite these invariant recognition structures, NK cells have a significant ability to adapt to their surroundings, such as the presence or absence of MHC class I molecules. It has been assumed that this adaptation occurs during NK cell development, but recent findings show that mature NK cells can also adapt to the presence or absence of MHC class I molecules. Here, we summarize how NK cells adjust to changes in the expression of MHC class I molecules. We propose an extension of existing models, in which MHC class I recognition during NK cell development sequentially instructs and maintains NK cell function. The elucidation of the molecular basis of the two effects may identify ways to improve the fitness of NK cells and to prevent the loss of NK cell function due to persistent alterations in their environment.
Abstract:
This paper analyses the effects of manipulating the cognitive complexity of L2 oral tasks on language production. It specifically focuses on self-repairs, which are taken as a measure of accuracy since they denote both attention to form and an attempt at being accurate. By means of a repeated measures design, 42 lower-intermediate students were asked to perform three different task types (a narrative task, an instruction-giving task, and a decision-making task), for each of which two degrees of cognitive complexity were established. The narrative task was manipulated along +/− Here-and-Now, the instruction-giving task along +/− elements, and the decision-making task along +/− reasoning demands. Repeated measures ANOVAs are used to calculate differences between degrees of complexity and among task types. One-way ANOVAs are used to detect potential differences between low-proficiency and high-proficiency participants. Results show an overall effect of Task Complexity on self-repair behavior across task types, with different behaviors across the three task types. No differences in self-repair behavior are found between low- and high-proficiency groups. Results are discussed in the light of theories of cognition and L2 performance (Robinson 2001a, 2001b, 2003, 2005, 2007), L1 and L2 language production models (Levelt 1989, 1993; Kormos 2000, 2006), and attention during L2 performance (Skehan 1998; Robinson 2002).
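A minimal sketch of the repeated measures design described above (42 participants, three task types, two complexity degrees, self-repairs as the dependent variable), run on simulated counts:

```python
# Repeated measures ANOVA sketch matching the design above.
# One simulated observation per subject x task x complexity cell.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for subj in range(42):
    for task in ["narrative", "instruction", "decision"]:
        for complexity in ["simple", "complex"]:
            rows.append({"subject": subj, "task": task,
                         "complexity": complexity,
                         "self_repairs": rng.poisson(5)})
df = pd.DataFrame(rows)

res = AnovaRM(df, depvar="self_repairs", subject="subject",
              within=["task", "complexity"]).fit()
print(res)  # F-tests for task, complexity, and their interaction
```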
Abstract:
Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we are facing many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations in each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the 'expected' responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational costs and leads to an increase in accuracy and robustness of the uncertainty propagation. The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between proxy and exact curves.
In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty. The individual correction of the proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only in the context of uncertainty propagation, but also, and maybe even more so, to perform Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from low acceptance rates in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy is coupled to an error model to provide an approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of 1.5 to 3 with respect to one-stage MCMC results. An open question remains: how do we choose the size of the learning set and identify the realizations that optimize the construction of the error model? This requires devising an iterative strategy to construct the error model, such that, as new flow simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply this methodology to a problem of saline intrusion in a coastal aquifer.
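The functional error model described above can be miniaturized as follows: reduce the proxy and exact response curves with principal components (a discrete-grid stand-in for FPCA), learn a regression between the two score spaces on a small training subset, and use it to correct all remaining proxy curves. Everything below is synthetic and illustrative, not the thesis's implementation.

```python
# Sketch of a functional error model: PCA scores of proxy curves are
# mapped to PCA scores of exact curves, learned on a small training set.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)
n, n_train = 500, 40
amp = rng.uniform(0.5, 2.0, n)[:, None]
exact = amp * np.sin(2 * np.pi * t)                       # "exact" responses
proxy = 0.8 * exact + 0.1 * rng.normal(size=exact.shape)  # cheap approximation

pca_p, pca_e = PCA(n_components=5), PCA(n_components=5)
sp = pca_p.fit_transform(proxy)                  # proxy scores, all realizations
se_train = pca_e.fit_transform(exact[:n_train])  # exact scores, training only

reg = LinearRegression().fit(sp[:n_train], se_train)
exact_pred = pca_e.inverse_transform(reg.predict(sp))  # corrected curves

rmse = np.sqrt(np.mean((exact_pred[n_train:] - exact[n_train:]) ** 2))
print(f"RMSE of corrected proxy curves on unseen realizations: {rmse:.3f}")
```

The same corrected response could serve as the cheap first-stage evaluation in the two-stage MCMC scheme mentioned above.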
Abstract:
OBJECTIVE: To determine the effect of nonadherence to antiretroviral therapy (ART) on virologic failure and mortality in treatment-naive individuals starting ART. DESIGN: Prospective observational cohort study. METHODS: Eligible individuals were enrolled in the Swiss HIV Cohort Study, started ART between 2003 and 2012, and provided adherence data at at least one biannual clinical visit. Adherence was defined as missed doses (none, one, two, or more than two) and percentage adherence (>95%, 90-95%, and <90%) in the previous 4 weeks. Inverse probability weighting of marginal structural models was used to estimate the effect of nonadherence on viral failure (HIV-1 viral load >500 copies/ml) and mortality. RESULTS: Of 3150 individuals followed for a median of 4.7 years, 480 (15.2%) experienced viral failure and 104 (3.3%) died; 1155 (36.6%) reported missing one dose, 414 (13.1%) two doses, and 333 (10.6%) more than two doses of ART. The risk of viral failure increased with each missed dose (one dose: hazard ratio [HR] 1.15, 95% confidence interval 0.79-1.67; two doses: 2.15, 1.31-3.53; more than two doses: 5.21, 2.96-9.18). The risk of death increased with more than two missed doses (HR 4.87, 2.21-10.73). Missing one to two doses of ART increased the risk of viral failure in those starting once-daily regimens (HR 1.67, 1.11-2.50) compared with those starting twice-daily regimens (HR 0.99, 0.64-1.54; interaction P = 0.09). Consistent results were found for percentage adherence. CONCLUSION: Self-report of two or more missed doses of ART is associated with an increased risk of both viral failure and death. A simple adherence question helps identify patients at risk of negative clinical outcomes and offers opportunities for intervention.
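The analysis above rests on inverse probability weighting of marginal structural models. The sketch below shows the weighting idea in a simplified, single-time-point form on simulated data with hypothetical variable names; a real MSM for this cohort would use time-updated weights at each biannual visit.

```python
# Simplified inverse probability weighting sketch: weight each subject
# by the inverse probability of their observed (non)adherence, then fit
# a weighted outcome model. Data and variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 3150
df = pd.DataFrame({"age": rng.normal(40, 10, n),
                   "cd4": rng.normal(350, 100, n)})
p_miss = 1 / (1 + np.exp(-(-1.5 + 0.01 * df["age"] - 0.002 * df["cd4"])))
df["missed_doses"] = (rng.random(n) < p_miss).astype(int)
p_fail = 1 / (1 + np.exp(-(-2.0 + 1.0 * df["missed_doses"])))
df["viral_failure"] = (rng.random(n) < p_fail).astype(int)

# 1. Model the probability of the observed adherence level
ps = smf.logit("missed_doses ~ age + cd4", data=df).fit(disp=0)
p = ps.predict(df)
df["ipw"] = np.where(df["missed_doses"] == 1, 1 / p, 1 / (1 - p))

# 2. Weighted outcome model for viral failure
out = smf.glm("viral_failure ~ missed_doses", data=df,
              family=sm.families.Binomial(), freq_weights=df["ipw"]).fit()
print(np.exp(out.params["missed_doses"]).round(2))  # weighted odds ratio
```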
Abstract:
This volume deals with the forms of propaganda and self-representation, through words and images, during the rise of the 'civiltà delle corti', and through processes typical of the time, such as confrontation, adaptation, competition and rivalry. This period, which marked the passage of Italian and European culture from the Middle Ages to the Renaissance, is fundamental to the development of modern Europe, and its influence lasted up to the eighteenth century and beyond. At the heart of many matters debated here lies the relationship between culture and politics. The formation of a 'Lombard identity', central to the Sinergia project which framed the whole research and its conferences, is closely linked to this broad general context. It places the so-called 'questione milanese', above the traditional Tuscany-oriented hierarchies, at the centre of many questions regarding Northern Italy as a whole, starting from the dissolution of the medieval communes, through the rise of the signorie, from the end of the thirteenth and the beginning of the fourteenth century up to the early sixteenth century.