17 results for driver information systems, genetic algorithms, prediction theory, transportation
at Université de Lausanne, Switzerland
Abstract:
1.6 STRUCTURE OF THIS THESIS
- Chapter 1 presents the motivations of this dissertation by illustrating two gaps in the current body of knowledge that are worth filling, describes the research problem addressed by this thesis, and presents the research methodology used to achieve this goal.
- Chapter 2 reviews the existing literature, showing that environment analysis is a vital strategic task, that it should be supported by adapted information systems, and that there is thus a need for a conceptual model of the environment that provides a reference framework for better integrating the various existing methods and a more formal definition of the various aspects to support the development of suitable tools.
- Chapter 3 proposes a conceptual model that specifies the various environmental aspects that are relevant for strategic decision making and how they relate to each other, and defines them in a more formal way that is better suited for information systems development.
- Chapter 4 is dedicated to the evaluation of the proposed model on the basis of its application to a concrete environment, in order to assess its suitability for describing the current conditions and potential evolution of a real environment and to get an idea of its usefulness.
- Chapter 5 goes a step further by assembling a toolbox describing a set of methods that can be used to analyze the various environmental aspects put forward by the model, and by providing more detailed specifications for a number of them to show how our model can be used to facilitate their implementation as software tools.
- Chapter 6 describes a prototype of a strategic decision support tool that allows the analysis of some aspects of the environment that are not well supported by existing tools, namely the relationships between multiple actors and issues. The usefulness of this prototype is evaluated on the basis of its application to a concrete environment.
- Chapter 7 concludes this thesis by summarizing its various contributions and proposing further interesting research directions.
Abstract:
BACKGROUND AND AIMS: Parental history (PH) and genetic risk scores (GRSs) are separately associated with coronary heart disease (CHD), but evidence regarding their combined effects is lacking. We aimed to evaluate the joint associations and predictive ability of PH and GRSs for incident CHD. METHODS: Data for 4283 Caucasians were obtained from the population-based CoLaus Study, over a median follow-up time of 5.6 years. CHD was defined as incident myocardial infarction, angina, percutaneous coronary revascularization or bypass grafting. Single nucleotide polymorphisms (SNPs) for CHD identified by genome-wide association studies were used to construct unweighted and weighted versions of three GRSs, comprising 38, 53 and 153 SNPs respectively. RESULTS: PH was associated with higher values of all weighted GRSs. After adjustment for age, sex, smoking, diabetes, systolic blood pressure, and low- and high-density lipoprotein cholesterol, PH was significantly associated with CHD [HR 2.61, 95% CI (1.47-4.66)] and further adjustment for GRSs did not change this estimate. Similarly, a one standard deviation change of the weighted 153-SNP GRS was significantly associated with CHD [HR 1.50, 95% CI (1.26-1.80)] and remained so after further adjustment for PH. The weighted 153-SNP GRS, but not PH, modestly improved discrimination [(C-index improvement, 0.016), p = 0.048] and reclassification [(NRI improvement, 8.6%), p = 0.027] beyond cardiovascular risk factors. After including both the GRS and PH, model performance improved further [(C-index improvement, 0.022), p = 0.006]. CONCLUSION: After adjustment for cardiovascular risk factors, PH and a weighted polygenic GRS were jointly associated with CHD and provided additive information for coronary events prediction.
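The unweighted and weighted GRS construction described above amounts to counting risk alleles, optionally scaled by published effect sizes. A minimal sketch follows, with hypothetical SNP identifiers, weights, and genotypes (not values from the study):

```python
# Sketch of an unweighted and a weighted genetic risk score (GRS).
# SNP identifiers, per-allele weights, and the genotype below are
# hypothetical illustrations, not data from the CoLaus Study.

def grs(genotypes, weights=None):
    """genotypes: dict rsID -> risk-allele count (0, 1 or 2).
    weights: dict rsID -> per-allele effect size (e.g. GWAS log-odds);
    None yields the unweighted score (each risk allele counts as 1)."""
    if weights is None:
        return sum(genotypes.values())
    return sum(weights[snp] * count for snp, count in genotypes.items())

person = {"rs0000001": 2, "rs0000002": 1, "rs0000003": 0}
betas  = {"rs0000001": 0.21, "rs0000002": 0.18, "rs0000003": 0.14}

print(grs(person))                  # unweighted: 3 risk alleles
print(round(grs(person, betas), 2))  # weighted score
```

In the study, three such scores (38, 53 and 153 SNPs) were built in both unweighted and weighted variants; the weighted variant simply substitutes GWAS effect sizes for the unit weights.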
Abstract:
Therapeutic drug monitoring (TDM) and pharmacogenetic tests play a major role in minimising adverse drug reactions and enhancing optimal therapeutic response. The response to medication varies greatly between individuals, according to genetic constitution, age, sex, co-morbidities, environmental factors including diet and lifestyle (e.g. smoking and alcohol intake), and drug-related factors such as pharmacokinetic or pharmacodynamic drug-drug interactions. Most adverse drug reactions are type A reactions, i.e. plasma-level dependent, and represent one of the major causes of hospitalisation, in some cases leading to death. However, they may be avoidable to some extent if pharmacokinetic and pharmacogenetic factors are taken into consideration. This article reviews the literature, describes how to apply and interpret TDM and certain pharmacogenetic tests, and is illustrated by case reports. An algorithm on the use of TDM and pharmacogenetic tests to help characterise adverse drug reactions is also presented. Although differences in drug response are increasingly recognised in the scientific community, there is an urgent need to translate this knowledge into clinical recommendations. Databases on drug-drug interactions and on the impact of pharmacogenetic polymorphisms, together with adverse drug reaction information systems, will help guide clinicians in individualised treatment choices.
Abstract:
Despite the tremendous amount of data collected in the field of ambulatory care, political authorities still lack synthetic indicators to provide them with a global view of health services utilization and costs related to various types of diseases. Moreover, public health indicators fail to provide useful information for physicians' accountability purposes. The approach is based on the Swiss context, which is characterized by the greatest frequency of medical visits in Europe, the highest rate of growth for care expenditure, poor public information but a lot of structured data (new fee system introduced in 2004). The proposed conceptual framework is universal and based on descriptors of six entities: general population, people with poor health, patients, services, resources and effects. We show that most conceptual shortcomings can be overcome and that the proposed indicators can be achieved without threatening privacy protection, using modern cryptographic techniques. Twelve indicators are suggested for the surveillance of the ambulatory care system, almost all based on routinely available data: morbidity, accessibility, relevancy, adequacy, productivity, efficacy (from the points of view of the population, people with poor health, and patients), effectiveness, efficiency, health services coverage and financing. The additional costs of this surveillance system should not exceed Euro 2 million per year (Euro 0.3 per capita).
Abstract:
The use of areal bone mineral density (aBMD) for fracture prediction may be enhanced by considering bone microarchitectural deterioration. Trabecular bone score (TBS) helped redefine a significant subset of non-osteoporotic women as a higher-risk group. INTRODUCTION: TBS is an index of bone microarchitecture. Our goal was to assess the ability of TBS to predict incident fracture. METHODS: TBS was assessed in 560 postmenopausal women from the Os des Femmes de Lyon cohort, who had a lumbar spine (LS) DXA scan (QDR 4500A, Hologic) between years 2000 and 2001. During a mean follow-up of 7.8 ± 1.3 years, 94 women sustained 112 fragility fractures. RESULTS: At the time of the baseline DXA scan, women with incident fracture were significantly older (70 ± 9 vs. 65 ± 8 years) and had a lower LS_aBMD and LS_TBS (both -0.4 SD, p < 0.001) than women without fracture. The magnitude of fracture prediction was similar for LS_aBMD and LS_TBS (odds ratio [95 % confidence interval] = 1.4 [1.2;1.7] and 1.6 [1.2;2.0]). After adjustment for age and prevalent fracture, LS_TBS remained predictive of an increased risk of fracture. Yet its addition to age, prevalent fracture, and LS_aBMD did not significantly improve fracture prediction. When using the WHO classification, 39 % of fractures occurred in osteoporotic women, 46 % in osteopenic women, and 15 % in women with T-score > -1. Thirty-seven percent of fractures occurred in the lowest quartile of LS_TBS, regardless of BMD. Moreover, 35 % of fractures that occurred in osteopenic women were classified below this LS_TBS threshold. CONCLUSION: LS_aBMD and LS_TBS predicted fractures equally well. In our cohort, the addition of LS_TBS to age and LS_aBMD added only limited information on fracture risk prediction. However, using the lowest quartile of LS_TBS helped redefine a significant subset of non-osteoporotic women as a higher-risk group, which is important for patient management.
Abstract:
The specificities of multinational corporations (MNCs) have to date not been a focus area of IS research. Extant literature mostly proposes IS configurations for specific types of MNCs, following a static and prescriptive approach. Our research seeks to explain the dynamics of global IS design. It suggests a new theoretical lens for studying global IS design by applying the structural adjustment paradigm from organizational change theories. Relying on archetype theory, we conduct a longitudinal case study to theorize the dynamics of IS adaptation. We find that global IS design emerges as an organizational adaptation process to balance interpretative schemes (i.e. the organization's values and beliefs) and structural arrangements (i.e. strategic, organizational, and IS configurations). The resulting insights can be used as a basis to further explore alternative global IS designs and movements between them.
Abstract:
[Table of contents] 1. Introduction. 2. Structure (introduction, hierarchy). 3. Processes (general considerations, patient flows, activity flows, resource flows, temporal aspects, accounting aspects). 4. Descriptors (qualification, quantification). 5. Indicators (definitions, productivity, relevance, adequacy, efficacy, effectiveness, efficiency, standards). 6. Bibliography.
Abstract:
In the past 20 years, the theory of robust estimation has become an important topic in mathematical statistics. We discuss some basic concepts of this theory with the help of simple examples. Furthermore, we describe a subroutine library for the application of robust statistical procedures, which was developed with the support of the Swiss National Science Foundation.
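As a minimal illustration of the robust-estimation concepts discussed (this is a sketch, not code from the subroutine library itself), a Huber M-estimate of location can be computed by iteratively reweighted averaging, which downweights gross outliers that would distort the ordinary mean:

```python
# Huber M-estimate of location via iteratively reweighted averaging.
# The tuning constant c = 1.345 is the standard choice giving roughly
# 95% efficiency at the normal model.

def huber_location(xs, c=1.345, tol=1e-8, max_iter=100):
    mu = sorted(xs)[len(xs) // 2]  # start from an order-statistic median
    for _ in range(max_iter):
        # Huber weights: 1 inside [-c, c], c/|residual| outside
        w = [1.0 if abs(x - mu) <= c else c / abs(x - mu) for x in xs]
        new_mu = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
        if abs(new_mu - mu) < tol:
            return new_mu
        mu = new_mu
    return mu

data = [9.8, 10.1, 10.0, 9.9, 10.2, 55.0]  # one gross outlier
print(round(huber_location(data), 2))  # stays near 10; the plain mean is 17.5
```

The single contaminated observation pulls the sample mean up to 17.5, while the Huber estimate remains close to the bulk of the data, which is exactly the robustness property such libraries are built to provide.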
Abstract:
The efficient use of geothermal systems, the sequestration of CO2 to mitigate climate change, and the prevention of seawater intrusion in coastal aquifers are only some examples that demonstrate the need for novel technologies to monitor subsurface processes from the surface. A main challenge is to ensure optimal performance of such technologies at different temporal and spatial scales. Plane-wave electromagnetic (EM) methods are sensitive to subsurface electrical conductivity and consequently to fluid conductivity, fracture connectivity, temperature, and rock mineralogy. These methods have governing equations that are the same over a large range of frequencies, thus allowing processes to be studied in an analogous manner on scales ranging from a few meters below the surface down to several hundreds of kilometers in depth. Unfortunately, they suffer from a significant resolution loss with depth due to the diffusive nature of the electromagnetic fields. Therefore, estimations of subsurface models that use these methods should incorporate a priori information to better constrain the models, and provide appropriate measures of model uncertainty. During my thesis, I have developed approaches to improve the static and dynamic characterization of the subsurface with plane-wave EM methods.
In the first part of this thesis, I present a two-dimensional deterministic approach to perform time-lapse inversion of plane-wave EM data. The strategy is based on the incorporation of prior information into the inversion algorithm regarding the expected temporal changes in electrical conductivity. This is done by incorporating a flexible stochastic regularization and constraints regarding the expected ranges of the changes by using Lagrange multipliers. I use non-l2 norms to penalize the model update in order to obtain sharp transitions between regions that experience temporal changes and regions that do not. I also incorporate a time-lapse differencing strategy to remove systematic errors in the time-lapse inversion. This work presents improvements in the characterization of temporal changes with respect to the classical approach of performing separate inversions and computing differences between the models. In the second part of this thesis, I adopt a Bayesian framework and use Markov chain Monte Carlo (MCMC) simulations to quantify model parameter uncertainty in plane-wave EM inversion. For this purpose, I present a two-dimensional pixel-based probabilistic inversion strategy for separate and joint inversions of plane-wave EM and electrical resistivity tomography (ERT) data. I compare the uncertainties of the model parameters when considering different types of prior information on the model structure and different likelihood functions to describe the data errors. The results indicate that model regularization is necessary when dealing with a large number of model parameters because it helps to accelerate the convergence of the chains and leads to more realistic models. These constraints also lead to smaller uncertainty estimates, which imply posterior distributions that do not include the true underlying model in regions where the method has limited sensitivity. 
This situation can be improved by combining plane-wave EM methods with complementary geophysical methods such as ERT. In addition, I show that an appropriate regularization weight and the standard deviation of the data errors can be retrieved by the MCMC inversion. Finally, I evaluate the possibility of characterizing the three-dimensional distribution of an injected water plume by performing three-dimensional time-lapse MCMC inversion of plane-wave EM data. Since MCMC inversion involves a significant computational burden in high parameter dimensions, I propose a model reduction strategy where the coefficients of a Legendre moment decomposition of the injected water plume and its location are estimated. For this purpose, a base resistivity model is needed, which is obtained prior to the time-lapse experiment. A synthetic test shows that the methodology works well when the base resistivity model is correctly characterized. The methodology is also applied to an injection experiment performed in a geothermal system in Australia, and compared to a three-dimensional time-lapse inversion performed within a deterministic framework. The MCMC inversion better constrains the water plume due to the larger amount of prior information that is included in the algorithm. However, the conductivity changes needed to explain the time-lapse data are much larger than what is physically possible based on present-day understanding. This issue may be related to the limited quality of the base resistivity model used, indicating that more effort should be devoted to obtaining high-quality base models prior to dynamic experiments. The studies described herein give clear evidence that plane-wave EM methods are useful for characterizing and monitoring the subsurface at a wide range of scales. The presented approaches contribute to an improved appraisal of the obtained models, both in terms of the incorporation of prior information in the algorithms and the posterior uncertainty quantification.
In addition, the developed strategies can be applied to other geophysical methods, and offer great flexibility to incorporate additional information when available.
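The Bayesian machinery underlying the probabilistic inversions above can be illustrated on a deliberately tiny toy problem. The forward model `g`, the single datum, and all numbers below are invented stand-ins for the full 2-D/3-D EM simulations, not the thesis code; the point is only the Metropolis accept/reject loop that produces posterior samples and hence uncertainty estimates:

```python
import math, random

# Toy Metropolis (MCMC) sampler: infer one model parameter m from a
# noisy datum d = g(m) + noise, with a flat prior on [0, 10].
# g() is a hypothetical stand-in for an expensive EM forward simulation.

random.seed(0)

def g(m):                      # forward model
    return 2.0 * m

d_obs, sigma = 4.1, 0.5        # "observed" datum and its error std. dev.

def log_post(m):
    if not 0.0 <= m <= 10.0:   # flat prior support
        return -math.inf
    return -0.5 * ((g(m) - d_obs) / sigma) ** 2   # Gaussian log-likelihood

m, samples = 5.0, []
for _ in range(20000):
    prop = m + random.gauss(0.0, 0.3)             # random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(m):
        m = prop                                   # accept, else keep m
    samples.append(m)

burned = samples[5000:]                            # discard burn-in
print(sum(burned) / len(burned))  # posterior mean, analytically d_obs/2 = 2.05
```

In the thesis the same accept/reject logic operates on pixel-based conductivity fields or Legendre moment coefficients, and the spread of the retained samples, rather than a single best model, quantifies parameter uncertainty.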
Abstract:
Ventilator-associated pneumonia (VAP) affects mortality, morbidity and cost of critical care. Reliable risk estimation might improve end-of-life decisions, resource allocation and outcome. Several scoring systems for survival prediction have been established and optimised over the last decades. Recently, new biomarkers have gained interest in the prognostic field. We assessed whether midregional pro-atrial natriuretic peptide (MR-proANP) and procalcitonin (PCT) improve the predictive value of the Simplified Acute Physiologic Score (SAPS) II and the Sequential Organ Failure Assessment (SOFA) in VAP. Specified end-points of a prospective multinational trial including 101 patients with VAP were analysed. Death <28 days after VAP onset was the primary end-point. MR-proANP and PCT were elevated at the onset of VAP in nonsurvivors compared with survivors (p = 0.003 and p = 0.017, respectively) and their slopes of decline differed significantly (p = 0.018 and p = 0.039, respectively). Patients in the highest MR-proANP quartile at VAP onset were at increased risk of death (log rank p = 0.013). In a logistic regression model, MR-proANP was identified as the best predictor of survival. Adding MR-proANP and PCT to SAPS II and SOFA improved their predictive properties (area under the curve 0.895 and 0.880). We conclude that the combination of two biomarkers, MR-proANP and PCT, improves the survival prediction of clinical severity scores in VAP.
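The area-under-the-curve comparison reported above boils down to rank concordance: the probability that a randomly chosen non-survivor is assigned a higher risk score than a randomly chosen survivor. A minimal sketch with invented scores (not trial data) of how such an AUC/C-index is computed and why adding a biomarker can raise it:

```python
# Concordance (AUC / C-index) from raw scores. Higher AUC means the
# score better separates non-survivors from survivors by rank.
# All score values below are invented for illustration.

def auc(scores_pos, scores_neg):
    """P(random positive-case score > random negative-case score),
    counting ties as 1/2."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical severity scores alone vs. score combined with a biomarker
saps_only    = {"died": [52, 61, 48, 70],
                "lived": [40, 55, 38, 45, 50]}
saps_plus_bm = {"died": [0.81, 0.90, 0.62, 0.95],
                "lived": [0.30, 0.55, 0.20, 0.40, 0.58]}

print(auc(saps_only["died"], saps_only["lived"]))        # -> 0.85
print(auc(saps_plus_bm["died"], saps_plus_bm["lived"]))  # -> 1.0
```

In the study, the improvement from AUC values of the bare scores up to 0.895 and 0.880 after adding MR-proANP and PCT reflects exactly this kind of gain in rank discrimination.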
Abstract:
It is estimated that around 230 people die each year due to radon (222Rn) exposure in Switzerland. 222Rn occurs mainly in closed environments like buildings and originates primarily from the subjacent ground. Therefore, it depends strongly on geology and shows substantial regional variations. Correct identification of these regional variations would allow a substantial reduction of the population's 222Rn exposure through appropriate construction of new buildings and mitigation of existing ones. Prediction of indoor 222Rn concentrations (IRC) and identification of 222Rn-prone areas is, however, difficult, since IRC depend on a variety of variables such as building characteristics, meteorology, geology and anthropogenic factors. The present work aims at developing predictive models and at understanding IRC in Switzerland, taking into account a maximum of information in order to minimize prediction uncertainty. The predictive maps will be used as a decision-support tool for 222Rn risk management. The construction of these models is based on different data-driven statistical methods, in combination with geographical information systems (GIS). In a first phase, we performed univariate analysis of IRC for different variables, namely the detector type, building category, foundation, year of construction, the average outdoor temperature during measurement, altitude and lithology. All variables showed significant associations with IRC. Buildings constructed after 1900 showed significantly lower IRC compared to earlier constructions, and we observed a further drop of IRC after 1970. In addition, we found an association of IRC with altitude. With regard to lithology, we observed the lowest IRC in sedimentary rocks (excluding carbonates) and sediments, and the highest IRC in the Jura carbonates and igneous rock. The IRC data were systematically analyzed for potential bias due to spatially unbalanced sampling of measurements.
In order to facilitate the modeling and the interpretation of the influence of geology on IRC, we developed an algorithm based on k-medoids clustering which permits the definition of coherent geological classes in terms of IRC. We performed a soil gas 222Rn concentration (SRC) measurement campaign in order to determine the predictive power of SRC with respect to IRC, and found that the usefulness of SRC for IRC prediction is limited. The second part of the project was dedicated to predictive mapping of IRC using models which take into account the multidimensionality of the process of 222Rn entry into buildings. We used kernel regression and ensemble regression trees for this purpose. We could explain up to 33% of the variance of the log-transformed IRC over all of Switzerland, a good performance compared to former attempts at IRC modeling in Switzerland. As predictor variables we considered geographical coordinates, altitude, outdoor temperature, building type, foundation, year of construction and detector type. Ensemble regression trees such as random forests make it possible to determine the role of each IRC predictor in a multidimensional setting. We found spatial information like geology, altitude and coordinates to have stronger influences on IRC than building-related variables like foundation type, building type and year of construction. Based on kernel estimation, we developed an approach to determine the local probability of IRC exceeding 300 Bq/m3. In addition, we developed a confidence index to provide an estimate of the uncertainty of the map. All methods allow easy creation of tailor-made maps for different building characteristics. Our work is an essential step towards a 222Rn risk assessment which accounts simultaneously for different architectural situations as well as geological and geographical conditions. For communicating the 222Rn hazard to the population, we recommend using the probability map based on kernel estimation.
The communication of 222Rn hazard could for example be implemented via a web interface where the users specify the characteristics and coordinates of their home in order to obtain the probability to be above a given IRC with a corresponding index of confidence. Taking into account the health effects of 222Rn, our results have the potential to substantially improve the estimation of the effective dose from 222Rn delivered to the Swiss population.
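The exceedance-probability idea behind such a map can be sketched as follows, assuming (hypothetically) Gaussian residuals on the log scale around a model's prediction; the study itself used kernel estimation and random forests for the predictive part, and all numbers below are made up:

```python
import math

# Sketch: probability that a dwelling's indoor radon concentration (IRC)
# exceeds the 300 Bq/m3 reference level, given a model's predicted
# log-IRC and a residual standard deviation on the natural-log scale.
# Assumes a Gaussian residual model; values are illustrative only.

def p_exceed(pred_log_irc, resid_sd, threshold=300.0):
    """P(IRC > threshold) under a normal model for log(IRC)."""
    z = (math.log(threshold) - pred_log_irc) / resid_sd
    return 0.5 * math.erfc(z / math.sqrt(2))   # = 1 - Phi(z)

# Hypothetical dwelling: predicted median IRC of 150 Bq/m3 and
# residual standard deviation 0.8 on the log scale.
p = p_exceed(math.log(150.0), 0.8)
print(round(p, 3))   # roughly 0.19
```

A web interface like the one proposed would evaluate such a probability (plus the confidence index) for the building characteristics and coordinates entered by the user.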
Beyond EA Frameworks: Towards an Understanding of the Adoption of Enterprise Architecture Management
Abstract:
Enterprise architectures (EA) are considered promising approaches to reduce the complexities of growing information technology (IT) environments while keeping pace with an ever-changing business environment. However, the implementation of enterprise architecture management (EAM) has proven difficult in practice. Many EAM initiatives face severe challenges, as demonstrated by the low usage level of enterprise architecture documentation and enterprise architects' lack of authority regarding enforcing EAM standards and principles. These challenges motivate our research. Based on three field studies, we first analyze EAM implementation issues that arise when EAM is started as a dedicated and isolated initiative. Following a design-oriented paradigm, we then suggest a design theory for architecture-driven IT management (ADRIMA) that may guide organizations to successfully implement EAM. This theory summarizes prescriptive knowledge related to embedding EAM practices, artefacts and roles in the existing IT management processes and organization.