994 results for Global sensitivity
Abstract:
The problem of uncertainty propagation in composite laminate structures is studied. An approach based on the optimal design of composite structures to achieve a target reliability level is proposed. Using the Uniform Design Method (UDM), a set of design points is generated over a design domain centred at the mean values of the random variables, in order to study the variability over the design space. The most critical Tsai number, the structural reliability index and the sensitivities are obtained for each UDM design point, using the maximum load obtained from the optimal design search. Using the UDM design points as input/output patterns, an Artificial Neural Network (ANN) is developed based on supervised evolutionary learning. Finally, using the developed ANN, a Monte Carlo simulation procedure is implemented and the variability of the structural response is studied by means of global sensitivity analysis (GSA). The GSA is based on first-order Sobol indices and relative sensitivities, and an appropriate algorithm for obtaining the Sobol indices is proposed. The most important sources of uncertainty are identified.
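First-order Sobol indices of the kind used in this abstract (and in several of the variance-based GSA studies below) are commonly estimated with a pick-and-freeze Monte Carlo scheme. The sketch below is illustrative only: the Saltelli/Jansen-style estimator and the toy additive model are assumptions, not the authors' actual GSA algorithm.

```python
import numpy as np

def first_order_sobol(model, d, n=100_000, seed=0):
    """Pick-and-freeze Monte Carlo estimate of first-order Sobol indices.

    `model` maps an (n, d) array of input samples to an (n,) output array.
    S_i = E[y_A * (y_AB_i - y_B)] / Var(y), where AB_i equals sample B
    with column i replaced by the corresponding column of sample A.
    """
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, d))      # two independent input samples
    B = rng.standard_normal((n, d))
    y_A, y_B = model(A), model(B)
    var = y_A.var()
    S = np.empty(d)
    for i in range(d):
        AB = B.copy()
        AB[:, i] = A[:, i]               # "freeze" factor i at the A values
        S[i] = np.mean(y_A * (model(AB) - y_B)) / var
    return S

# Toy additive model with a known answer: Var(Y) = 16 + 4 + 1 = 21,
# so the exact indices are (16/21, 4/21, 1/21).
model = lambda X: 4 * X[:, 0] + 2 * X[:, 1] + X[:, 2]
S = first_order_sobol(model, d=3)
```

For an additive model the first-order indices sum to one; for a model with interactions, the shortfall from one measures higher-order effects.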
Abstract:
When studying hydrological processes with a numerical model, global sensitivity analysis (GSA) is essential if one is to understand the impact of model parameters and model formulation on results. However, different definitions of sensitivity can lead to differences in the ranking of importance of the different model factors. Here we combine a fuzzy performance function with different methods of calculating global sensitivity to perform a multi-method global sensitivity analysis (MMGSA). We use an application of a finite element subsurface flow model (ESTEL-2D) to a flood inundation event on a floodplain of the River Severn to illustrate this new methodology. We demonstrate the utility of the method for model understanding and show how the prediction of state variables, such as Darcian velocity vectors, can be affected by such an MMGSA. This paper is a first attempt to use GSA with a numerically intensive hydrological model.
Abstract:
Mass loss by glaciers has been an important contributor to sea level rise in the past, and is projected to contribute a substantial fraction of total sea level rise during the 21st century. Here, we use a model of the world's glaciers to quantify equilibrium sensitivities of global glacier mass to climate change, and to investigate the role of changes in glacier hypsometry for long-term mass changes. We find that 21st century glacier-mass loss is largely governed by the glaciers' response to 20th century climate change. This limits the influence of 21st century climate change on glacier-mass loss, and explains why there are relatively small differences in glacier-mass loss under greatly different scenarios of climate change. The projected future changes in both temperature and precipitation experienced by glaciers are amplified relative to the global average. The projected increase in precipitation partly compensates for the mass loss caused by warming, but this compensation is negligible at higher temperature anomalies, since an increasing fraction of precipitation at the glacier sites is liquid. Loss of low-lying glacier area and, more importantly, eventual complete disappearance of glaciers strongly limit the projected sea level contribution from glaciers in coming centuries. The adjustment of glacier hypsometry to changes in the forcing strongly reduces the rates of global glacier-mass loss caused by changes in global mean temperature, compared to the rates of mass loss when hypsometric changes are neglected. This result is a second reason for the relatively weak dependence of glacier-mass loss on future climate scenario, and helps explain why glacier-mass loss in the first half of the 20th century was of the same order of magnitude as in the second half of the 20th century, even though the rate of warming was considerably smaller.
Abstract:
We estimate the effects of climatic changes, as predicted by six climate models, on lake surface temperatures on a global scale, using the lake surface equilibrium temperature as a proxy. We evaluate interactions between different forcing variables, the sensitivity of lake surface temperatures to these variables, as well as differences between climate zones. Lake surface equilibrium temperatures are predicted to increase by 70 to 85 % of the increase in air temperatures. On average, air temperature is the main driver for changes in lake surface temperatures, and its effect is reduced by ~10 % by changes in other meteorological variables. However, the contribution of these other variables to the variance is ~40 % of that of air temperature, and their effects can be important at specific locations. The warming increases the importance of longwave radiation and evaporation for the lake surface heat balance compared to shortwave radiation and convective heat fluxes. We discuss the consequences of our findings for the design and evaluation of different types of studies on climate change effects on lakes.
Abstract:
The influence of uncertainties in input parameters on the output response of composite structures is investigated in this paper. In particular, the effects of deviations in mechanical properties, ply angles, ply thickness and applied loads are studied. The uncertainty propagation and the importance measures of input parameters are analysed using three different approaches: a first-order local method, a Global Sensitivity Analysis (GSA) supported by a variance-based method, and an extension of local variance to estimate the global variance over the domain of the inputs. Sample results are shown for a shell composite laminated structure built with different composite systems, including multi-materials. The importance measures of the input parameters on the structural response are established from the numerical results and discussed as a function of the anisotropy of the composite materials. The need for global variance methods is discussed by comparing the results obtained from the different proposed methodologies. The objective of this paper is to contribute to the use of GSA techniques together with inexpensive local importance measures.
Abstract:
A global river routing scheme coupled to the ECMWF land surface model is implemented and tested within the framework of the Global Soil Wetness Project II, to evaluate the feasibility of modelling global river runoff at a daily time scale. The exercise is designed to provide benchmark river runoff predictions needed to verify the land surface model. Ten years of daily runoff produced by the HTESSEL land surface scheme are input into the TRIP2 river routing scheme in order to generate daily river runoff. These are then compared to river runoff observations from the Global Runoff Data Centre (GRDC) in order to evaluate the potential and the limitations of the approach. A notable source of inaccuracy is the bias between observed and modelled discharges, which is not primarily due to the modelling system but rather to the forcing and the quality of the observations, and seems uncorrelated with river catchment size. A global sensitivity analysis and a Generalised Likelihood Uncertainty Estimation (GLUE) uncertainty analysis are applied to the global routing model. The groundwater delay parameter is identified as the most sensitive calibration parameter. Significant uncertainties are found in the results, and those due to the parameterisation of the routing model are quantified. The difficulty involved in parameterising global river discharge models is discussed. Detailed river runoff simulations are shown for the river Danube, which match observed river runoff well at upstream river transects. Results show that although there are errors in the runoff predictions, model results are encouraging and certainly indicative of useful runoff predictions, particularly for the purpose of verifying the land surface scheme hydrologically. The potential of this modelling system for future applications such as river runoff forecasting and climate impact studies is highlighted. Copyright © 2009 Royal Meteorological Society.
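The GLUE uncertainty analysis mentioned in this abstract follows a simple recipe: sample parameter sets, score each simulation with an informal likelihood against observations, retain the "behavioural" sets above a threshold, and form likelihood-weighted prediction bounds. The sketch below is a minimal illustration under assumed choices (a scalar output, a Gaussian-type informal likelihood, and an arbitrary threshold), not the paper's actual configuration.

```python
import numpy as np

def weighted_quantile(values, weights, q):
    """Quantile of `values` under `weights`, by inverting the weighted CDF."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cdf = np.cumsum(w) / np.sum(w)
    return float(np.interp(q, cdf, v))

def glue(samples, model, obs, sigma, threshold):
    """Minimal GLUE sketch (illustrative, single observation).

    Scores each parameter sample with an informal Gaussian-type likelihood
    against the observation, keeps 'behavioural' sets whose likelihood
    exceeds `threshold`, and returns likelihood-weighted 5-95% prediction
    bounds from the behavioural ensemble.
    """
    sims = np.array([model(p) for p in samples])
    lik = np.exp(-0.5 * ((sims - obs) / sigma) ** 2)   # informal likelihood
    keep = lik > threshold                             # behavioural sets
    w = lik[keep] / lik[keep].sum()
    return (weighted_quantile(sims[keep], w, 0.05),
            weighted_quantile(sims[keep], w, 0.95))

# Hypothetical toy model: output is twice the (single) parameter.
rng = np.random.default_rng(0)
samples = rng.uniform(0.0, 4.0, 2000)
lo, hi = glue(samples, lambda p: 2.0 * p, obs=4.0, sigma=1.0, threshold=0.1)
```

In a routing application, `model` would run the routing scheme for one parameter set (e.g. a groundwater delay value) and the likelihood would be computed over a full discharge time series rather than a single observation.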
Abstract:
One of the biggest challenges that contaminant hydrogeology is facing is how to adequately address the uncertainty associated with model predictions. Uncertainty arises from multiple sources, such as interpretative error, calibration accuracy, parameter sensitivity and variability. This critical issue needs to be properly addressed in order to support environmental decision-making processes. In this study, we perform a Global Sensitivity Analysis (GSA) on a contaminant transport model for the assessment of hydrocarbon concentration in groundwater. We provide a quantification of the environmental impact and, given the incomplete knowledge of the hydrogeological parameters, we evaluate which ones are the most influential and therefore require greater accuracy in the calibration process. Parameters are treated as random variables and a variance-based GSA is performed in an optimized numerical Monte Carlo framework. The Sobol indices are adopted as sensitivity measures, and they are computed by employing meta-models to characterize the migration process while reducing the computational cost of the analysis. The proposed methodology allows us to extend the number of Monte Carlo iterations, identify the influence of the uncertain parameters, and achieve considerable savings in computational time while maintaining acceptable accuracy.
Abstract:
Lumbar radiculopathies refer to a pathological process involving the spinal nerve roots, causing radicular symptoms in the lower limbs. F-responses are late waves arising from recurrent discharges of antidromically depolarised motor neurons, and they can be useful in the assessment of radicular lesions. To assess the utility of nerve conduction studies and peroneal nerve F-responses in the diagnosis of L5 radiculopathy, and their correlation with its degree of severity, 47 subjects suffering from L5 radiculopathy were studied and compared with a control group of 28 healthy subjects. The CMAP amplitude of the deep peroneal nerve and the SNAP amplitude of the superficial peroneal nerve were studied, as well as the minimum, mean and maximum latencies, chronodispersion and persistence of the F-responses. An electromyographic evaluation was also performed in order to grade the severity of root involvement. Significant differences between the two groups were recorded in the deep peroneal CMAP amplitude (p<0.0001), in the F-minimum, F-mean and F-maximum latencies and chronodispersion (p<0.0001), and in persistence (p=0.014). All these parameters also correlated significantly with the severity grade of the radiculopathy: F-wave latencies and chronodispersion were progressively longer in subjects with more severe involvement, while persistence and motor conduction amplitudes were lower. The most sensitive parameter for the diagnosis of this pathology was the F-wave maximum latency, at 31.25%, and the least sensitive was persistence, altered in only 9.34% of individuals. Taking into account all the parameters evaluated in the F-responses, the technique reached a global sensitivity of 42.19%.
From this investigation we conclude that nerve conduction studies and F-responses may be useful as a complement in the assessment of lumbar radiculopathies, showing considerable sensitivity for this pathology. The study of this technique should not be limited to the assessment of F-wave minimum latencies, but should always include all the other parameters, thereby increasing its sensitivity. These techniques should be included in the workup of L5 radiculopathies.
Abstract:
OBJECTIVE To assess the validity of dengue fever reports and how they relate to the case definition and severity. METHODS A diagnostic test assessment was conducted using cross-sectional sampling from a universe of 13,873 patients treated during the fifth epidemiological period in health institutions from 11 Colombian departments in 2013. The test under analysis was reporting to the National Public Health Surveillance System, and the reference standard was the review of histories identified by active institutional search. We reviewed all histories of patients diagnosed with dengue fever, as well as a random sample of patients with febrile syndromes. The sensitivity and specificity of the reports were estimated, using the inverse of the probability of selection as weights. The concordance between reporting and the findings of the active institutional search was calculated using the Kappa statistic. RESULTS We included 4,359 febrile patients, of whom 31.7% were classified as compatible with dengue fever (17 with severe dengue fever; 461 with dengue fever with warning signs; 904 with dengue fever without warning signs). The global sensitivity of the reports was 13.2% (95%CI 10.9;15.4) and the specificity was 98.4% (95%CI 97.9;98.9). Sensitivity varied according to severity: 12.1% (95%CI 9.3;14.8) for patients presenting dengue fever without warning signs; 14.5% (95%CI 10.6;18.4) for those presenting dengue fever with warning signs; and 40.0% (95%CI 9.6;70.4) for those with severe dengue fever. Concordance between reporting and the findings of the active institutional search yielded a Kappa of 10.1%. CONCLUSIONS Low concordance was observed between reporting and the review of clinical histories, associated with the under-reporting of dengue-compatible cases, especially milder ones.
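The validity measures used in this abstract (sensitivity and specificity of reporting against a reference standard, plus Cohen's Kappa for concordance) can be computed from a 2x2 table as sketched below. The toy data are hypothetical, and the sketch omits the inverse-probability survey weighting described in the abstract.

```python
import numpy as np

def report_validity(reported, reference):
    """Sensitivity, specificity and Cohen's Kappa of a binary report
    (1 = case reported) against a reference standard (1 = true case)."""
    reported, reference = np.asarray(reported), np.asarray(reference)
    tp = np.sum((reported == 1) & (reference == 1))
    tn = np.sum((reported == 0) & (reference == 0))
    fp = np.sum((reported == 1) & (reference == 0))
    fn = np.sum((reported == 0) & (reference == 1))
    sens = tp / (tp + fn)                  # true cases correctly reported
    spec = tn / (tn + fp)                  # non-cases correctly not reported
    n = tp + tn + fp + fn
    p_obs = (tp + tn) / n                  # observed agreement
    # Chance agreement from the marginal report/reference rates
    p_exp = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (p_obs - p_exp) / (1 - p_exp)
    return sens, spec, kappa

# Hypothetical toy data: six febrile patients
sens, spec, kappa = report_validity([1, 1, 0, 0, 0, 0], [1, 0, 1, 1, 0, 0])
```

Kappa of zero means agreement no better than chance; the 10.1% Kappa reported above is close to that floor, consistent with the low sensitivity of reporting.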
Abstract:
Purpose: To examine the relationship of functional measurements with structural measures. Methods: 146 eyes of 83 test subjects underwent Heidelberg Retinal Tomography (HRT III) (disc area<2.43, mphsd<40) and perimetry testing with Octopus (SAP; Dynamic), Pulsar (PP; TOP) and Moorfields MDT (ESTA). Glaucoma was defined as progressive structural or functional loss (20 eyes). Perimetry test points were grouped into 6 sectors based on the estimated optic nerve head angle at which the associated nerve fibre bundle enters (Garway-Heath map). Perimetry summary measures (PSM) (MD SAP / MD PP / PTD MDT) were calculated from the average total deviation of each measured threshold from normal for each sector. We calculated the 95% significance level of the sectorial PSM from the respective normative data. We calculated the percentage agreement with group 1 (G1), healthy on HRT and within normal perimetric limits, and group 2 (G2), abnormal on HRT and outside normal perimetric limits. We also examined the relationship of PSM and rim area (RA) in those sectors classified as abnormal by the Moorfields Regression Analysis (MRA) of HRT. Results: The mean age was 65 (range 37 to 89). The global sensitivity versus specificity of each instrument in detecting glaucomatous eyes was: MDT 80% vs. 88%, SAP 80% vs. 80%, PP 70% vs. 89% and HRT 80% vs. 79%. The highest percentage agreements of HRT with PSM (G1, G2, sector, respectively) were: MDT (89%, 57%, nasal superior), SAP (83%, 74%, temporal superior) and PP (74%, 63%, nasal superior). Globally, percentage agreement (G1, G2, respectively) was MDT (92%, 28%), SAP (87%, 40%) and PP (77%, 49%). Linear regression showed no significant global trend associating RA and PSM. However, sectorally, the supero-nasal sector showed a statistically significant (p<0.001) trend for each instrument; the associated r2 coefficients were MDT 0.38, SAP 0.56 and PP 0.39.
Conclusions: There were no significant differences in global sensitivity or specificity between instruments. Structure-function relationships varied significantly between instruments and were consistently strongest supero-nasally. Further studies are required to investigate these relationships in detail.
Abstract:
OBJECTIVE: To determine the sensitivity of ultrasonography in screening for foetal malformations in the pregnant women of the Swiss Canton of Vaud. STUDY DESIGN: Retrospective study over a period of five years. METHOD: We focused our study on 512 major or minor clinically relevant malformations detectable by ultrasonography. We analysed the global sensitivity of the screening and compared the performance of the tertiary centre with that of practitioners working in private practice or regional hospitals. RESULTS: Among the 512 malformations, 181 (35%) involved the renal and urinary tract system, 137 (27%) the heart, 71 (14%) the central nervous system, 50 (10%) the digestive system, 42 (8%) the face and 31 (6%) the limbs. Global sensitivity was 54.5%. The lowest detection rate was observed for cardiac anomalies, with only 23% correct diagnoses. The tertiary centre achieved a 75% detection rate in its outpatient clinic and 83% in referred patients. Outside the referral centre, the diagnostic rate attained 47%. CONCLUSIONS: Routine foetal examination by ultrasonography in a low-risk population can detect foetal structural abnormalities. Apart from the diagnosis of cardiac abnormalities, the results in the Canton of Vaud are satisfactory and justify routine screening for malformations in a low-risk population. A prerequisite is continuing improvement in the skills of ultrasonographers through medical education.
Abstract:
The pharmacokinetic properties of a new drug, and its drug-interaction risks, must be investigated very early in the research and development process. The main objective of this thesis was to design predictive modelling approaches for the fate of a drug in the body, in the presence and absence of modulation of metabolic and transport activity. The first part of the research consisted of integrating into a physiologically based pharmacokinetic (PBPK) model the membrane efflux transport governed by P-glycoproteins (P-gp) in the heart and brain. This approach, based on in vitro-in vivo extrapolations, made it possible to predict the tissue distribution of domperidone in normal mice and in mice deficient in the genes encoding P-gp. The model confirmed the protective role of P-gp in the brain, and suggested a negligible role of P-gp in the cardiac tissue distribution of domperidone. The second part of this research was to perform a global sensitivity analysis (GSA) of the previously developed PBPK model, in order to identify the important parameters driving the variability of the predictions while accounting for correlations between physiological parameters. The important parameters were identified and were mainly the rate-limiting parameters of the transport mechanisms across the capillary membrane. The final part of the doctoral project consisted of developing a PBPK model able to predict the plasma profiles and pharmacokinetic parameters of CYP3A substrates administered orally to healthy volunteers, and to quantify the impact of metabolic drug-drug interactions (DDIs) on the pharmacokinetics of these substrates. The predicted plasma profiles and pharmacokinetic parameters of the CYP3A substrates were very comparable to those measured in clinical studies.
Some discrepancies were observed between the predictions and the clinical plasma profiles measured during DDIs. However, the impact of these inhibitions on the pharmacokinetic parameters of the studied substrates, and the inhibitory effect of the furanocoumarins contained in grapefruit juice, were predicted within a very acceptable error interval. This work helped demonstrate the ability of PBPK models to predict the pharmacokinetic impact of drug-drug interactions with acceptable and promising precision.
Abstract:
The heterogeneity of responses within a group of patients subjected to the same therapeutic regimen must be reduced during a treatment or a clinical trial. Two approaches are usually used to achieve this goal. One essentially aims to build active adherence; it is interactive, founded on the "physician-patient", "pharmacist-patient" or "veterinarian-breeder" exchange. The other, more passive and based on the characteristics of the drug itself, aims to control this irregularity upstream. The main objective of this thesis was to develop new strategies to evaluate and control the impact of irregular drug intake on therapeutic outcome. More specifically, the first part of this research consisted of proposing mathematical algorithms to efficiently estimate drug effect in a context of interindividual variability of pharmacokinetic (PK) profiles. This new method is based on the concomitant use of in vitro and in vivo data. The idea is to quantify the efficiency (that is, efficacy plus the fluctuation of in vivo concentrations) of each PK profile by incorporating, into current models for estimating in vivo efficacy, the function relating the in vitro drug concentration to the pharmacodynamic effect. Compared with traditional approaches, this combination of functions explicitly captures the fluctuation of in vivo plasma concentrations caused by the dynamics of drug intake. In addition, through several examples, it raises questions about the relevance of traditional static efficacy indices (Cmax, AUC, etc.) as tools for controlling antimicrobial resistance.
The second part of this doctoral work was to estimate the best blood sampling times in a collective therapy initiated in pigs. To do so, we developed a model of collective feeding behaviour, which was then coupled to a classical PK model. Using this combined model, it was possible to generate a typical PK profile for each particular feeding strategy. The data thus generated were used to estimate appropriate sampling times, in order to reduce the uncertainties due to irregular drug intake in the estimation of PK and PD parameters. Among the algorithms proposed for this purpose, the median-based method appears to give sampling times that are suitable both for the staff and for the animals. Finally, the last part of the research project consisted of proposing a rational approach for characterising and classifying drugs according to their ability to tolerate sporadic missed doses. Methodologically, through a global sensitivity analysis, we quantified the correlation between a drug's PK/PD parameters and the effect of irregular drug intake. This approach consisted of evaluating the influence of all PK/PD parameters concomitantly, while also taking into account the complex relationships that may exist between these parameters. The study was carried out for calcium channel blockers, antihypertensives acting according to an indirect effect model. Taking into account the correlation values thus calculated, we estimated and proposed a comparative index specific to each drug. This index can characterise and rank drugs acting through the same pharmacodynamic mechanism in terms of their forgiveness of missed doses. It was applied to four calcium channel blockers.
The results obtained were in agreement with the experimental data, reflecting the relevance and robustness of this new approach. The strategies developed in this doctoral project are essentially founded on the analysis of the complex relationships between drug-intake history, pharmacokinetics and pharmacodynamics. From this analysis, they can evaluate and control the impact of irregular drug intake with acceptable precision. More generally, the algorithms underlying these approaches will undoubtedly be efficient tools in the monitoring and treatment of patients. Moreover, they will help control the adverse effects of treatment non-adherence through the development of drugs that are forgiving of missed doses.