77 results for Numerical Uncertainty

at Université de Lausanne, Switzerland


Relevance:

30.00%

Abstract:

In groundwater applications, Monte Carlo methods are employed to model the uncertainty in geological parameters. However, their brute-force application becomes computationally prohibitive for highly detailed geological descriptions, complex physical processes, and large numbers of realizations. The Distance Kernel Method (DKM) overcomes this issue by clustering the realizations in a multidimensional space based on the flow responses obtained by means of an approximate (computationally cheaper) model, here the multiscale finite-volume (MsFV) method; the uncertainty is then estimated from the exact responses, which are computed for only one representative realization per cluster (the medoid). Usually, DKM is employed to decrease the size of the sample of realizations considered in estimating the uncertainty. We propose to use the information from the approximate responses themselves for uncertainty quantification. The subset of exact solutions provided by DKM is then employed to construct an error model and correct the potential bias of the approximate model. Two error models are devised that both employ the difference between approximate and exact medoid solutions, but differ in the way medoid errors are interpolated to correct the whole set of realizations. The Local Error Model rests upon the clustering defined by DKM and can be seen as a natural way to account for intra-cluster variability; the Global Error Model employs a linear interpolation of all medoid errors regardless of the cluster to which a given realization belongs. These error models are evaluated for an idealized pollution problem in which the uncertainty of the breakthrough curve needs to be estimated. For this numerical test case, we demonstrate that the error models improve the uncertainty quantification provided by the DKM algorithm and are effective in correcting the bias of the estimate computed solely from the MsFV results. The framework presented here is not specific to the methods considered and can be applied to other combinations of approximate models and techniques to select a subset of realizations.
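
As a rough illustration of the workflow, a minimal sketch follows; it is not the authors' implementation, and the k-means-based medoid selection, the inverse-distance weighting standing in for the paper's linear interpolation, and all names are our own assumptions:

    import numpy as np
    from scipy.cluster.vq import kmeans2

    def dkm_global_error_model(approx_responses, exact_solver, n_clusters=10, seed=0):
        # 1. Cluster realizations in the space of approximate (cheap) responses.
        # 2. Run the exact (expensive) model only for each cluster medoid.
        # 3. Spread the medoid errors (exact - approximate) over all realizations.
        X = np.asarray(approx_responses, dtype=float)  # (n_realizations, n_times)
        centroids, labels = kmeans2(X, n_clusters, minit="++", seed=seed)

        # Medoid = the realization closest to its cluster centroid.
        medoids = np.array([
            np.flatnonzero(labels == k)[
                np.argmin(np.linalg.norm(X[labels == k] - centroids[k], axis=1))
            ]
            for k in range(n_clusters)
        ])

        exact_medoid = np.array([exact_solver(i) for i in medoids])  # expensive step
        medoid_error = exact_medoid - X[medoids]

        # Global error model: weight every medoid error by inverse distance to the
        # medoids (a stand-in for the linear interpolation described above).
        d = np.linalg.norm(X[:, None, :] - X[medoids][None, :, :], axis=2)
        w = 1.0 / np.maximum(d, 1e-12)
        w /= w.sum(axis=1, keepdims=True)
        return X + w @ medoid_error, medoids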

Relevance:

30.00%

Abstract:

Shallow upland drains (grips) have been hypothesized as responsible for increased downstream flow magnitudes. Observations provide counterfactual evidence, often relating to the difficulty of inferring conclusions from statistical correlation and paired catchment comparisons, and to the complexity of designing field experiments to test grip impacts at the catchment scale. Drainage should produce drier antecedent moisture conditions, providing more storage at the start of an event; however, grips have higher flow velocities than overland flow, thus potentially delivering flow more rapidly to the drainage network. We develop and apply a model for assessing the impacts of grips on flow hydrographs. The model was calibrated on the gripped case, which was then compared with the intact case obtained by removing all grips. This comparison showed that, even given parameter uncertainty, the intact case had significantly higher flood peaks and lower baseflows, mirroring field observations of the hydrological response of intact peat. The simulations suggest that this is because delivery effects may not translate into catchment-scale impacts, for three reasons. First, in our case, the proportions of flow path lengths that were hillslope were not changed significantly by gripping. Second, the structure of the grip network as compared with the structure of the drainage basin militated against grip-related increases in the concentration of runoff in the drainage network, although it did marginally reduce the mean timing of that concentration at the catchment outlet. Third, the effect of the latter upon downstream flow magnitudes can only be assessed by reference to the peak timing of other tributary basins, emphasizing that drain effects are both relative and scale dependent. However, given the importance of hillslope flow paths, we show that if upland drainage causes significant changes in surface roughness on hillslopes, then critical and important feedbacks may impact upon the speed of hydrological response. Copyright (c) 2012 John Wiley & Sons, Ltd.

Relevance:

20.00%

Abstract:

PECUBE is a three-dimensional thermal-kinematic code capable of solving the heat production-diffusion-advection equation under a temporally varying surface boundary condition. It was initially developed to assess the effects of time-varying surface topography (relief) on low-temperature thermochronological datasets. Thermochronometric ages are predicted by tracking the time-temperature histories of rock-particles ending up at the surface and by combining these with various age-prediction models. In the decade since its inception, the PECUBE code has been under continuous development as its use became wider and addressed different tectonic-geomorphic problems. This paper describes several major recent improvements in the code, including its integration with an inverse-modeling package based on the Neighborhood Algorithm, the incorporation of fault-controlled kinematics, several different ways to address topographic and drainage change through time, the ability to predict subsurface (tunnel or borehole) data, prediction of detrital thermochronology data and a method to compare these with observations, and the coupling with landscape-evolution (or surface-process) models. Each new development is described together with one or several applications, so that the reader and potential user can clearly assess and make use of the capabilities of PECUBE. We end with describing some developments that are currently underway or should take place in the foreseeable future. (C) 2012 Elsevier B.V. All rights reserved.
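
The equation in question has the standard form below (our notation, not necessarily PECUBE's):

    \rho c \left( \frac{\partial T}{\partial t} + \mathbf{v} \cdot \nabla T \right)
      = \nabla \cdot \left( k \, \nabla T \right) + \rho H

where T is temperature, \mathbf{v} the advection (rock transport) velocity, k the thermal conductivity, \rho c the volumetric heat capacity, and H the radiogenic heat production per unit mass.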

Relevance:

20.00%

Abstract:

1. Species distribution modelling is used increasingly in both applied and theoretical research to predict how species are distributed and to understand attributes of species' environmental requirements. In species distribution modelling, various statistical methods are used that combine species occurrence data with environmental spatial data layers to predict the suitability of any site for that species. While the number of data-sharing initiatives involving species occurrences in the scientific community has increased dramatically over the past few years, various data quality and methodological concerns related to using these data for species distribution modelling have not been addressed adequately. 2. We evaluated how uncertainty in georeferences and associated locational error in occurrences influence species distribution modelling using two treatments: (1) a control treatment in which models were calibrated with original, accurate data and (2) an error treatment in which data were first degraded spatially to simulate locational error. To incorporate error into the coordinates, we shifted each coordinate by a random number drawn from a normal distribution with a mean of zero and a standard deviation of 5 km. We evaluated the influence of error on the performance of 10 commonly used distributional modelling techniques applied to 40 species in four distinct geographical regions. 3. Locational error in occurrences reduced model performance in three of these regions; relatively accurate predictions of species distributions were nevertheless possible for most species, even with degraded occurrences. Two species distribution modelling techniques, boosted regression trees and maximum entropy, were the best-performing models in the face of locational errors. The results obtained with boosted regression trees were only slightly degraded by errors in location, and the results obtained with the maximum entropy approach were not affected by such errors. 4. Synthesis and applications. To use the vast array of occurrence data that currently exists for research and management relating to the geographical ranges of species, modellers need to know the influence of locational error on model quality and whether some modelling techniques are particularly robust to error. We show that certain modelling techniques are particularly robust to a moderate level of locational error and that useful predictions of species distributions can be made even when occurrence data include some error.
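
The error treatment reduces to a one-line perturbation; a minimal sketch follows (function and variable names are ours, and we assume coordinates in a projected, kilometre-based system, since lat/lon degrees would need a conversion first):

    import numpy as np

    def degrade_coordinates(coords_km, sd_km=5.0, seed=42):
        # Shift each occurrence coordinate by a draw from N(0, sd_km),
        # independently in x and y, to simulate locational error.
        rng = np.random.default_rng(seed)
        xy = np.asarray(coords_km, dtype=float)  # shape (n_occurrences, 2)
        return xy + rng.normal(0.0, sd_km, size=xy.shape)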

Relevance:

20.00%

Abstract:

To determine the diagnostic accuracy of physicians' prior probability estimates of serious infection in critically ill neonates and children, we conducted a prospective cohort study in 2 intensive care units. Using available clinical, laboratory, and radiographic information, 27 physicians provided 2567 probability estimates for 347 patients (follow-up rate, 92%). The median probability estimate of infection increased from 0% (i.e., no antibiotic treatment or diagnostic work-up for sepsis), to 2% on the day preceding initiation of antibiotic therapy, to 20% at initiation of antibiotic treatment (P<.001). At initiation of treatment, predictions discriminated well between episodes subsequently classified as proven infection and episodes ultimately judged unlikely to be infection (area under the receiver operating characteristic curve, 0.88). Physicians also showed a good ability to predict blood culture-positive sepsis (area under the curve, 0.77). Treatment and testing thresholds were derived from the provided predictions and treatment rates. Physicians' estimates of the probability of serious infection were remarkably accurate. Studies investigating the value of new tests for the diagnosis of sepsis should establish that they add incremental value to physicians' judgment.
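
The reported discrimination can be computed from raw estimates with the rank-sum identity for the area under the ROC curve; the sketch below uses hypothetical inputs and is not the study's analysis code:

    import numpy as np

    def auc(prob_estimates, infected):
        # AUC = probability that the estimate for a randomly chosen infected
        # episode exceeds that of a non-infected one; ties count one half.
        p = np.asarray(prob_estimates, dtype=float)
        y = np.asarray(infected, dtype=bool)
        pos, neg = p[y], p[~y]
        greater = (pos[:, None] > neg[None, :]).sum()
        ties = (pos[:, None] == neg[None, :]).sum()
        return (greater + 0.5 * ties) / (pos.size * neg.size)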

Relevance:

20.00%

Abstract:

In this paper, we present and apply a new three-dimensional model for the prediction of canopy-flow and turbulence dynamics in open-channel flow. The approach uses a dynamic immersed-boundary technique that is coupled in a sequentially staggered manner to a large eddy simulation. Two different biomechanical models are developed, depending on whether the vegetation is dominated by bending or by tensile forces. For bending plants, a model based on the Euler-Bernoulli beam equation has been developed, whilst for tensile plants, an N-pendula model has been developed. Validation against flume data shows good agreement and demonstrates that, for a given stem density, the models are able to simulate the extraction of energy from the mean flow at the stem scale, which leads to the drag discontinuity and associated mixing layer.
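
For the bending case, the governing relation is the dynamic Euler-Bernoulli beam equation, given here in its general textbook form (the paper's exact formulation of the load term may differ):

    m \frac{\partial^2 w}{\partial t^2} + E I \frac{\partial^4 w}{\partial x^4} = f(x, t)

where w(x, t) is the lateral stem deflection, E I the flexural rigidity, m the stem mass per unit length, and f the distributed hydrodynamic (drag) load imposed by the resolved flow.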

Relevance:

20.00%

Abstract:

Empirical modeling of exposure levels has been popular for identifying exposure determinants in occupational hygiene. Traditional data-driven methods used to choose a model on which to base inferences have typically not accounted for the uncertainty linked to the process of selecting the final model. Several new approaches propose making statistical inferences from a set of plausible models rather than from a single model regarded as 'best'. This paper introduces the multimodel averaging approach described in the monograph by Burnham and Anderson. In their approach, a set of plausible models is defined a priori by taking into account the sample size and previous knowledge of variables that influence exposure levels. The Akaike information criterion (AIC) is then calculated to evaluate the relative support of the data for each model, expressed as an Akaike weight, interpreted as the probability that the model is the best approximating model given the model set. The model weights can then be used to rank models, quantify the evidence favoring one over another, perform multimodel prediction, estimate the relative influence of the potential predictors, and estimate multimodel-averaged effects of determinants. The whole approach is illustrated with the analysis of a data set of 1500 volatile organic compound exposure levels collected by the Institute for Work and Health (Lausanne, Switzerland) over 20 years, each concentration having been divided by the relevant Swiss occupational exposure limit and log-transformed before analysis. Multimodel inference represents a promising procedure for modeling exposure levels: it incorporates the notion that several models can be supported by the data and permits evaluation, to a certain extent, of model selection uncertainty, which is seldom mentioned in current practice.
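
The central computation is small. A minimal sketch follows; the AIC values and per-model predictions are hypothetical placeholders:

    import numpy as np

    def akaike_weights(aic):
        # Burnham & Anderson: w_i proportional to exp(-delta_i / 2),
        # where delta_i = AIC_i - min(AIC).
        delta = np.asarray(aic, dtype=float) - np.min(aic)
        w = np.exp(-0.5 * delta)
        return w / w.sum()

    aic = np.array([1210.3, 1212.1, 1215.8])  # one AIC per candidate model
    preds = np.array([0.82, 0.75, 0.91])      # each model's predicted log exposure
    w = akaike_weights(aic)                   # ranks models, quantifies evidence
    model_averaged_prediction = float(np.dot(w, preds))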

Relevance:

20.00%

Abstract:

Recent technological advances in remote sensing have enabled investigation of the morphodynamics and hydrodynamics of large rivers. However, measuring topography and flow in these very large rivers is time consuming and thus often constrains the spatial resolution and reach-length scales that can be monitored. Similar constraints exist for computational fluid dynamics (CFD) studies of large rivers, requiring maximization of mesh- or grid-cell dimensions and implying a reduction in the representation of bedform-roughness elements that are of the order of a model grid cell or less, even if they are represented in available topographic data. These "subgrid" elements must be parameterized, and this paper applies and considers the impact of roughness-length treatments that include the effect of bed roughness due to "unmeasured" topography. CFD predictions were found to be sensitive to the roughness-length specification. Model optimization was based on acoustic Doppler current profiler measurements and estimates of the water surface slope for a variety of roughness lengths. This proved difficult, as the metrics used to assess optimal model performance diverged owing to the effects of large bedforms that are not well parameterized in roughness-length treatments. However, the general spatial flow patterns are effectively predicted by the model. Changes in roughness length were shown to have a major impact upon flow routing at the channel scale. The results also indicate an absence of secondary flow circulation cells in the reach studied, and suggest that simpler two-dimensional models may have great utility in the investigation of flow within large rivers. Citation: Sandbach, S. D., et al. (2012), Application of a roughness-length representation to parameterize energy loss in 3-D numerical simulations of large rivers, Water Resour. Res., 48, W12501, doi:10.1029/2011WR011284.
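
Roughness-length treatments of this kind typically rest on the logarithmic law of the wall, shown here in its standard form (the paper's implementation details may differ):

    u(z) = \frac{u_*}{\kappa} \ln\!\left( \frac{z}{z_0} \right)

where u_* is the friction velocity, \kappa ≈ 0.40 the von Kármán constant, and z_0 the roughness length into which the unmeasured bed topography is lumped; increasing z_0 increases the near-bed momentum loss the model imposes.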

Relevance:

20.00%

Abstract:

The unstable rock slope Stampa, above the village of Flåm, Norway, shows signs of both active and postglacial gravitational deformation over an area of 11 km2. Detailed structural field mapping, annual differential Global Navigation Satellite System (GNSS) surveys, and geomorphic analysis of high-resolution digital elevation models based on airborne and terrestrial laser scanning indicate that slope deformation is complex and spatially variable. Numerical modeling was used to investigate the influence of former rockslide activity and to better understand the failure mechanism. Field observations, kinematic analysis and numerical modeling indicate a strong structural control of the unstable area. Based on the integration of the above analyses, we propose that the failure mechanism is dominated by (1) a toppling component, (2) subsiding bilinear wedge failure and (3) planar sliding along the foliation at the toe of the unstable slope. Using differential GNSS, 18 points were measured annually over a period of up to 6 years. Two of these points have an average movement of around 10 mm/year. They are located at the frontal cliff on almost completely detached blocks with volumes smaller than 300,000 m3. Large fractures indicate deep-seated gravitational deformation of volumes reaching several hundred million m3, but the movement rates in these areas are below 2 mm/year. Two different lobes of prehistoric rock slope failures were dated with terrestrial cosmogenic nuclides. While the northern lobe gave an average age of 4,300 years BP, the southern one yielded two different ages (2,400 and 12,000 years BP), which most likely represent multiple rockfall events. This reflects the currently observable deformation style, with unstable blocks in the northern part between Joasete and Furekamben, and no distinct blocks but high rockfall activity around Ramnanosi in the south. A relative susceptibility analysis concludes that small collapses of blocks along the frontal cliff will be the more frequent events. Larger collapses of free-standing blocks along the cliff with volumes > 100,000 m3, thus large enough to reach the fjord, cannot be ruled out. A larger collapse involving several million m3 is presently considered very unlikely.

Relevance:

20.00%

Abstract:

In this work we present numerical simulations of continuous-flow left ventricular assist device implantation, with the aim of comparing differences in flow rates and pressure patterns depending on the location of the anastomosis and the rotational speed of the device. Although the descending aorta anastomosis approach is less invasive, since it does not require a sternotomy and a cardiopulmonary bypass, its benefits are still controversial. Moreover, the device rotational speed should be correctly chosen to avoid anomalous flow rates and pressure distributions in specific locations of the cardiovascular tree. To assess the differences between these two approaches and between device rotational speeds in terms of flow rate and pressure waveforms, we set up numerical simulations on a network of one-dimensional models in which we account for the presence of an outflow cannula anastomosed to different locations of the aorta. We then use the resulting network to compare the two cannulations for several stages of heart failure and different rotational speeds of the device. The inflow boundary data for the heart and the cannulas are obtained from a lumped-parameter model of the entire circulatory system with an assist device, which is validated with clinical data. The results show that ascending and descending aorta cannulations lead to similar waveforms and mean flow rates in all the considered cases. Moreover, regardless of the anastomosis region, the rotational speed of the device has an important impact on wave profiles; this effect is more pronounced at high rotational speeds.
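
Network models of this kind are typically built from the one-dimensional blood-flow equations for a compliant vessel, shown here in a common general form (our notation; the paper's closure and coupling terms may differ):

    \frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} = 0, \qquad
    \frac{\partial Q}{\partial t}
      + \frac{\partial}{\partial x}\!\left( \alpha \frac{Q^2}{A} \right)
      + \frac{A}{\rho} \frac{\partial p}{\partial x} = -K_R \frac{Q}{A}

with cross-sectional area A, flow rate Q, blood density \rho, momentum-flux correction \alpha, viscous resistance K_R, and a wall law such as p = p_{ext} + \beta (\sqrt{A} - \sqrt{A_0}) closing the system. In this setting the outflow cannula is simply one more such segment, joined to the aorta at the chosen anastomosis point.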

Relevance:

20.00%

Abstract:

This thesis consists of four essays in equilibrium asset pricing. The main topic is investor heterogeneity: I investigate the equilibrium implications for financial markets when investors have different attitudes toward risk. The first chapter studies why expected risk and remuneration on the aggregate market are negatively related, even though intuition and standard theory suggest a positive relation. I show that the negative trade-off can obtain in equilibrium if investors' beliefs about economic fundamentals are procyclically biased and the market Sharpe ratio is countercyclical. I verify that such conditions hold in real markets, and I find empirical support for the risk-return dynamics predicted by the model. The second chapter consists of two essays. The first essay studies how heterogeneity in risk preferences interacts with other sources of heterogeneity and how this affects asset prices in equilibrium. Using perceived macroeconomic uncertainty as the source of heterogeneity, the model helps to explain some patterns of financial returns, even if heterogeneity is small, as suggested by survey data. The second essay determines conditions under which equilibrium prices have analytical solutions when investors have heterogeneous risk attitudes and macroeconomic fundamentals feature latent uncertainty. This approach provides additional insights relative to the previous literature, where models require numerical solutions. The third chapter studies why equity claims (i.e., assets paying a single future dividend) feature premia and risk decreasing with the horizon, even though standard models imply the opposite shape. I show that labor relations help to explain the puzzle. When workers have bargaining power to exploit partial income insurance within the firm, wages are smoother and dividends are riskier than in a standard economy. Distributional risk between workers and shareholders provides a rationale for the equity short-term risk, which leads to downward-sloping term structures of premia and risk for equity claims.