Abstract:
Arctic lowland landscapes have been modified by thermokarst lake processes throughout the Holocene. Thermokarst lakes form as a result of ice-rich permafrost degradation and may expand over time through thermal and mechanical shoreline erosion. We studied proximal and distal sedimentary records from a thermokarst lake located on the Arctic Coastal Plain of northern Alaska to reconstruct the impact of catchment dynamics and morphology on the lacustrine depositional environment and to quantify carbon accumulation in thermokarst lake sediments. Short cores were collected for analysis of pollen, sedimentological and geochemical proxies. Radiocarbon and Pb/Cs dating, as well as extrapolation of measured historic lake expansion rates, were applied to estimate a minimum lake age of ~ 1,400 calendar years BP. The pollen record is in agreement with the young lake age, as it does not include evidence of the "alder high" that occurred in the region ~ 4.0 cal ka BP. The lake most likely initiated from a remnant pond in a drained thermokarst lake basin (DTLB) and deepened rapidly, as evidenced by accumulation of laminated sediments. Increasing oxygenation of the water column, as shown by higher Fe/Ti and Fe/S ratios in the sediment, indicates shifts in ice regime with increasing water depth. More recently, the sediment source changed as the thermokarst lake expanded through lateral permafrost degradation, alternating from redeposited DTLB sediments, to increased amounts of sediment from eroding, older upland deposits, followed by a more balanced combination of both DTLB and upland sources. The characteristic shifts in sediment sources and depositional regimes in expanding thermokarst lakes were therefore archived in the thermokarst lake sedimentary record. This study also highlights the potential for Arctic lakes to recycle old carbon from thawing permafrost and thermokarst processes.
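The minimum-age estimate mentioned above rests on a simple extrapolation: dividing the distance the shoreline has migrated since lake initiation by the mean historical expansion rate. A minimal Python sketch of that arithmetic, using purely hypothetical radius and rate values rather than the study's measurements:

# Hypothetical illustration of extrapolating a minimum lake age from
# measured shoreline expansion rates (values are NOT from the study).
lake_radius_m = 600.0          # distance from assumed initiation point to modern shoreline
expansion_rate_m_per_yr = 0.4  # mean historical shoreline expansion rate

# Assuming a roughly constant expansion rate, the time needed to grow from
# a small remnant pond to the present size bounds the lake age from below.
min_age_yr = lake_radius_m / expansion_rate_m_per_yr
print(f"Minimum lake age: ~{min_age_yr:.0f} years")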
Abstract:
Single-molecule manipulation experiments on molecular motors provide essential information about the rates and conformational changes of the reaction steps located along the manipulation coordinate. This information is not always sufficient to define a particular kinetic cycle. Recent single-molecule experiments with optical tweezers showed that the DNA unwinding activity of a Phi29 DNA polymerase mutant presents a complex pause behavior, which includes short and long pauses. Here we show that different kinetic models, considering different connections between the active and the pause states, can explain the experimental pause behavior. Both the two independent pause model and the two connected pause model are able to describe the pause behavior of a mutated Phi29 DNA polymerase observed in an optical tweezers single-molecule experiment. For the two independent pause model all parameters are fixed by the observed data, while for the more general two connected pause model there is a range of parameter values compatible with the observed data (which can be expressed in terms of two of the rates and their force dependencies). This general model includes models with indirect entry into and exit from the long-pause state, as well as models with cycling in both directions. Additionally, assuming that detailed balance holds, which forbids cycling, reduces the range of parameter values (which can then be expressed in terms of one rate and its force dependency). The resulting model interpolates between the independent pause model and the model with indirect entry into and exit from the long-pause state.
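As a rough illustration of the kinetic schemes being compared, the sketch below simulates a three-state model (an active state plus a short-pause and a long-pause state) in which both pauses are entered from and exited to the active state only, i.e. the "two independent pause" topology. The rate constants are hypothetical, no force dependence is included, and this is not the authors' code or fitted parameters.

# Minimal sketch (not the authors' model fit) of a three-state kinetic scheme:
# active state A, short-pause state P1, long-pause state P2.
import random

rates = {                 # hypothetical rate constants (s^-1)
    ("A", "P1"): 0.5,     # entry into short pause
    ("P1", "A"): 2.0,     # exit from short pause
    ("A", "P2"): 0.05,    # entry into long pause
    ("P2", "A"): 0.2,     # exit from long pause
}

def gillespie(state="A", t_end=1000.0, seed=1):
    """Stochastic simulation of the state trajectory; returns pause dwell times."""
    random.seed(seed)
    t, dwells = 0.0, {"P1": [], "P2": []}
    while t < t_end:
        outgoing = {s2: k for (s1, s2), k in rates.items() if s1 == state}
        k_tot = sum(outgoing.values())
        dt = random.expovariate(k_tot)     # waiting time in the current state
        t += dt
        if state in dwells:
            dwells[state].append(dt)       # record pause durations
        # choose the next state with probability proportional to its rate
        r, acc = random.uniform(0, k_tot), 0.0
        for s2, k in outgoing.items():
            acc += k
            if r <= acc:
                state = s2
                break
    return dwells

d = gillespie()
print(len(d["P1"]), "short pauses,", len(d["P2"]), "long pauses")

Adding entries such as ("P1", "P2") to the rate dictionary would be one way to encode the connected-pause topology; as the abstract notes, both topologies can reproduce the observed pause behavior, so the connectivity is not fixed by the data alone.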
Abstract:
The authors thank Professor Iberê Luiz Caldas for the suggestions and encouragement. The authors F.F.G.d.S., R.M.R., J.C.S., and H.A.A. acknowledge the Brazilian agency CNPq and state agencies FAPEMIG, FAPESP, and FAPESC, and M.S.B. also acknowledges the EPSRC Grant Ref. No. EP/I032606/1.
Abstract:
In this work we explore optimising parameters of a physical circuit model relative to input/output measurements, using the Dallas Rangemaster Treble Booster as a case study. A hybrid metaheuristic/gradient descent algorithm is implemented, where the initial parameter sets for the optimisation are informed by nominal values from schematics and datasheets. Sensitivity analysis is used to screen parameters, which informs a study of the optimisation algorithm against model complexity by fixing parameters. The results of the optimisation show a significant increase in the accuracy of model behaviour, but also highlight several key issues regarding the recovery of parameters.
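As a sketch of the hybrid strategy described (a global metaheuristic search seeded around nominal component values, followed by local gradient-based refinement), the toy example below fits a first-order RC low-pass magnitude response to synthetic "measurements" with SciPy. The circuit, component values and data are illustrative; this is not the Rangemaster model from the paper.

# Hedged sketch of hybrid metaheuristic + gradient refinement for fitting
# circuit component values to measured responses (toy RC model, not the paper's).
import numpy as np
from scipy.optimize import differential_evolution, minimize

freqs = np.logspace(1, 4, 50)                       # Hz
def rc_lowpass_mag(params, f=freqs):
    R, C = params
    return 1.0 / np.sqrt(1.0 + (2 * np.pi * f * R * C) ** 2)

true = (10e3, 22e-9)                                # "measured" device (synthetic)
measured = rc_lowpass_mag(true) + np.random.default_rng(0).normal(0, 0.01, freqs.size)

def loss(params):
    return np.mean((rc_lowpass_mag(params) - measured) ** 2)

nominal = (8.2e3, 33e-9)                            # schematic/datasheet values
bounds = [(0.5 * n, 2.0 * n) for n in nominal]      # search window around nominal

coarse = differential_evolution(loss, bounds, seed=0, maxiter=50)     # metaheuristic stage
refined = minimize(loss, coarse.x, method="L-BFGS-B", bounds=bounds)  # gradient stage
print(refined.x)

Note that only the product R*C is identifiable from this magnitude response, so the refined R and C individually need not match the values that generated the data; the same kind of identifiability limit underlies the parameter-recovery issues highlighted in the abstract.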
Abstract:
This paper examines assumptions about future prices used in real estate applications of DCF models. We confirm both the widespread reliance on an ad hoc rule of increasing period-zero capitalization rates by 50 to 100 basis points to obtain terminal capitalization rates and the inability of the rule to project future real estate pricing. To understand how investors form expectations about future prices, we model the spread between the contemporaneous period-zero going-in and terminal capitalization rates and the spread between terminal rates assigned in period zero and going-in rates assigned in period N. Our regression results confirm statistical relationships between the terminal and next-holding-period going-in capitalization rate spread and the period-zero discount rate, although other economically significant variables are statistically insignificant. Linking terminal capitalization rates by assumption to going-in capitalization rates implies investors view future real estate pricing with myopic expectations. We discuss alternative specifications devoid of such linkage that align more closely with a rational expectations view of future real estate pricing.
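For concreteness, the ad hoc rule the paper examines can be written as terminal cap rate = going-in cap rate + 50 to 100 bps, with the reversion value obtained by capitalizing the year N+1 net operating income. A small worked example in Python, with all cash flows, growth and discount rates invented purely for illustration:

# Illustrative sketch of the ad hoc terminal cap rate rule (hypothetical numbers).
going_in_cap = 0.060          # period-zero going-in capitalization rate
spread_bps = 75               # ad hoc spread, mid-point of the 50-100 bps rule
terminal_cap = going_in_cap + spread_bps / 10_000

noi0 = 1_000_000              # year-1 net operating income
growth, discount, N = 0.02, 0.08, 10

nois = [noi0 * (1 + growth) ** t for t in range(N + 1)]      # NOI for years 1..N+1
terminal_value = nois[N] / terminal_cap                       # resale value at end of year N
pv_noi = sum(nois[t] / (1 + discount) ** (t + 1) for t in range(N))
pv_reversion = terminal_value / (1 + discount) ** N
print(f"Terminal cap rate: {terminal_cap:.2%}, DCF value: {pv_noi + pv_reversion:,.0f}")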
Abstract:
Mathematical models are increasingly used in environmental science, thus increasing the importance of uncertainty and sensitivity analyses. In the present study, an iterative parameter estimation and identifiability analysis methodology is applied to an atmospheric model, the Operational Street Pollution Model (OSPM). To assess the predictive validity of the model, the data are split into an estimation and a prediction data set using two data-splitting approaches, and data preparation techniques (clustering and outlier detection) are analysed. The sensitivity analysis, being part of the identifiability analysis, showed that some model parameters were significantly more sensitive than others. The application of the determined optimal parameter values was shown to successfully equilibrate the model biases among the individual streets and species. It was also shown that the frequentist approach applied for the uncertainty calculations underestimated the parameter uncertainties. The model parameter uncertainty was qualitatively assessed to be significant, and reduction strategies were identified.
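The parameter screening step can be pictured as ranking dimensionless local sensitivities of the model output to small perturbations of each parameter around its nominal value. The sketch below does this for a toy exponential-decay model; the model form and parameter names are placeholders and are not the OSPM parameterization.

# Hedged sketch of one-at-a-time sensitivity screening for parameter identifiability.
import numpy as np

def model(p):
    """Toy response curve depending on three parameters (illustrative only)."""
    a, b, c = p
    x = np.linspace(0, 1, 100)
    return a * np.exp(-b * x) + c

p0 = np.array([1.0, 2.0, 0.1])                        # nominal parameter values
names = ["emission_factor", "dispersion_rate", "background"]

def sensitivity(i, rel_step=0.01):
    """Dimensionless local sensitivity (dy/dp_i * p_i) via a forward difference."""
    dp = p0.copy()
    dp[i] *= 1 + rel_step
    return np.linalg.norm((model(dp) - model(p0)) / rel_step)

ranking = sorted(range(len(p0)), key=sensitivity, reverse=True)
for i in ranking:
    print(f"{names[i]:>16s}: sensitivity {sensitivity(i):.3f}")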
Abstract:
A deterministic model of tuberculosis in Cameroon is designed and analyzed with respect to its transmission dynamics. The model includes lack of access to treatment and weak diagnosis capacity, as well as both frequency- and density-dependent transmission. It is shown that the model is mathematically well-posed and epidemiologically reasonable. Solutions are non-negative and bounded whenever the initial values are non-negative. A sensitivity analysis of model parameters is performed and the most sensitive ones are identified by means of a state-of-the-art Gauss-Newton method. In particular, parameters representing the proportion of individuals having access to medical facilities are seen to have a large impact on the dynamics of the disease. The model predicts that a gradual increase of these parameters could significantly reduce the disease burden on the population within the next 15 years.
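To make the role of the access-to-treatment parameters concrete, the toy compartmental model below (susceptible, infectious, under treatment) includes a frequency-dependent transmission term and a parameter `access` for the proportion of infectious individuals reaching care. It is a deliberately simplified stand-in, not the Cameroon model analyzed in the paper, and all rates are hypothetical.

# Minimal sketch, NOT the paper's model: a toy S-I-T system where `access`
# plays the role of the proportion of infectious individuals reaching care.
from scipy.integrate import solve_ivp

def tb_toy(t, y, beta=0.6, access=0.4, tau=2.0, gamma=0.5, self_cure=0.1, mu=0.02):
    S, I, T = y
    N = S + I + T
    infection = beta * S * I / N                       # frequency-dependent transmission
    dS = mu * N - infection + self_cure * I + gamma * T - mu * S
    dI = infection - (access * tau + self_cure + mu) * I
    dT = access * tau * I - (gamma + mu) * T
    return [dS, dI, dT]

y0 = [0.99, 0.01, 0.0]
for access in (0.2, 0.4, 0.8):                         # gradual increase in access to care
    sol = solve_ivp(tb_toy, (0, 15), y0, args=(0.6, access), t_eval=[15])
    print(f"access={access:.1f} -> infectious fraction after 15 yr: {sol.y[1, -1]:.4f}")

With these illustrative rates, raising `access` lowers the infectious fraction over the 15-year horizon, mirroring the qualitative prediction stated in the abstract.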
Abstract:
This article shows a general way to implement the calculation of recursive functions using linear tail recursion. It emphasizes the use of tail recursion to perform computations efficiently.
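As a concrete illustration of the technique, the snippet below contrasts a tree-recursive definition with a linear tail-recursive one that threads the pending work through accumulator arguments; Fibonacci is used here as a stand-in, since the article's own examples are not reproduced.

def fib_naive(n):
    """Tree recursion: exponential number of calls."""
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

def fib_tail(n, a=0, b=1):
    """Linear tail recursion: the recursive call is the last operation,
    and all work in progress is carried in the accumulators a and b."""
    return a if n == 0 else fib_tail(n - 1, b, a + b)

print(fib_naive(10), fib_tail(10))  # 55 55
# Note: CPython does not eliminate tail calls, so fib_tail still consumes one
# stack frame per step; the same accumulator pattern maps directly onto a loop
# (or onto languages that do optimise tail calls).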
Abstract:
The ability to estimate the impact of ongoing climate change on the hydrological behavior of hydro-systems is necessary to anticipate the inevitable and necessary adaptations that our societies must consider. In this context, this doctoral project presents a study of the sensitivity of future hydrological projections to: (i) the non-robustness of the identification of hydrological model parameters, (ii) the use of several equifinal parameter sets, and (iii) the use of different hydrological model structures. To quantify the impact of the first source of uncertainty on model outputs, four climatically contrasted sub-periods are first identified within the observed records. The models are calibrated on each of these four periods and the resulting outputs are analyzed in calibration and in validation following the four configurations of the Differential Split-Sample Test (Klemeš, 1986; Wilby, 2005; Seiller et al., 2012; Refsgaard et al., 2014). To study the second source of uncertainty, related to parameter equifinality, the outputs associated with equifinal parameter sets are then considered for each type of calibration. Finally, to evaluate the third source of uncertainty, five hydrological models of different levels of complexity (GR4J, MORDOR, HSAMI, SWAT and HYDROTEL) are applied to the Au Saumon River watershed in Quebec. The three sources of uncertainty are evaluated both under past observed climatic conditions and under future climatic conditions. The results show that, given the evaluation method followed in this doctoral work, the use of hydrological models of different levels of complexity is the main source of variability in streamflow projections under future climatic conditions, followed by the lack of robustness of parameter identification. The hydrological projections generated by an ensemble of equifinal parameter sets are close to those associated with the optimal parameter set. Consequently, more effort should be invested in improving the robustness of the models for climate change impact studies, in particular by developing more appropriate model structures and by proposing calibration procedures that increase their robustness. This work provides a detailed answer regarding our ability to diagnose the impacts of climate change on the water resources of the Au Saumon watershed and proposes an original methodological framework that can be directly applied or adapted to other hydro-climatic contexts.
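The calibration/validation design described above can be pictured as a loop in which a model is calibrated on one climatically contrasted sub-period and evaluated on the others. The schematic below shows only that loop structure; `calibrate` and `evaluate` are placeholders, the period labels are invented, and none of the five models (GR4J, MORDOR, HSAMI, SWAT, HYDROTEL) is implemented here.

# Schematic sketch of a split-sample calibration/validation loop (placeholders only).
from itertools import permutations

periods = ["dry_1", "wet_1", "dry_2", "wet_2"]   # hypothetical contrasted sub-periods

def calibrate(period):
    """Stand-in: would return a parameter set (or equifinal sets) fitted on `period`."""
    return {"theta": f"parameters fitted on {period}"}

def evaluate(params, period):
    """Stand-in: would return a performance score (e.g. NSE or KGE) on `period`."""
    return 0.0

scores = {}
for cal_period, val_period in permutations(periods, 2):   # every calibration/validation pair
    params = calibrate(cal_period)
    scores[(cal_period, val_period)] = evaluate(params, val_period)
print(scores)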