974 results for Two-state Potts model
Abstract:
This thesis is concerned with state and parameter estimation in state space models. The estimation of states and parameters is an important task when mathematical modeling is applied in many different areas, such as global positioning systems, target tracking, navigation, brain imaging, the spread of infectious diseases, biological processes, telecommunications, audio signal processing, stochastic optimal control, machine learning, and physical systems. In Bayesian settings, the estimation of states or parameters amounts to computing the posterior probability density function. Except for a very restricted class of models, it is impossible to compute this density function in closed form; hence, approximation methods are needed. A state estimation problem involves estimating the states (latent variables) that are not directly observed in the output of the system. In this thesis, we use the Kalman filter, the extended Kalman filter, Gauss–Hermite filters, and particle filters to estimate the states from the available measurements. Among these filters, particle filters are numerical methods that approximate the filtering distributions of non-linear, non-Gaussian state space models via Monte Carlo. The performance of a particle filter depends heavily on the chosen importance distribution; an inappropriate choice can cause the particle filter algorithm to fail to converge. In this thesis, we analyze the theoretical Lᵖ convergence of the particle filter with general importance distributions, where p ≥ 2 is an integer. A parameter estimation problem is concerned with inferring the model parameters from measurements. For high-dimensional complex models, parameter estimation can be carried out with Markov chain Monte Carlo (MCMC) methods, which require the unnormalized posterior distribution of the parameters and a proposal distribution. In this thesis, we show how the posterior density function of the parameters of a state space model can be computed by filtering-based methods, in which the states are integrated out. This type of computation is then applied to estimate the parameters of stochastic differential equations. Furthermore, we compute the partial derivatives of the log-posterior density function and use the hybrid Monte Carlo and scaled conjugate gradient methods to infer the parameters of stochastic differential equations. The computational efficiency of MCMC methods depends heavily on the chosen proposal distribution. A commonly used proposal distribution is Gaussian, and for this kind of proposal the covariance matrix must be well tuned; adaptive MCMC methods can be used to tune it. In this thesis, we propose a new way of updating the covariance matrix using the variational Bayesian adaptive Kalman filter algorithm.
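Since the abstract stresses how strongly particle-filter performance depends on the importance distribution, the following minimal sketch shows a bootstrap particle filter (transition prior used as the importance distribution) for a toy one-dimensional state space model; the model, noise levels, and particle count are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf(y, n_particles=500, q=0.1, r=0.5):
    """Bootstrap particle filter for a toy model:
       x_t = 0.9 x_{t-1} + q*N(0,1),  y_t = x_t^2/20 + r*N(0,1).
       The transition prior serves as the importance distribution."""
    T = len(y)
    x = rng.normal(0.0, 1.0, n_particles)                 # initial particles
    means = np.empty(T)
    for t in range(T):
        x = 0.9 * x + q * rng.normal(size=n_particles)    # propagate particles
        logw = -0.5 * ((y[t] - x**2 / 20.0) / r) ** 2      # Gaussian log-likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means[t] = np.sum(w * x)                           # filtering mean estimate
        x = rng.choice(x, size=n_particles, p=w)           # multinomial resampling
    return means

# Simulate data from the same toy model and run the filter.
T = 100
x_true, y = np.zeros(T), np.zeros(T)
x = 0.0
for t in range(T):
    x = 0.9 * x + 0.1 * rng.normal()
    x_true[t] = x
    y[t] = x**2 / 20.0 + 0.5 * rng.normal()
print(np.round(bootstrap_pf(y)[:5], 3))
```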
Abstract:
A longitudinal and prospective study was carried out at two state-operated maternity hospitals in Belo Horizonte during 1996 in order to assess the weight of preterm appropriate-for-gestational-age newborns during the first twelve weeks of life. Two hundred and sixty appropriate-for-gestational-age preterm infants with birth weight <2500 g were evaluated weekly. The infants were divided into groups based on birth weight at 250-g intervals. Using weight means, somatic growth curves were constructed and fitted to Count's model. Absolute (g/day) and relative (g kg-1 day-1) velocity curves were obtained from the derivative of this model. The growth curve was characterized by weight loss during the 1st week (4-6 days) ranging from 5.9 to 13.3% (the greater the percentage, the lower the birth weight), recovery of birth weight between 17 and 21 days, and increasingly higher rates of weight gain after the 3rd week. These rates were proportional to birth weight when expressed as g/day (the lowest and the highest birth weight neonates gained 15.9 and 30.1 g/day, respectively). However, if expressed as g kg-1 day-1, the rates were inversely proportional to birth weight (during the 3rd week, the lowest and the highest weight newborns gained 18.0 and 11.5 g kg-1 day-1, respectively). During the 12th week the rates were similar for all groups (7.5 to 10.2 g kg-1 day-1). The relative velocity accurately reflects weight gain of preterm infants who are appropriate for gestational age and, in the present study, it was inversely proportional to birth weight, with a peak during the 3rd week of life, and a homogeneous behavior during the 12th week for all weight groups.
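As a rough illustration of the curve-fitting step described above, the sketch below fits a Count-type model of the assumed form W(t) = A + B·t + C·ln(t) to hypothetical weekly weights and differentiates it to obtain absolute (g/day) and relative (g kg-1 day-1) velocities; the weight values are invented for illustration and are not the study's data.

```python
import numpy as np

# Hypothetical weekly mean weights (grams) for one birth-weight group
# (values invented for illustration, not the study's data).
weeks = np.arange(1, 13)            # postnatal weeks 1..12
t = weeks * 7.0                     # age in days
w = np.array([1210, 1190, 1240, 1330, 1440, 1560, 1690, 1830,
              1980, 2140, 2300, 2470], dtype=float)

# Fit a Count-type model W(t) = A + B*t + C*ln(t) by ordinary least squares.
X = np.column_stack([np.ones_like(t), t, np.log(t)])
A, B, C = np.linalg.lstsq(X, w, rcond=None)[0]

w_fit = A + B * t + C * np.log(t)
abs_vel = B + C / t                          # dW/dt, in g/day
rel_vel = abs_vel / (w_fit / 1000.0)         # g per kg per day

for wk, av, rv in zip(weeks, abs_vel, rel_vel):
    print(f"week {wk:2d}: {av:5.1f} g/day, {rv:5.1f} g kg^-1 day^-1")
```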
Abstract:
Torrefaction is a moderate thermal treatment (~200-300 °C) of biomass in an inert atmosphere. Torrefied fuel offers advantages over traditional biomass, such as a higher heating value, reduced hydrophilic nature, increased resistance to biological decay, and improved grindability. These factors could, for instance, lead to better handling and storage of biomass and increased use of biomass in pulverized combustors. In this work, we look at several aspects of the changes in biomass during torrefaction. We investigate the fate of carboxylic groups during torrefaction and its relation to the equilibrium moisture content. The changes in the wood components, including carbohydrates, lignin, extractable materials and ash-forming matter, are also studied. Finally, the effect of potassium (K) on torrefaction is investigated and then modeled. In biomass, carboxylic sites are partially responsible for its hydrophilic character. These sites are degraded to varying extents during torrefaction. In this work, methylene blue sorption and potentiometric titration were applied to measure the concentration of carboxylic groups in torrefied spruce wood. Both methods proved applicable and their values agreed well. A decrease in the equilibrium moisture content at different humidities was also measured for the torrefied wood samples, in good agreement with the decrease in carboxylic group content. Thus, both methods offer a means of directly measuring the decomposition of carboxylic groups in biomass during torrefaction, a valuable parameter for evaluating the extent of torrefaction. This provides new information about the chemical changes occurring during torrefaction. The effect of torrefaction temperature on the chemistry of birch wood was investigated. The samples came from a pilot plant at the Energy research Centre of the Netherlands (ECN) and were therefore representative of industrially produced samples. Sugar analysis was applied to follow the hemicellulose and cellulose content during torrefaction. The results show significant degradation of hemicellulose already at 240 °C, while cellulose degradation becomes significant only above 270 °C. Several methods, including the Klason lignin method, solid-state NMR and Py-GC-MS analyses, were applied to measure the changes in lignin during torrefaction. The changes in the ratio of phenyl, guaiacyl and syringyl units show that lignin degrades to a small extent already at 240 °C. To investigate the changes in the acetone-extractable material during torrefaction, a gravimetric method, HP-SEC and GC-FID followed by GC-MS analysis were performed. The content of acetone-extractable material increases already at 240 °C through the degradation of carbohydrates and lignin. The molecular weight of the acetone-extractable material decreases with increasing torrefaction temperature. The formation of some valuable materials, such as syringaresinol and vanillin, is also observed, which is important from a biorefinery perspective. To investigate the changes in the chemical association of ash-forming elements in birch wood during torrefaction, chemical fractionation was performed on the original and torrefied birch samples. These results give a first understanding of the changes in the association of ash-forming elements during torrefaction. The most significant changes are seen in the distribution of calcium, magnesium and manganese, with some change in the water solubility of potassium.
These changes may in part be due to the destruction of carboxylic groups. In addition to some changes in the water and acid solubility of phosphorus, a clear decrease in the concentrations of both chlorine and sulfur was observed. This would be a significant additional benefit for the combustion of torrefied biomass. Another objective of this work was to study the impact of organically bound K, Na, Ca and Mn on the mass loss of biomass during torrefaction. These elements are of interest because they have been shown to be catalytically active in solid fuels during pyrolysis and/or gasification. The biomasses were first acid washed to remove the ash-forming matter, and the organic sites were then doped with K, Na, Ca or Mn. The results show that K and Na bound to organic sites can significantly increase the mass loss during torrefaction. Mn bound to organic sites also increases the mass loss, whereas Ca addition does not influence the mass loss rate during torrefaction. This increase in mass loss with alkali addition is unlike what has been found for pyrolysis, where alkali addition results in a reduced mass loss. These results are important for the future operation of torrefaction plants, which will likely be designed to handle various biomasses with significantly different K contents. They imply that shorter retention times are possible for biomasses with a high K content. The mass loss of spruce wood with different K contents was modeled using a two-step reaction model based on four kinetic rate constants. The results show that the mass loss of spruce wood doped with different levels of K can be modeled using the same activation energies but different pre-exponential factors for the rate constants. Three of the pre-exponential factors increased linearly with increasing K content, while one decreased with increasing K content. A new torrefaction model was therefore formulated using the hemicellulose, cellulose and K contents. The new torrefaction model was validated against the mass loss during torrefaction of aspen, miscanthus, straw and bark. The agreement between the model and the experimental data is good for all of these biomasses except bark, for which the mass loss of acetone-extractable material also needs to be taken into account. The new model can describe the kinetics of mass loss during torrefaction of different types of biomass, which is important for fuel flexibility in torrefaction plants.
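A minimal sketch of the kind of two-step, four-rate-constant kinetic scheme described above, with fixed activation energies and pre-exponential factors varying linearly with the K content; all numerical values are placeholders for illustration, not the fitted parameters of this work.

```python
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314  # gas constant, J mol^-1 K^-1

def rate_constants(T, K_wt):
    """Arrhenius rate constants for the two-step scheme
       A -> B + V1 (k_B, k_V1),  B -> C + V2 (k_C, k_V2).
       Activation energies are held fixed; the pre-exponential factors
       vary linearly with the K content K_wt (wt%).  All numbers are
       placeholders, not fitted values from the thesis."""
    Ea = np.array([100e3, 110e3, 120e3, 130e3])     # J/mol, fixed
    A0 = np.array([2e8, 5e8, 1e9, 3e9])             # 1/s at K_wt = 0
    slope = np.array([1e8, 2e8, 4e8, -5e8])         # linear K dependence (one decreases)
    return (A0 + slope * K_wt) * np.exp(-Ea / (R * T))

def solid_mass(t_end, T, K_wt):
    kB, kV1, kC, kV2 = rate_constants(T, K_wt)
    def rhs(t, y):
        A, B, C = y
        return [-(kB + kV1) * A,
                kB * A - (kC + kV2) * B,
                kC * B]
    sol = solve_ivp(rhs, (0.0, t_end), [1.0, 0.0, 0.0])
    A, B, C = sol.y[:, -1]
    return A + B + C        # remaining solid fraction

for K in (0.0, 0.05, 0.1):                          # illustrative K levels (wt%)
    print(K, round(solid_mass(3600.0, 553.0, K), 3))  # 1 h at 280 °C
```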
Abstract:
Photosynthetic state transitions were investigated in the cyanobacterium Synechococcus sp. PCC 7002 in both wild-type cells and mutant cells lacking phycobilisomes. Preillumination in the presence of DCMU (3-(3,4-dichlorophenyl)-1,1-dimethylurea) induced state 1, and dark adaptation induced state 2, in both wild-type and mutant cells, as determined by 77 K fluorescence emission spectroscopy. Light-induced transitions were observed in the wild type after preferential excitation of phycocyanin (state 2) or preferential excitation of chlorophyll a (state 1). The state 1 and state 2 transitions in the wild type had half-times of approximately 10 seconds. Cytochrome f and P700 oxidation kinetics could not be correlated with any current state transition model, as cells in state 1 showed faster oxidation kinetics regardless of excitation wavelength. Light-induced transitions were also observed in the phycobilisome-less mutant after preferential excitation of short-wavelength chlorophyll a (state 2) or of carotenoids and long-wavelength chlorophyll a (state 1). One-dimensional electrophoresis revealed no significant differences in the phosphorylation patterns of resolved proteins between wild-type cells in state 1 and state 2. It is concluded that the mechanism of the light state transition in cyanobacteria does not require the presence of the phycobilisome. The results contradict proposed models for the state transition that require an active role for the phycobilisome.
Abstract:
G protein-coupled receptors (GPCRs) form one of the largest families of membrane receptors encoded by the human genome and certainly the largest family of receptors. Located in plasma membranes, they are responsible for a wide variety of cellular responses. Their activation by ligands was traditionally associated with a conformational change of the protein, from an inactive state to an active state. However, some observations contradicted this theory and suggested the existence of several active conformations of the receptor. These different conformations could be active for certain signaling or regulatory pathways and inactive for others. This phenomenon, initially called biased agonism, is now described as the functional selectivity of GPCR ligands. In theory, this selectivity among signaling and regulatory pathways would make it possible to develop ligands that target only the pathways responsible for the therapeutic effects without activating those responsible for side or adverse effects. The delta opioid receptor (DOR) is a GPCR involved in the management of chronic pain. The analgesic action of its ligands is, however, subject to tolerance upon long-term use, a side effect that limits the therapeutic use of these drugs. This thesis therefore focused on the functional selectivity of DOR ligands in order to evaluate the possibility of reducing the tolerance produced by these molecules. First, we determined that the DOR can be stabilized in several ligand-dependent active conformations, and that these conformations have different activation profiles across signaling and regulatory pathways. Second, we determined that different DOR ligands stabilize conformations of the receptor/G protein complex that do not agree with the two-state receptor theory, suggesting instead the presence of a multitude of active conformations. Finally, we showed that these different conformations interact distinctly with GPCR regulatory proteins; the ligand favoring recycling of the receptor to the membrane produced less desensitization and less acute tolerance to analgesia than the ligand favoring sequestration of the receptor inside the cell. The results of this thesis show that the functional selectivity of opioid ligands could be exploited to develop new analgesics that produce less tolerance.
Abstract:
Lattice models such as percolation, Ising and Potts models are used to describe phase transitions in two dimensions. The search for their analytical solution proceeds through the computation of the partition function and the diagonalization of transfer matrices. At the critical point, these two-dimensional statistical models are invariant under conformal transformations, and the construction of rational conformal field theories, the continuum limits of the statistical models, allows the partition function at the critical point to be computed. Several researchers believe, however, that the paradigm of rational conformal field theories can be extended to include statistical models with non-diagonalizable transfer matrices. These models would then be described, in the scaling limit, by logarithmic conformal field theories, and the representations of the Virasoro algebra entering the description of the physical observables would be indecomposable. The loop transfer matrix D_N(λ, u), an element of the Temperley-Lieb algebra, appears in physical theories through the link representations ρ (link modules). The vector space on which this representation acts decomposes into sectors labeled by a physical parameter, the number d of defects. The action of this representation can only decrease this number or leave it unchanged. The thesis is devoted to identifying the Jordan structure of D_N(λ, u) in these representations. The parameter β = 2 cos λ = −(q + 1/q) fixes the theory: β = 1 for percolation and √2 for the Ising model, for example. On the strip geometry, we show that D_N(λ, u) has the same Jordan blocks as F_N, its highest Fourier coefficient. We study the non-diagonalizability of F_N through the divergences of certain components of its eigenvectors, which appear at the critical values of λ. We prove, in ρ(D_N(λ, u)), the existence of intersector Jordan cells of rank 2 coupling sectors d and d′ when certain constraints on λ, d, d′ and N are satisfied. For the critical dense polymer model (β = 0) on the strip, the eigenvalues of ρ(D_N(λ, u)) were known, but their degeneracies only conjectured. By constructing an isomorphism between the link modules and a subspace of the spin modules of the XXZ model at q = i, we prove this conjecture. We also show that the restriction of the loop Hamiltonian to a given sector is diagonalizable, and we find the exact Jordan form of the XX Hamiltonian, which is non-trivial only for even N. Finally, we study the Jordan structure of the transfer matrix T_N(λ, ν) for periodic boundary conditions. The matrix T_N(λ, ν) has intrasector and intersector Jordan blocks when λ = πa/b, with a, b ∈ Z×. The F_N approach admits a generalization that can diagnose intersector cells whose rank exceeds 2 in certain cases and can grow without bound with N. For the intrasector Jordan blocks, we show that the link representations on the cylinder and those of the XXZ model are isomorphic except for certain specific values of q and of the twist parameter v. Using the behaviour of the transformation i_N^d in a neighbourhood of the critical values (q_c, v_c), we construct explicitly rank-2 generalized Jordan vectors and discuss the existence of intrasector Jordan blocks of higher rank.
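The Jordan structures discussed above can, for small matrices, be probed numerically from the ranks of powers of (M − λI); the sketch below implements this generic rank criterion on a toy matrix, not on an actual Temperley-Lieb transfer matrix.

```python
import numpy as np

def jordan_block_sizes(M, lam, tol=1e-9):
    """Number of Jordan blocks of each size for eigenvalue lam,
       read off from ranks of (M - lam*I)^k:
       #blocks of size >= k  =  rank((M - lam I)^{k-1}) - rank((M - lam I)^k)."""
    n = M.shape[0]
    N = M - lam * np.eye(n)
    ranks = [n]
    P = np.eye(n)
    for _ in range(n):
        P = P @ N
        ranks.append(np.linalg.matrix_rank(P, tol=tol))
        if ranks[-1] == ranks[-2]:
            break
    at_least = [ranks[k - 1] - ranks[k] for k in range(1, len(ranks))]
    sizes = {}
    for k in range(1, len(at_least) + 1):
        exactly = at_least[k - 1] - (at_least[k] if k < len(at_least) else 0)
        if exactly > 0:
            sizes[k] = exactly
    return sizes

# Toy example: one 2x2 Jordan block and one 1x1 block at eigenvalue 0.
M = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
print(jordan_block_sizes(M, 0.0))   # {1: 1, 2: 1}
```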
Abstract:
Carbon dioxide (CO2) is a natural by-product of cellular metabolism, the third most abundant substance in the blood, and an important vasoactive agent. With the slightest variation in the blood CO2 content, the resistance of the cerebral vascular system and cerebral tissue perfusion undergo global changes. Although the exact mechanisms underlying this effect remain to be elucidated, the phenomenon has been widely exploited in studies of cerebrovascular reactivity (CVR). A promising avenue for assessing cerebral vascular function is the non-invasive mapping of CVR using functional Magnetic Resonance Imaging (fMRI). Quantitative, non-invasive measures of CVR can be obtained with techniques such as the manipulation of the arterial CO2 content (PaCO2) combined with Arterial Spin Labeling (ASL), which measures the changes in cerebral perfusion evoked by vascular stimuli. However, concerns about the sensitivity and reliability of CVR measurements currently limit the wider adoption of these modern fMRI methods. I considered that a thorough analysis and improvement of the available methods could make a valuable contribution to the field of biomedical engineering and help advance the development of new diagnostic imaging tools. In this thesis I present a series of studies in which I examine the impact of alternative vascular stimulation/imaging methods on CVR measurements and ways to improve the sensitivity and reliability of such methods. I also include a theoretical manuscript in which I examine the possible contribution of an overlooked factor in the CVR phenomenon: variations in blood osmotic pressure induced by the products of CO2 dissolution. Besides the general introduction (Chapter 1) and the conclusions (Chapter 6), this thesis comprises four other chapters, in which five different studies are presented as scientific articles accepted for publication in different scientific journals. Each chapter begins with its own introduction, consisting of a more detailed description of the context motivating the associated manuscript(s) and a brief summary of the reported results. A detailed account of the methods and results can be found in the manuscript(s). In the study that makes up Chapter 2, I compare the sensitivity of two state-of-the-art ASL techniques and show that the latest implementation of continuous ASL, pCASL, provides more robust CVR measurements than older pulsed methods. In Chapter 3, I compare CVR measurements obtained with pCASL using four different respiratory methods for manipulating arterial CO2 (PaCO2) and show that the results can vary significantly when the manipulations are not designed to operate within the linear range of the CO2 dose-response curve. Chapter 4 comprises two complementary studies aiming to determine the level of reproducibility that can be achieved with the more recent methods for measuring CVR. The first study resulted in the technical development of a device that allows respiratory CO2 manipulations in a simple, safe and robust manner. The improved respiratory method was used in the second, neuroimaging, study, in which the sensitivity and reproducibility of CVR measured by pCASL were examined. The pCASL imaging technique detected CO2-induced perfusion responses in about 90% of the human cerebral cortex, and the reproducibility of these measurements was comparable to that of other hemodynamic measures already adopted in clinical practice. Finally, in Chapter 5, I present a mathematical model that describes CVR in terms of PaCO2-driven changes in blood osmolarity. The responses predicted by this model closely match the hemodynamic changes measured with pCASL, suggesting an additional contribution to CO2-related cerebral vascular reactivity.
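As a toy illustration of the CVR quantification underlying these studies, the sketch below regresses hypothetical ASL perfusion values on end-tidal CO2 over an assumed linear portion of the dose-response curve; all numbers are invented and the code is not the processing pipeline used in the thesis.

```python
import numpy as np

# Hypothetical per-block measurements (invented for illustration):
# end-tidal CO2 in mmHg and ASL-derived grey-matter perfusion in ml/100g/min.
petco2 = np.array([35., 38., 41., 44., 47., 50.])
cbf    = np.array([52., 56., 61., 65., 70., 73.])

# Cerebrovascular reactivity as the slope of the perfusion response,
# fitted only over the (assumed) linear part of the CO2 dose-response curve.
slope, intercept = np.polyfit(petco2, cbf, 1)
cvr_abs = slope                                            # ml/100g/min per mmHg
cbf_baseline = np.polyval((slope, intercept), petco2[0])   # fitted baseline perfusion
cvr_rel = 100.0 * slope / cbf_baseline                     # % change per mmHg

print(f"CVR: {cvr_abs:.2f} ml/100g/min/mmHg  ({cvr_rel:.1f} %/mmHg)")
```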
Abstract:
The nonlinear dynamics of certain important reaction systems are discussed and analysed in this thesis. Interest in theoretical and experimental studies of chemical reactions showing oscillatory dynamics and associated properties is increasing very rapidly. An attempt is made to study some nonlinear phenomena exhibited by the well-known chemical oscillator, the Belousov–Zhabotinskii reaction, whose mathematical properties have much in common with those of biological oscillators. While extremely complex, this reaction is still much simpler than biological systems, at least from the modelling point of view. A suitable model [19] for the system is analysed, and the limit cycle behaviour of the system is studied for different values of the stoichiometric parameter f, keeping the value of the reaction rate constant k6 fixed at k6 = 1. The more complicated three-variable model is stiff in nature.
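For concreteness, a common three-variable formulation of the BZ oscillator is the scaled Oregonator; the sketch below integrates it for several values of the stoichiometric parameter f, using a stiff solver as the abstract's remark on stiffness suggests. The parameter values are textbook-style placeholders and need not coincide with those of the model in reference [19].

```python
import numpy as np
from scipy.integrate import solve_ivp

# Scaled three-variable Oregonator (a standard BZ model); eps, delta and q
# are textbook-style placeholder values, not those of reference [19].
eps, delta, q = 9.90e-3, 1.98e-5, 7.62e-5

def oregonator(t, s, f):
    x, y, z = s
    return [(q * y - x * y + x * (1.0 - x)) / eps,
            (-q * y - x * y + f * z) / delta,
            x - z]

def run(f, t_end=50.0):
    # LSODA switches to a stiff method automatically, which this system needs.
    return solve_ivp(oregonator, (0.0, t_end), [0.1, 0.1, 0.1],
                     args=(f,), method="LSODA", rtol=1e-8, atol=1e-10)

# Scan the stoichiometric parameter f: limit-cycle (oscillatory) behaviour
# appears only in an intermediate range of f.
for f in (0.25, 0.75, 1.5, 3.0):
    z = run(f).y[2]
    print(f"f = {f:4.2f}: z range {z.min():.3e} .. {z.max():.3e}")
```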
Abstract:
This paper uses a two-sided market model of hospital competition to study the implications of different remuneration schemes on the physicians' side. The two-sided market approach is characterized by the concept of common network externality (CNE) introduced by Bardey et al. (2010). This type of externality occurs when both sides value, possibly with different intensities, the same network externality. We explicitly introduce the effort exerted by doctors. By increasing the number of medical acts (which involves a costly effort), the doctor can increase the quality of service offered to patients (over and above the level implied by the CNE). We first consider pure salary, capitation or fee-for-service schemes. Then, we study schemes that mix fee-for-service with either salary or capitation payments. We show that salary schemes (either pure or in combination with fee-for-service) are more patient friendly than (pure or mixed) capitation schemes. This comparison is exactly reversed on the providers' side. Quite surprisingly, patients always lose when a fee-for-service scheme is introduced (pure or mixed). This is true even though fee-for-service is the only way to induce providers to exert effort, and it holds whatever the patients' valuation of this effort. In other words, the increase in quality brought about by fee-for-service is more than compensated by the increase in fees faced by patients.
Abstract:
Severe wind storms are one of the major natural hazards in the extratropics and inflict substantial economic damages and even casualties. Insured storm-related losses depend on (i) the frequency, nature and dynamics of storms, (ii) the vulnerability of the values at risk, (iii) the geographical distribution of these values, and (iv) the particular conditions of the risk transfer. It is thus of great importance to assess the impact of climate change on future storm losses. To this end, the current study employs, to our knowledge for the first time, a coupled approach, using output from high-resolution regional climate model scenarios for the European sector to drive an operational insurance loss model. An ensemble of coupled climate-damage scenarios is used to provide an estimate of the inherent uncertainties. Output of two state-of-the-art global climate models (HadAM3, ECHAM5) is used for present (1961–1990) and future climates (2071–2100, SRES A2 scenario). These serve as boundary data for two nested regional climate models with sophisticated gust parametrizations (CLM, CHRM). For validation and calibration purposes, an additional simulation is undertaken with the CHRM driven by the ERA40 reanalysis. The operational insurance model (Swiss Re) uses a European-wide damage function, an average vulnerability curve for all risk types, and contains the actual value distribution of a complete European market portfolio. The coupling between climate and damage models is based on daily maxima of 10 m gust winds, and the strategy adopted consists of three main steps: (i) development and application of a pragmatic selection criterion to retrieve significant storm events, (ii) generation of a probabilistic event set using a Monte-Carlo approach in the hazard module of the insurance model, and (iii) calibration of the simulated annual expected losses with a historical loss database. The climate models considered agree on an increase in the intensity of extreme storms in a band across central Europe (stretching from southern UK and northern France to Denmark, northern Germany and into eastern Europe). This effect increases with event strength, and rare storms show the largest climate change sensitivity, but are also beset with the largest uncertainties. Wind gusts decrease over northern Scandinavia and southern Europe. The highest intra-ensemble variability is simulated for Ireland, the UK, the Mediterranean, and parts of eastern Europe. The resulting changes in European-wide losses over the 110-year period are positive for all layers and all model runs considered and amount to 44% (annual expected loss), 23% (10-year loss), 50% (30-year loss), and 104% (100-year loss). There is a disproportionate increase in losses for rare high-impact events. The changes result from increases in both the severity and the frequency of wind gusts. Considerable geographical variability of the expected losses exists, with Denmark and Germany experiencing the largest loss increases (116% and 114%, respectively). All countries considered except for Ireland (−22%) experience some loss increases. Some ramifications of these results for the socio-economic sector are discussed, and future avenues for research are highlighted. The technique introduced in this study and its application to realistic market portfolios offer exciting prospects for future research on the impact of climate change that is relevant for policy makers, scientists and economists.
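To make the loss "layers" concrete, the sketch below computes an annual expected loss and 10-, 30- and 100-year return-period losses from a synthetic set of annual loss maxima; the distribution and all numbers are purely illustrative and unrelated to the Swiss Re model output.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical probabilistic event set: yearly maxima of storm losses (M EUR),
# drawn from a heavy-tailed distribution purely for illustration.
n_years = 10_000
annual_loss = rng.pareto(a=2.5, size=n_years) * 50.0

def return_period_loss(losses, T):
    """Loss exceeded on average once every T years (empirical quantile)."""
    return np.quantile(losses, 1.0 - 1.0 / T)

ael = annual_loss.mean()
print(f"annual expected loss : {ael:8.1f} M EUR")
for T in (10, 30, 100):
    print(f"{T:3d}-year loss        : {return_period_loss(annual_loss, T):8.1f} M EUR")

# A climate-change scenario would repeat this with an event set driven by the
# future-climate gust fields and report the percentage change per layer.
```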
Abstract:
1. We compared the baseline phosphorus (P) concentrations inferred by diatom-P transfer functions and export coefficient models at 62 lakes in Great Britain to assess whether the techniques produce similar estimates of historical nutrient status. 2. There was a strong linear relationship between the two sets of values over the whole total P (TP) gradient (2-200 µg TP L-1). However, a systematic bias was observed with the diatom model producing the higher values in 46 lakes (of which values differed by more than 10 µg TP L-1 in 21). The export coefficient model gave the higher values in 10 lakes (of which the values differed by more than 10 µg TP L-1 in only 4). 3. The difference between baseline and present-day TP concentrations was calculated to compare the extent of eutrophication inferred by the two sets of model output. There was generally poor agreement between the amounts of change estimated by the two approaches. The discrepancy in both the baseline values and the degree of change inferred by the models was greatest in the shallow and more productive sites. 4. Both approaches were applied to two lakes in the English Lake District where long-term P data exist, to assess how well the models track measured P concentrations since approximately 1850. There was good agreement between the pre-enrichment TP concentrations generated by the models. The diatom model paralleled the steeper rise in maximum soluble reactive P (SRP) more closely than the gradual increase in annual mean TP in both lakes. The export coefficient model produced a closer fit to observed annual mean TP concentrations for both sites, tracking the changes in total external nutrient loading. 5. A combined approach is recommended, with the diatom model employed to reflect the nature and timing of the in-lake response to changes in nutrient loading, and the export coefficient model used to establish the origins and extent of changes in the external load and to assess potential reduction in loading under different management scenarios. 6. However, caution must be exercised when applying these models to shallow lakes where the export coefficient model TP estimate will not include internal P loading from lake sediments and where the diatom TP inferences may over-estimate TP concentrations because of the high abundance of benthic taxa, many of which are poor indicators of trophic state.
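A schematic illustration of the comparison described above: hypothetical baseline TP values from the two approaches are regressed against each other, and the inferred eutrophication (present-day minus baseline) is computed for each; the lake values are invented for illustration only.

```python
import numpy as np

# Hypothetical baseline total-P estimates (ug TP L^-1) for a handful of lakes,
# invented for illustration only.
diatom_tp  = np.array([ 8., 15., 35., 60., 120.])   # diatom-P transfer function
export_tp  = np.array([ 6., 12., 22., 45.,  95.])   # export coefficient model
present_tp = np.array([12., 30., 55., 90., 160.])   # measured present-day TP

slope, intercept = np.polyfit(export_tp, diatom_tp, 1)
bias = np.mean(diatom_tp - export_tp)
print(f"linear fit: diatom = {slope:.2f} * export + {intercept:.2f}, mean bias {bias:+.1f} ug/L")

# Eutrophication inferred by each approach: present-day minus baseline.
print("change (diatom):", present_tp - diatom_tp)
print("change (export):", present_tp - export_tp)
```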
Abstract:
A life cycle of the Madden–Julian oscillation (MJO) was constructed, based on 21 years of outgoing long-wave radiation data. Regression maps of NCEP–NCAR reanalysis data for the northern winter show statistically significant upper-tropospheric equatorial wave patterns linked to the tropical convection anomalies, and extratropical wave patterns over the North Pacific, North America, the Atlantic, the Southern Ocean and South America. To assess the cause of the circulation anomalies, a global primitive-equation model was initialized with the observed three-dimensional (3D) winter climatological mean flow and forced with a time-dependent heat source derived from the observed MJO anomalies. A model MJO cycle was constructed from the global response to the heating, and both the tropical and extratropical circulation anomalies generally matched the observations well. The equatorial wave patterns are established in a few days, while it takes approximately two weeks for the extratropical patterns to appear. The model response is robust and insensitive to realistic changes in damping and basic state. The model tropical anomalies are consistent with a forced equatorial Rossby–Kelvin wave response to the tropical MJO heating, although it is shifted westward by approximately 20° longitude relative to observations. This may be due to a lack of damping processes (cumulus friction) in the regions of convective heating. Once this shift is accounted for, the extratropical response is consistent with theories of Rossby wave forcing and dispersion on the climatological flow, and the pattern correlation between the observed and modelled extratropical flow is up to 0.85. The observed tropical and extratropical wave patterns account for a significant fraction of the intraseasonal circulation variance, and this reproducibility as a response to tropical MJO convection has implications for global medium-range weather prediction.
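The regression-map construction can be sketched generically as a lagged regression of a gridded anomaly field onto a standardized index; the code below uses random stand-in data rather than OLR-based MJO indices or reanalysis fields, so it illustrates only the mechanics.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins for a standardized MJO index and an anomaly field
# (time, lat, lon); real inputs would be OLR-based indices and reanalysis data.
ntime, nlat, nlon = 2000, 20, 40
mjo_index = rng.standard_normal(ntime)
field = rng.standard_normal((ntime, nlat, nlon))

def lag_regression(index, field, lag):
    """Regression coefficient of the field onto the index at a given lag
       (the field lags the index by `lag` time steps when lag > 0)."""
    if lag > 0:
        idx, fld = index[:-lag], field[lag:]
    elif lag < 0:
        idx, fld = index[-lag:], field[:lag]
    else:
        idx, fld = index, field
    idx = (idx - idx.mean()) / idx.std()
    return np.tensordot(idx, fld - fld.mean(axis=0), axes=(0, 0)) / len(idx)

# One regression map per lag approximates successive phases of the life cycle.
maps = {lag: lag_regression(mjo_index, field, lag) for lag in (-10, 0, 10)}
print({lag: m.shape for lag, m in maps.items()})
```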
Abstract:
Estimating the magnitude of Agulhas leakage, the volume flux of water from the Indian to the Atlantic Ocean, is difficult because of the presence of other circulation systems in the Agulhas region. Indian Ocean water in the Atlantic Ocean is vigorously mixed and diluted in the Cape Basin. Eulerian integration methods, where the velocity field perpendicular to a section is integrated to yield a flux, have to be calibrated so that only the flux by Agulhas leakage is sampled. Two Eulerian methods for estimating the magnitude of Agulhas leakage are tested within a high-resolution two-way nested model, with the goal of devising a mooring-based measurement strategy. At the GoodHope line, a section halfway through the Cape Basin, the integrated velocity perpendicular to that line is compared to the magnitude of Agulhas leakage as determined from the transport carried by numerical Lagrangian floats. In the first method, integration is limited to the flux of water warmer and more saline than specific threshold values. These threshold values are determined by maximizing the correlation with the float-determined time series. By using the threshold values, approximately half of the leakage can be measured directly. The total amount of Agulhas leakage can be estimated using a linear regression, within a 90% confidence band of 12 Sv. In the second method, a subregion of the GoodHope line is sought so that integration over that subregion yields an Eulerian flux as close to the float-determined leakage as possible. It appears that when integration is limited within the model to the upper 300 m of the water column within 900 km of the African coast, the time series have the smallest root-mean-square difference. This method yields a root-mean-square error of only 5.2 Sv, but the 90% confidence band of the estimate is 20 Sv. It is concluded that the optimum thermohaline threshold method leads to more accurate estimates even though the directly measured transport is a factor of two lower than the actual magnitude of Agulhas leakage in this model.
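A schematic version of the first (thermohaline threshold) method: mask the section velocities by temperature and salinity thresholds, integrate to an Eulerian flux, choose the thresholds that maximize correlation with a float-derived leakage series, and calibrate by linear regression. All fields below are synthetic stand-ins, not output of the nested model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic monthly sections across a GoodHope-like line (invented data):
# velocity normal to the section (m/s), temperature (deg C), salinity,
# and per-cell areas (m^2) on a (time, depth, along-section) grid.
nt, nz, nx = 120, 30, 50
v    = 0.05 * rng.standard_normal((nt, nz, nx)) + 0.02
temp = 10.0 + 5.0 * rng.standard_normal((nt, nz, nx))
salt = 34.8 + 0.3 * rng.standard_normal((nt, nz, nx))
area = np.full((nz, nx), 1.0e7)

# Stand-in for the float-determined leakage time series, in Sv.
leakage_floats = 15.0 + 3.0 * rng.standard_normal(nt)

def thresholded_flux(t_min, s_min):
    """Eulerian flux (Sv) of water warmer than t_min and saltier than s_min."""
    mask = (temp > t_min) & (salt > s_min)
    return np.sum(v * area * mask, axis=(1, 2)) / 1e6

# Pick thresholds that maximize correlation with the float-derived series,
# then calibrate with a linear regression, as in the first method.
best = max(((tm, sm) for tm in np.arange(8.0, 16.0, 1.0)
                     for sm in np.arange(34.5, 35.3, 0.1)),
           key=lambda p: np.corrcoef(thresholded_flux(*p), leakage_floats)[0, 1])
flux = thresholded_flux(*best)
a, b = np.polyfit(flux, leakage_floats, 1)
print(f"thresholds T>{best[0]:.1f} C, S>{best[1]:.1f}; leakage ~ {a:.2f}*flux + {b:.1f} Sv")
```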
Abstract:
Remote sensing from space-borne platforms is often seen as an appealing method of monitoring components of the hydrological cycle, including river discharge, because of its spatial coverage. However, data from these platforms are often less than ideal, because the geophysical properties of interest are rarely measured directly and the measurements that are taken can be subject to significant errors. This study assimilated water levels derived from a TerraSAR-X synthetic aperture radar image and digital aerial photography with simulations from a two-dimensional hydraulic model to estimate discharge, inundation extent, depths and velocities at the confluence of the rivers Severn and Avon, UK. An ensemble Kalman filter was used to assimilate spot water levels derived by intersecting shorelines from the imagery with a digital elevation model. Discharge was estimated from the ensemble of simulations using state augmentation and then compared with gauge data. Assimilating the real data reduced the error between analyzed mean water levels and levels from three gauging stations to less than 0.3 m, which is less than typically found in post-event water mark data from the field at these scales. Measurement bias was evident, but the method still provided a means of improving estimates of discharge for high flows where gauge data are unavailable or of poor quality. Posterior estimates of discharge had standard deviations between 52.7 m3 s-1 and 63.3 m3 s-1, which were below 15% of the gauged flows along the reach. Therefore, assuming a roughness uncertainty of 0.03-0.05 and no model structural errors, discharge could be estimated by the EnKF with accuracy similar to that arguably expected from gauging stations during flood events. Quality control prior to assimilation, where measurements were rejected for being in areas of high topographic slope or close to tall vegetation and trees, was found to be essential. The study demonstrates the potential, but also the significant limitations, of currently available imagery to reduce discharge uncertainty in ungauged or poorly gauged basins when combined with model simulations in a data assimilation framework.
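A minimal sketch of a stochastic ensemble Kalman filter update with state augmentation, in the spirit of the approach described above: each ensemble member carries water levels plus an augmented inflow discharge, and spot water-level observations update both. The grid, ensemble and all numbers are invented for illustration and do not represent the hydraulic model or data used in the study.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic ensemble: water levels at n_nodes model nodes plus an augmented
# discharge Q per member; levels are loosely tied to Q so that level
# observations carry information about discharge.
n_ens, n_nodes = 50, 20
discharge = 300.0 + 40.0 * rng.standard_normal(n_ens)
levels = (10.0 + 0.004 * (discharge[:, None] - 300.0)
          + 0.1 * rng.standard_normal((n_ens, n_nodes)))
X = np.column_stack([levels, discharge])            # state: (n_ens, n_nodes + 1)

obs_nodes = np.array([3, 9, 15])                    # nodes with spot water levels
y_obs = np.array([10.4, 10.1, 9.8])                 # observed levels (m)
r_obs = 0.25**2                                     # observation error variance

def enkf_update(X, y, H_idx, r):
    """Stochastic (perturbed-observation) EnKF analysis step."""
    HX = X[:, H_idx]                                # predicted observations
    y_pert = y + np.sqrt(r) * rng.standard_normal((X.shape[0], len(y)))
    Xa = X - X.mean(axis=0)                         # state anomalies
    HA = HX - HX.mean(axis=0)                       # observation-space anomalies
    P_xy = Xa.T @ HA / (X.shape[0] - 1)
    P_yy = HA.T @ HA / (X.shape[0] - 1) + r * np.eye(len(y))
    K = P_xy @ np.linalg.inv(P_yy)                  # Kalman gain
    return X + (y_pert - HX) @ K.T

X_a = enkf_update(X, y_obs, obs_nodes, r_obs)
print("posterior discharge: mean %.1f, std %.1f m3 s-1"
      % (X_a[:, -1].mean(), X_a[:, -1].std()))
```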
Abstract:
This study investigated, for the D2 dopamine receptor, the relation between the ability of agonists and inverse agonists to stabilise different states of the receptor and their relative efficacies. Ki values for agonists were determined in competition versus the binding of the antagonist [3H]spiperone. Competition data were fitted best by a two-binding-site model (with the exception of bromocriptine, for which a one-binding-site model provided the best fit), and agonist affinities for the higher-affinity (Kh) (G protein-coupled) and lower-affinity (Kl) (G protein-uncoupled) sites were determined. Ki values for agonists were also determined in competition versus the binding of the agonist [3H]N-propylnorapomorphine (NPA) to provide a second estimate of Kh. Maximal agonist effects (Emax) and their potencies (EC50) were determined from concentration-response curves for agonist stimulation of guanosine-5'-O-(3-[35S]thio)triphosphate ([35S]GTPγS) binding. The ability of agonists to stabilise the G protein-coupled state of the receptor (Kl/Kh, determined from ligand-binding assays) did not correlate with either of two measures of relative efficacy (relative Emax, Kl/EC50) of agonists determined in [35S]GTPγS-binding assays when the data for all of the compounds tested were analysed. For a subset of compounds, however, there was a relation between Kl/Kh and Emax. Competition-binding data versus [3H]spiperone and [3H]NPA for a range of inverse agonists were fitted best by a one-binding-site model. Ki values for the inverse agonists tested were slightly lower in competition versus [3H]NPA compared to [3H]spiperone. These data do not provide support for the idea that inverse agonists act by binding preferentially to the ground state of the receptor.
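The two-binding-site analysis can be illustrated with a simple curve fit: the sketch below generates a synthetic competition curve with high- and low-affinity components and recovers the fraction of high-affinity sites and the two affinity constants. All affinities and data points are invented for illustration, not taken from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_site(log_conc, f_high, log_ki_h, log_ki_l):
    """Fraction of radioligand binding remaining versus competitor that binds
       a high-affinity (Kh) and a low-affinity (Kl) site; concentrations
       are given as log10(molar)."""
    c = 10.0 ** log_conc
    return (f_high / (1.0 + c / 10.0 ** log_ki_h)
            + (1.0 - f_high) / (1.0 + c / 10.0 ** log_ki_l))

# Synthetic competition data (invented): Kh = 1 nM, Kl ~ 300 nM, and 40% of
# receptors in the G-protein-coupled (high-affinity) state.
rng = np.random.default_rng(5)
log_c = np.linspace(-11, -4, 15)
y = two_site(log_c, 0.4, -9.0, -6.5) + 0.02 * rng.standard_normal(log_c.size)

popt, _ = curve_fit(two_site, log_c, y, p0=(0.5, -8.0, -6.0),
                    bounds=([0.0, -12, -8], [1.0, -7, -4]))
f_high, log_kh, log_kl = popt
print(f"fraction high-affinity = {f_high:.2f}, "
      f"Kh = {10**log_kh*1e9:.1f} nM, Kl = {10**log_kl*1e9:.0f} nM")
# The ratio Kl/Kh quantifies how strongly the agonist stabilizes the
# G-protein-coupled state, the quantity discussed in the abstract.
print(f"Kl/Kh = {10**(log_kl - log_kh):.0f}")
```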