965 results for "deduced optical model parameters"
Abstract:
Risks of significant infant drug exposure through human milk are poorly defined due to a lack of large-scale PK data. We propose to use a Bayesian approach based on population PK (popPK)-guided modeling and simulation for risk prediction. As a proof-of-principle study, we exploited fluoxetine milk concentration data from 25 women. popPK parameters, including the milk-to-plasma ratio (MP ratio), were estimated from the best model. The dose of fluoxetine the breastfed infant would receive through mother's milk, and infant plasma concentrations, were estimated from 1000 simulated mother-infant pairs, using random assignment of feeding times and milk volume. A conservative estimate of CYP2D6 activity of 20% of the allometrically adjusted adult value was assumed. Derived model parameters, including the MP ratio, were consistent with those reported in the literature. Visual predictive checks and other model diagnostics showed no signs of model misspecification. The model simulation predicted that infant exposure levels to fluoxetine via mother's milk were below 10% of the weight-adjusted maternal therapeutic dose in >99% of simulated infants. The predicted median ratio of infant-mother serum levels at steady state was 0.093 (range 0.033-0.31), consistent with literature-reported values (mean = 0.07; range 0-0.59). The predicted incidence of a relatively high infant-mother ratio (>0.2) of steady-state serum fluoxetine concentrations was <1.3%. Overall, our predictions are consistent with clinical observations. Our approach may be valid for other drugs, allowing in silico prediction of infant drug exposure risks through human milk. We will discuss the application of this approach to another drug used in lactating women.
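The simulation's core quantity, the relative infant dose, can be sketched in a few lines; the concentrations, milk-intake range, and maternal dose below are illustrative placeholders, not the study's estimates.

```python
import random

def relative_infant_dose(milk_conc_mg_per_l, milk_intake_l_per_kg_day,
                         maternal_dose_mg_per_kg_day):
    """Infant dose via milk (mg/kg/day) as a fraction of the
    weight-adjusted maternal dose (mg/kg/day)."""
    infant_dose = milk_conc_mg_per_l * milk_intake_l_per_kg_day
    return infant_dose / maternal_dose_mg_per_kg_day

def simulate_pairs(n, mp_ratio=0.65, seed=1):
    """Toy Monte Carlo over mother-infant pairs: maternal plasma
    concentration and infant milk intake are drawn at random, and
    milk concentration follows from the milk-to-plasma (MP) ratio.
    All ranges are hypothetical."""
    rng = random.Random(seed)
    rids = []
    for _ in range(n):
        plasma = rng.uniform(0.05, 0.25)      # mg/L, hypothetical
        milk_conc = mp_ratio * plasma         # MP ratio links milk to plasma
        intake = rng.uniform(0.10, 0.20)      # L/kg/day, hypothetical
        rids.append(relative_infant_dose(milk_conc, intake, 0.29))
    return rids

rids = simulate_pairs(1000)
below_10pct = sum(r < 0.10 for r in rids) / len(rids)
```

With these placeholder ranges, nearly all simulated pairs fall below the conventional 10% relative-dose threshold, mirroring the shape (though not the numbers) of the study's result.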
Abstract:
We have previously demonstrated disease-dependent gene delivery in the brain using an AAV vector responding to NFκB activation as a probe for inflammatory responses. This vector, injected focally into the parenchyma prior to a systemic kainic acid (KA) injection, mediated inducible transgene expression in the hippocampus but not in the cerebellum, regions respectively known to be affected or spared by the pathology. However, such a focal approach relies on prior knowledge of the model parameters and does not allow prediction of the whole-brain response to the disease. Global brain gene delivery would make it possible both to map the regional distribution of the pathology and to deliver therapeutic factors to all affected brain regions. We show that self-complementary AAV2/9 (scAAV2/9) delivery into the adult rat cisterna magna allows widespread but not homogeneous transduction of the brain. Indeed, superficial regions, i.e., the cortex, hippocampus, and cerebellum, were more efficiently transduced than deeper regions such as the striatum and substantia nigra. These data suggest that viral particle penetration from the cerebrospinal fluid (CSF) into the brain is a limiting factor. Interestingly, AAV2/9-2YF, a rationally designed capsid mutant (affecting surface tyrosines), increased gene transfer efficiency approximately fivefold. Neurons, astrocytes, and oligodendrocytes, but not microglia, were transduced in varying proportions depending on the brain region and the type of capsid. Finally, after a single intracisternal injection of scAAV2/9-2YF using the NFκB-inducible promoter, KA treatment induced transgene expression in the hippocampus and cortex but not in the cerebellum, corresponding to the expression of the CD11b marker of microglial activation. These data support the use of disease-inducible vectors administered into the cisterna magna as a tool to characterize brain pathology in systemic drug-induced or transgenic disease models. 
However, further improvements are required to enhance viral particle penetration into the brain.
Abstract:
For a wide range of environmental, hydrological, and engineering applications there is a fast-growing need for high-resolution imaging. In this context, waveform tomographic imaging of crosshole georadar data is a powerful method able to provide images of pertinent electrical properties in near-surface environments with unprecedented spatial resolution. In contrast, conventional ray-based tomographic methods, which consider only a very limited part of the recorded signal (first-arrival traveltimes and maximum first-cycle amplitudes), suffer from inherent limitations in resolution and may prove inadequate in complex environments. For a typical crosshole georadar survey, the potential improvement in resolution when using waveform-based rather than ray-based approaches is about one order of magnitude. Moreover, the spatial resolution of waveform-based inversions is comparable to that of common logging methods. While waveform tomographic imaging has become well established in exploration seismology over the past two decades, it remains comparatively underdeveloped in the georadar domain despite corresponding needs. Recently, different groups have presented finite-difference time-domain waveform inversion schemes for crosshole georadar data, which are adaptations and extensions of Tarantola's seminal nonlinear generalized least-squares approach developed for the seismic case. First applications of these new crosshole georadar waveform inversion schemes to synthetic and field data have shown promising results. However, little is known about the limits and performance of such schemes in complex environments. 
To this end, the general motivation of my thesis is to evaluate the robustness and limitations of waveform inversion algorithms for crosshole georadar data in order to apply such schemes to a wide range of real-world problems. One crucial issue in making any waveform scheme applicable and effective for real-world crosshole georadar problems is the accurate estimation of the source wavelet, which in reality is unknown. Waveform inversion schemes for crosshole georadar data require forward simulations of the wavefield in order to iteratively solve the inverse problem; accurate knowledge of the source wavelet is therefore critically important for their successful application. Relatively small differences in the estimated source wavelet shape can lead to large differences in the resulting tomograms. In the first part of my thesis, I explore the viability and robustness of a relatively simple iterative deconvolution technique that incorporates the estimation of the source wavelet into the waveform inversion procedure rather than adding additional model parameters to the inversion problem. Extensive tests indicate that this source wavelet estimation technique is simple yet effective, and is able to provide remarkably accurate and robust estimates of the source wavelet in the presence of strong heterogeneity in both the dielectric permittivity and electrical conductivity, as well as significant ambient noise in the recorded data. Furthermore, our tests also indicate that the approach is insensitive to the phase characteristics of the starting wavelet, which is not the case when the wavelet estimation is incorporated directly into the inverse problem. Another critical issue with crosshole georadar waveform inversion schemes that clearly needs to be investigated is the consequence of the common assumption of frequency-independent electromagnetic constitutive parameters. 
This is crucial since, in reality, these parameters are known to be frequency-dependent and complex, and thus recorded georadar data may show significant dispersive behaviour. In particular, in the presence of water, there is a wide body of evidence showing that the dielectric permittivity can be significantly frequency-dependent over the GPR frequency range due to a variety of relaxation processes. The second part of my thesis is therefore dedicated to evaluating the reconstruction limits of a non-dispersive crosshole georadar waveform inversion scheme in the presence of varying degrees of dielectric dispersion. I show that the inversion algorithm, combined with the iterative deconvolution-based source wavelet estimation procedure, which is partially able to account for the frequency-dependent effects through an "effective" wavelet, performs remarkably well in weakly to moderately dispersive environments and is able to provide adequate tomographic reconstructions.
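The core of deconvolution-based wavelet estimation can be illustrated with a minimal frequency-domain least-squares step; this is a generic sketch of the idea, not the thesis's actual iterative algorithm, and the spectra below are arbitrary toy values.

```python
def estimate_wavelet(greens, data, eps=1e-9):
    """Least-squares estimate of the source wavelet spectrum W(f):
    given simulated Green's function spectra G(f) and observed data
    spectra D(f) for several traces, at each frequency
        W = sum_traces conj(G) * D / (sum_traces |G|^2 + eps),
    where eps is a small stabilization (water-level) term."""
    n_freq = len(greens[0])
    wavelet = []
    for k in range(n_freq):
        num = sum(tr[k].conjugate() * d[k] for tr, d in zip(greens, data))
        den = sum(abs(tr[k]) ** 2 for tr in greens) + eps
        wavelet.append(num / den)
    return wavelet

# Two traces, two frequencies: data built from a known wavelet,
# so the estimate should recover it exactly (noise-free, eps = 0)
w_true = [1.0 + 0.0j, 0.5 - 0.2j]
greens = [[2 + 1j, 0.3 + 0j], [1 - 1j, 0.7 + 0.1j]]
data = [[g * w for g, w in zip(tr, w_true)] for tr in greens]
w_est = estimate_wavelet(greens, data, eps=0.0)
```

In an iterative scheme of this kind, the wavelet estimate and the waveform inversion of the medium alternate until both converge.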
Abstract:
Context There are no evidence syntheses available to guide clinicians on when to titrate antihypertensive medication after initiation. Objective To model the blood pressure (BP) response after initiating antihypertensive medication. Data sources Electronic databases including Medline, Embase, the Cochrane Register, and reference lists up to December 2009. Study selection Trials that initiated antihypertensive medication as single therapy in hypertensive patients who were either drug naive or had a placebo washout from previous drugs. Data extraction Office BP measurements at a minimum of 2-weekly intervals for a minimum of 4 weeks. An asymptotic model of BP response was assumed, and non-linear mixed-effects modelling was used to calculate model parameters. Results Eighteen trials that recruited 4168 patients met the inclusion criteria. The time to reach 50% of the maximum estimated BP-lowering effect was 1 week (systolic 0.91 weeks, 95% CI 0.74 to 1.10; diastolic 0.95, 0.75 to 1.15). Models incorporating drug class as a source of variability did not improve the fit of the data. Incorporating the presence of a titration schedule improved model fit for both systolic and diastolic pressure. Titration increased both the predicted maximum effect and the time taken to reach 50% of the maximum (systolic 1.2 vs 0.7 weeks; diastolic 1.4 vs 0.7 weeks). Conclusions Estimates of the maximum efficacy of antihypertensive agents can be made early after starting therapy. This knowledge will guide clinicians in deciding when a newly started antihypertensive agent is likely or unlikely to be effective at controlling BP.
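An asymptotic response model of this form, E(t) = Emax · t / (ET50 + t), reaches half of its maximum effect at t = ET50 (about 1 week in the pooled estimates). A minimal sketch, with an illustrative Emax rather than any fitted value:

```python
def bp_effect(t_weeks, e_max, et50_weeks):
    """Asymptotic BP-lowering effect: approaches e_max as t grows,
    and equals e_max / 2 exactly at t = et50_weeks."""
    return e_max * t_weeks / (et50_weeks + t_weeks)

# Hypothetical drug: maximum systolic drop of 12 mmHg, ET50 = 0.91 weeks
half = bp_effect(0.91, 12.0, 0.91)   # effect at the half-effect time
late = bp_effect(12.0, 12.0, 0.91)   # effect after 12 weeks, near e_max
```

The practical reading is the paper's conclusion: because the curve is already at half its asymptote after about a week, a drug's eventual efficacy can be judged early.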
Abstract:
This paper describes the application of the Soil and Water Assessment Tool (SWAT) model to the Maquoketa River watershed, located in northeast Iowa. The inputs to the model were obtained from the Environmental Protection Agency’s geographic information/database system called Better Assessment Science Integrating Point and Nonpoint Sources (BASINS). Climatic data from six weather stations located in and around the watershed, and measured streamflow data from a U.S. Geological Survey gage station at the watershed outlet, were used in the sensitivity analysis of SWAT model parameters as well as in its calibration and validation for watershed hydrology and streamflow. A sensitivity analysis was performed using an influence coefficient method to evaluate surface runoff and base flow variations in response to changes in model input hydrologic parameters. The curve number, evaporation compensation factor, and soil available water capacity were found to be the most sensitive of the eight selected parameters when applying SWAT to the Maquoketa River watershed. Model calibration, facilitated by the sensitivity analysis, was performed for the period 1988 through 1993, and validation was performed for 1982 through 1987. The model performance was evaluated by well-established statistical methods and was found to explain at least 86% and 69% of the variability in the measured streamflow data for the calibration and validation periods, respectively. This initial hydrologic modeling analysis will facilitate future applications of SWAT to the Maquoketa River watershed for various watershed analyses, including water quality.
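The influence coefficient method compares the relative change in a model output to the relative change in an input parameter; a minimal sketch with made-up numbers (not values from this study):

```python
def influence_coefficient(p_base, p_perturbed, out_base, out_perturbed):
    """Dimensionless sensitivity: (relative output change) divided by
    (relative parameter change). |S| near or above 1 marks a sensitive
    parameter; |S| near 0 an insensitive one."""
    rel_out = (out_perturbed - out_base) / out_base
    rel_par = (p_perturbed - p_base) / p_base
    return rel_out / rel_par

# Hypothetical example: a 10% increase in the curve number
# raising simulated surface runoff by 25% gives S = 2.5
s_cn = influence_coefficient(70.0, 77.0, 100.0, 125.0)
```

Ranking such coefficients across candidate parameters is what singles out the few (here: curve number, evaporation compensation factor, soil available water capacity) worth adjusting during calibration.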
Abstract:
The efficient use of geothermal systems, the sequestration of CO2 to mitigate climate change, and the prevention of seawater intrusion in coastal aquifers are only some examples that demonstrate the need for novel technologies to monitor subsurface processes from the surface. A main challenge is to assure optimal performance of such technologies at different temporal and spatial scales. Plane-wave electromagnetic (EM) methods are sensitive to subsurface electrical conductivity and consequently to fluid conductivity, fracture connectivity, temperature, and rock mineralogy. These methods have governing equations that are the same over a large range of frequencies, allowing processes to be studied in an analogous manner on scales ranging from a few meters close to the surface down to several hundreds of kilometers depth. Unfortunately, they suffer from a significant resolution loss with depth due to the diffusive nature of the electromagnetic fields. Therefore, estimations of subsurface models that use these methods should incorporate a priori information to better constrain the models, and provide appropriate measures of model uncertainty. During my thesis, I have developed approaches to improve the static and dynamic characterization of the subsurface with plane-wave EM methods. 
In the first part of this thesis, I present a two-dimensional deterministic approach to perform time-lapse inversion of plane-wave EM data. The strategy is based on the incorporation of prior information into the inversion algorithm regarding the expected temporal changes in electrical conductivity. This is done by incorporating a flexible stochastic regularization and constraints regarding the expected ranges of the changes by using Lagrange multipliers. I use non-l2 norms to penalize the model update in order to obtain sharp transitions between regions that experience temporal changes and regions that do not. I also incorporate a time-lapse differencing strategy to remove systematic errors in the time-lapse inversion. This work presents improvements in the characterization of temporal changes with respect to the classical approach of performing separate inversions and computing differences between the models. In the second part of this thesis, I adopt a Bayesian framework and use Markov chain Monte Carlo (MCMC) simulations to quantify model parameter uncertainty in plane-wave EM inversion. For this purpose, I present a two-dimensional pixel-based probabilistic inversion strategy for separate and joint inversions of plane-wave EM and electrical resistivity tomography (ERT) data. I compare the uncertainties of the model parameters when considering different types of prior information on the model structure and different likelihood functions to describe the data errors. The results indicate that model regularization is necessary when dealing with a large number of model parameters because it helps to accelerate the convergence of the chains and leads to more realistic models. These constraints also lead to smaller uncertainty estimates, which imply posterior distributions that do not include the true underlying model in regions where the method has limited sensitivity. 
This situation can be improved by combining plane-wave EM methods with complementary geophysical methods such as ERT. In addition, I show that an appropriate regularization weight and the standard deviation of the data errors can be retrieved by the MCMC inversion. Finally, I evaluate the possibility of characterizing the three-dimensional distribution of an injected water plume by performing three-dimensional time-lapse MCMC inversion of plane-wave EM data. Since MCMC inversion involves a significant computational burden in high parameter dimensions, I propose a model reduction strategy in which the coefficients of a Legendre moment decomposition of the injected water plume, together with its location, are estimated. For this purpose, a base resistivity model is needed, which is obtained prior to the time-lapse experiment. A synthetic test shows that the methodology works well when the base resistivity model is correctly characterized. The methodology is also applied to an injection experiment performed in a geothermal system in Australia, and compared to a three-dimensional time-lapse inversion performed within a deterministic framework. The MCMC inversion better constrains the water plume due to the larger amount of prior information included in the algorithm. However, the conductivity changes needed to explain the time-lapse data are much larger than what is physically plausible based on present-day understanding. This issue may be related to the base resistivity model used, indicating that more effort should be devoted to obtaining high-quality base models prior to dynamic experiments. The studies described herein give clear evidence that plane-wave EM methods are useful to characterize and monitor the subsurface at a wide range of scales. The presented approaches contribute to an improved appraisal of the obtained models, both in terms of the incorporation of prior information in the algorithms and the posterior uncertainty quantification. 
In addition, the developed strategies can be applied to other geophysical methods, and offer great flexibility to incorporate additional information when available.
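The MCMC machinery behind probabilistic inversions of this kind can be illustrated with a minimal random-walk Metropolis sampler on a one-dimensional toy posterior; this is a generic sketch, not the thesis's inversion code, and the target distribution is arbitrary.

```python
import math
import random

def metropolis(log_post, x0, n_steps, step=0.5, seed=0):
    """Random-walk Metropolis: propose x' = x + N(0, step) and accept
    with probability min(1, post(x') / post(x)); rejected proposals
    repeat the current state in the chain."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        x_prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(x_prop)
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = x_prop, lp_prop
        chain.append(x)
    return chain

# Toy posterior: standard normal, so the chain should center on 0
chain = metropolis(lambda x: -0.5 * x * x, x0=3.0, n_steps=5000)
mean = sum(chain) / len(chain)
```

In a real plane-wave EM inversion the scalar x becomes the full vector of model parameters (or, in the reduced strategy above, the Legendre moment coefficients), and log_post combines the data likelihood with the regularization prior.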
Abstract:
There has been a recent revolution in the ability to manipulate micrometer-sized objects on surfaces patterned by traps or obstacles of controllable configurations and shapes. One application of this technology is to separate particles driven across such a surface by an external force according to some particle characteristic such as size or index of refraction. The surface features cause the trajectories of particles driven across the surface to deviate from the direction of the force by an amount that depends on the particular characteristic, thus leading to sorting. While models of this behavior have provided a good understanding of these observations, the solutions have so far been primarily numerical. In this paper we provide analytic predictions for the dependence of the angle between the direction of motion and the external force on a number of model parameters for periodic as well as random surfaces. We test these predictions against exact numerical simulations.
Abstract:
Microstructure imaging from diffusion magnetic resonance (MR) data represents an invaluable tool to study non-invasively the morphology of tissues and to provide biological insight into their microstructural organization. In recent years, a variety of biophysical models have been proposed to associate particular patterns observed in the measured signal with specific microstructural properties of the neuronal tissue, such as axon diameter and fiber density. Despite very appealing results showing that the estimated microstructure indices agree very well with histological examinations, existing techniques require computationally very expensive non-linear procedures to fit the models to the data, which in practice demand the use of powerful computer clusters for large-scale applications. In this work, we present a general framework for Accelerated Microstructure Imaging via Convex Optimization (AMICO) and show how to re-formulate this class of techniques as convenient linear systems which can then be solved efficiently using very fast algorithms. We demonstrate this linearization of the fitting problem for two specific models, i.e. ActiveAx and NODDI, providing a very attractive alternative for parameter estimation in those techniques; the AMICO framework, however, is general and flexible enough to accommodate the wider space of microstructure imaging methods. Results demonstrate that AMICO drastically accelerates the fit of existing techniques (up to four orders of magnitude faster) while preserving accuracy and precision in the estimated model parameters (correlation above 0.9). We believe that the availability of such ultrafast algorithms will help to accelerate the spread of microstructure imaging to larger cohorts of patients and to study a wider spectrum of neurological disorders.
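The linearization idea is that the measured signal is expressed as a dictionary of precomputed kernel responses times a non-negative coefficient vector, so fitting reduces to a non-negative linear least-squares problem. A pure-Python sketch with a tiny made-up dictionary (this is not the AMICO code itself, which additionally uses sparsity-promoting regularization):

```python
def nnls_projected_gradient(A, b, n_iter=2000, lr=0.1):
    """Solve min_x ||A x - b||^2 subject to x >= 0 by projected
    gradient descent. A is a list of rows (the dictionary of kernel
    responses), b the measured signal."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(n_iter):
        # residual r = A x - b, gradient = A^T r
        resid = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        grad = [sum(A[i][j] * resid[i] for i in range(m)) for j in range(n)]
        # gradient step, then projection onto the non-negative orthant
        x = [max(0.0, x[j] - lr * grad[j]) for j in range(n)]
    return x

# Tiny dictionary with two "kernels"; signal built from known weights
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
x_true = [0.3, 0.7]
b = [sum(a * w for a, w in zip(row, x_true)) for row in A]
x_est = nnls_projected_gradient(A, b)
```

Because each voxel becomes an independent small linear problem, fits of this kind parallelize trivially, which is where the claimed speed-up comes from.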
Abstract:
Numerous sources of evidence point to the fact that heterogeneity within the Earth's deep crystalline crust is complex and hence may be best described through stochastic rather than deterministic approaches. As seismic reflection imaging arguably offers the best means of sampling deep crustal rocks in situ, much interest has been expressed in using such data to characterize the stochastic nature of crustal heterogeneity. Previous work on this problem has shown that the spatial statistics of seismic reflection data are indeed related to those of the underlying heterogeneous seismic velocity distribution. As of yet, however, the nature of this relationship has remained elusive due to the fact that most of the work was either strictly empirical or based on incorrect methodological approaches. Here, we introduce a conceptual model, based on the assumption of weak scattering, that allows us to quantitatively link the second-order statistics of a 2-D seismic velocity distribution with those of the corresponding processed and depth-migrated seismic reflection image. We then perform a sensitivity study in order to investigate what information regarding the stochastic model parameters describing crustal velocity heterogeneity might potentially be recovered from the statistics of a seismic reflection image using this model. Finally, we present a Monte Carlo inversion strategy to estimate these parameters and we show examples of its application at two different source frequencies and using two different sets of prior information. Our results indicate that the inverse problem is inherently non-unique and that many different combinations of the vertical and lateral correlation lengths describing the velocity heterogeneity can yield seismic images with the same 2-D autocorrelation structure. 
The ratio of the vertical and lateral correlation lengths in all of these possible combinations, however, remains roughly constant, which indicates that, without additional prior information, the aspect ratio is the only parameter describing the stochastic seismic velocity structure that can be reliably recovered.
Abstract:
Chronic pain is a complex, disabling experience that negatively affects cognitive, affective, and physical function as well as behavior. Although the interaction between chronic pain and physical functioning is a well-accepted paradigm in clinical research, understanding how pain affects individuals' daily-life behavior remains a challenging task. Here we develop a methodological framework for objectively documenting disruptive pain-related interferences in real-life physical activity. The results reveal that meaningful information is contained in the temporal dynamics of activity patterns, and that an analytical model based on the theory of bivariate point processes can be used to describe physical activity behavior. The model parameters capture the dynamic interdependence between periods and events and determine a 'signature' of the activity pattern. The study is likely to contribute to the clinical understanding of complex pain- and disease-related behaviors and to establish a unified mathematical framework for quantifying the complex dynamics of various human activities.
Abstract:
We present an agent-based model with the aim of studying how macro-level dynamics of spatial distances among interacting individuals in a closed space emerge from micro-level dyadic and local interactions. Our agents moved on a lattice (referred to as a room) using a model implemented in a computer program called P-Space in order to minimize their dissatisfaction, defined as a function of the discrepancy between the real distance and the ideal, or desired, distance between agents. Ideal distances evolved in accordance with the agent's personal and social space, which changed throughout the dynamics of the interactions among the agents. In the first set of simulations we studied the effects of the parameters of the function that generated ideal distances, and in a second set we explored how group macro-level behavior depended on model parameters and other variables. We learned that certain parameter values yielded consistent patterns in the agents' personal and social spaces, which in turn led to avoidance and approaching behaviors in the agents. We also found that the spatial behavior of the group of agents as a whole was influenced by the values of the model parameters, as well as by other variables such as the number of agents. Our work demonstrates that the bottom-up approach is a useful way of explaining macro-level spatial behavior. The proposed model is also shown to be a powerful tool for simulating the spatial behavior of groups of interacting individuals.
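The dissatisfaction-minimization dynamic can be sketched for the simplest possible case of two agents on a line, each nudging toward its ideal separation; the parameters here are illustrative, and the actual P-Space model is far richer (a 2-D lattice, many agents, and ideal distances that themselves evolve).

```python
def relax_pair(x1, x2, ideal, rate=0.5, n_steps=30):
    """Two agents reduce the dissatisfaction (d_real - d_ideal)^2 by
    each moving half of rate * error: they approach when too far
    apart (error > 0) and back away when too close (error < 0)."""
    for _ in range(n_steps):
        d = x2 - x1
        err = d - ideal
        x1 += 0.5 * rate * err
        x2 -= 0.5 * rate * err
    return x1, x2

# Agents start 10 units apart with an ideal separation of 4
a, b = relax_pair(0.0, 10.0, ideal=4.0)
```

Each step shrinks the distance error by the factor (1 - rate), so the pair converges geometrically to the ideal separation while their midpoint stays fixed.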
Abstract:
The objective of this work was to identify factors associated with the 56-day non-return rate (56-NRR) in dairy herds in the Galician region of Spain, and to estimate it for individual Holstein bulls. The study was carried out in herds from north-west Spain from September 2008 to August 2009. Data on the 76,440 first inseminations performed during this period were gathered. Candidate factors were tested for their association with the 56-NRR using a logistic (binomial) model. Afterwards, 37 sires with a minimum of 150 first inseminations were individually evaluated. Logistic models were also estimated for each bull, and predicted individual 56-NRR values were calculated as a solution for the model parameters. Logistic regression identified four major factors associated with 56-NRR in lactating cows: age at insemination, days from calving to insemination, milk production level at the time of insemination, and herd size. The first-service conception rate when a particular sire was used was higher for heifers (0.71) than for lactating cows (0.52). Non-return rates were highly variable among bulls. A significant part of the herd-level variation in 56-NRR of Holstein cattle seems attributable to the service sire. A high correlation between observed and predicted 56-NRR was found.
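A logistic (binomial) model of this kind predicts the probability of non-return from a linear combination of cow-level factors; a minimal sketch with hypothetical coefficients, not the fitted Galician-herd model:

```python
import math

def predict_56nrr(intercept, coefs, covariates):
    """Logistic model: p = 1 / (1 + exp(-(intercept + sum(b * x)))).
    Covariates would be factors such as age at insemination,
    days from calving, milk yield, and herd size."""
    z = intercept + sum(b * x for b, x in zip(coefs, covariates))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical standardized covariates and effects
p = predict_56nrr(0.1, [0.3, -0.2], [1.0, 0.5])
p_null = predict_56nrr(0.0, [], [])   # no effects: p = 0.5
```

A per-bull model simply refits the intercept and coefficients on that sire's inseminations, and the predicted probability at reference covariate values serves as the bull's individual 56-NRR estimate.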
Resumo:
Crystal growth is an essential phase in crystallization kinetics. The rate of crystal growth provides significant information for the design and control of crystallization processes; nevertheless, obtaining accurate growth rate data is still challenging due to a number of factors that prevail in crystal growth. In industrial crystallization, crystals are generally grown from multi-component and multi-particle solutions under complicated hydrodynamic conditions; thus, it is crucial to increase the general understanding of the growth kinetics in these systems. The aim of this work is to develop a model of the crystal growth rate from solution. An extensive literature review of crystal growth focuses on the modelling of growth kinetics and thermodynamics, and on new measuring techniques that have been introduced in the field of crystallization. The growth of a single crystal is investigated in binary and ternary systems. The binary system consists of potassium dihydrogen phosphate (KDP, the crystallizing solute) and water (the solvent), and the ternary system includes KDP, water and an organic admixture. The studied admixtures, urea, ethanol and 1-propanol, are employed at relatively high concentrations (of up to 5.0 molal). The influence of the admixtures on the solution thermodynamics is studied using the Pitzer activity coefficient model. The prediction method for the ternary solubility in the studied systems is introduced and verified. The growth rate of the KDP (101) face in the studied systems is measured in a growth cell as a function of supersaturation, admixture concentration, solution velocity over the crystal, and temperature. In addition, the surface morphology of the KDP (101) face is studied using ex situ atomic force microscopy (AFM). The crystal growth rate in the ternary systems is modelled on the basis of the two-step growth model, which combines the Maxwell-Stefan (MS) equations with a surface-reaction model.
This model is used together with measured crystal growth rate data to develop a new method for the evaluation of the model parameters. The validation of the model is justified with experiments. The crystal growth rate in an imperfectly mixed suspension crystallizer is investigated using computational fluid dynamics (CFD). A solid-liquid suspension flow that includes multi-sized particles is described by the multi-fluid model as well as by a standard k-epsilon turbulence model and an interface momentum transfer model. The local crystal growth rate is determined from calculated flow information in a diffusion-controlled crystal growth regime. The calculated results are evaluated experimentally.
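The two-step picture combines a diffusion (mass-transfer) step with a surface-reaction step: at steady state the flux to the surface equals the surface integration rate, G = k_d (Δc − σ_i) = k_r σ_i^r, where Δc is the overall driving force c − c* and σ_i the interfacial supersaturation. A minimal sketch for a second-order surface reaction (r = 2), with hypothetical coefficient values; the thesis's actual model additionally uses the Maxwell-Stefan equations for multi-component diffusion:

```python
import numpy as np

def two_step_growth(dc, kd, kr):
    """Two-step crystal growth rate for surface-reaction order r = 2.

    Steady state: G = kd*(dc - s_i) = kr*s_i**2, so s_i solves the quadratic
    kr*s_i**2 + kd*s_i - kd*dc = 0; we take the non-negative root.
    """
    s_i = (-kd + np.sqrt(kd**2 + 4 * kr * kd * dc)) / (2 * kr)
    return kr * s_i**2, s_i

# Hypothetical parameter values, for illustration only
kd, kr = 2.0e-4, 5.0e-3           # mass-transfer and surface-reaction coefficients
dc = np.linspace(0.001, 0.05, 5)  # overall concentration driving force c - c*
G, s_i = two_step_growth(dc, kd, kr)
```

Fitting kd and kr to measured growth-rate-versus-supersaturation data is what the "evaluation of the model parameters" amounts to in this simplified form; the interfacial supersaturation s_i is not observed directly and is eliminated through the flux balance.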
Resumo:
This work studied the selective separation of precious metals from chloride solutions using synthetic polymer resins. The laboratory experiments focused on the recovery of gold with a hydrophilic polymethacrylate-based adsorbent. The starting material was a platinum concentrate that contained, in addition to gold, platinum, palladium, silver, copper, iron, bismuth, selenium and tellurium. The adsorption of the various metals and metalloids onto the resin was studied in equilibrium, kinetic and column experiments. A computer program intended for the dynamic modelling of multi-component separations was also used to simulate the adsorption; the required parameters were estimated from the experimental data. Equilibrium experiments with single-metal solutions showed that the resin adsorbs gold effectively at all studied hydrochloric acid concentrations (1-6 M). Gold forms strongly adsorbing tetrachloroaurate(III) ions, [AuCl4]-, which remain very stable down to low chloride concentrations. The hydrochloric acid concentration affected only the adsorption of iron, which increased considerably with increasing acid concentration owing to the tendency of iron to form strongly adsorbing [FeCl4]- ions in concentrated hydrochloric acid. The adsorption of the other studied elements remained low at all hydrochloric acid concentrations. Equilibrium experiments with the concentrate solution showed that the adsorption capacity for gold depends strongly on the other components present. The competitive adsorption was described with a Langmuir-Freundlich isotherm. Column experiments showed that, in addition to gold, the resin also adsorbed small amounts of iron and tellurium, which were, however, completely eluted from the resin with a 5 M hydrochloric acid wash followed by a 1 M hydrochloric acid wash. A mixture of acetone and 1 M hydrochloric acid proved to be an effective solution for desorbing gold. The different stages of the column separation could be satisfactorily described with the simulation model.
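The competitive Langmuir-Freundlich isotherm used for the equilibrium data has the form q_i = q_max,i (K_i c_i)^{n_i} / (1 + Σ_j (K_j c_j)^{n_j}), where each component's loading is suppressed by the affinity terms of all the others. A sketch with hypothetical parameter values (not the fitted ones) for Au, Fe and Te:

```python
import numpy as np

def langmuir_freundlich(c, qmax, K, nexp):
    """Multi-component Langmuir-Freundlich loading q_i for liquid concentrations c_i."""
    terms = (K * c) ** nexp
    return qmax * terms / (1.0 + terms.sum())

# Hypothetical parameters for [Au, Fe, Te], for illustration only
qmax = np.array([0.9, 0.3, 0.2])    # saturation capacities, mol/kg resin
K    = np.array([50.0, 5.0, 2.0])   # affinity constants, L/mol
nexp = np.array([0.8, 1.0, 1.0])    # heterogeneity exponents

c = np.array([0.01, 0.02, 0.005])   # solution concentrations, mol/L
q = langmuir_freundlich(c, qmax, K, nexp)  # equilibrium loadings
```

With the much larger affinity constant assumed for gold, the sketch reproduces the qualitative finding that gold loading dominates even when iron and tellurium co-adsorb weakly; the shared denominator is what makes the gold capacity depend on the other components present.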
Resumo:
This work presents new, efficient Markov chain Monte Carlo (MCMC) simulation methods for statistical analysis in various modelling applications. When using MCMC methods, the model is simulated repeatedly to explore the probability distribution describing the uncertainties in model parameters and predictions. In adaptive MCMC methods based on the Metropolis-Hastings algorithm, the proposal distribution needed by the algorithm learns from the target distribution as the simulation proceeds. Adaptive MCMC methods have been the subject of intensive research recently, as they open the way to essentially easier use of the methodology; the lack of user-friendly computer programs has been a main obstacle to wider acceptance of the methods. This work provides two new adaptive MCMC methods: DRAM and AARJ. The DRAM method has been built especially to work in high-dimensional and non-linear problems. The AARJ method is an extension of DRAM to model selection problems, where the mathematical formulation of the model is uncertain and we want to fit several different models to the same observations simultaneously. The methods were developed with the needs of modelling applications typical of the environmental sciences in mind, and the development work was pursued in the course of several application projects. The applications presented in this work are: a winter-time oxygen concentration model for Lake Tuusulanjärvi and adaptive control of its aerator; a nutrition model for Lake Pyhäjärvi and lake management planning; validation of the algorithms of the GOMOS ozone remote sensing instrument on board the Envisat satellite of the European Space Agency; and a study of the effects of aerosol model selection on the GOMOS algorithm.
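The adaptive-Metropolis idea underlying DRAM, tuning the proposal covariance from the chain's own history, can be sketched as below. This is a plain adaptive Metropolis sampler on a toy Gaussian target, not the DRAM algorithm itself (the delayed-rejection stage is omitted), and the target, schedule and tuning constants are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def log_target(x):
    """Correlated 2-D Gaussian as a stand-in for a model's posterior."""
    cov_inv = np.linalg.inv(np.array([[1.0, 0.9], [0.9, 1.0]]))
    return -0.5 * x @ cov_inv @ x

def adaptive_metropolis(n_iter=5000, adapt_start=500, sd=2.4**2 / 2, eps=1e-6):
    d = 2
    chain = np.empty((n_iter, d))
    x = np.zeros(d)
    lp = log_target(x)
    cov = np.eye(d)  # initial proposal covariance before adaptation kicks in
    for t in range(n_iter):
        # Periodically re-estimate the proposal covariance from the chain history,
        # with a small regularisation eps*I to keep it positive definite
        if t >= adapt_start and t % 100 == 0:
            cov = sd * (np.cov(chain[:t].T) + eps * np.eye(d))
        prop = rng.multivariate_normal(x, cov)
        lp_prop = log_target(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            x, lp = prop, lp_prop
        chain[t] = x
    return chain

chain = adaptive_metropolis()
```

The scaling sd = 2.4²/d follows the standard adaptive Metropolis recommendation; DRAM adds delayed rejection on top of this adaptation so that rejected proposals trigger a second, narrower proposal attempt before the chain stays put.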