144 results for Estimated parameters
Abstract:
The efficient use of geothermal systems, the sequestration of CO2 to mitigate climate change, and the prevention of seawater intrusion in coastal aquifers are only a few examples that demonstrate the need for novel technologies to monitor subsurface processes from the surface. A main challenge is to ensure optimal performance of such technologies at different temporal and spatial scales. Plane-wave electromagnetic (EM) methods are sensitive to subsurface electrical conductivity and consequently to fluid conductivity, fracture connectivity, temperature, and rock mineralogy.
These methods are governed by the same equations over a large range of frequencies, allowing processes to be studied in an analogous manner on scales ranging from a few meters below the surface down to several hundred kilometers in depth. Unfortunately, they suffer from a significant loss of resolution with depth due to the diffusive nature of the electromagnetic fields. Therefore, estimations of subsurface models that use these methods should incorporate a priori information to better constrain the models and provide appropriate measures of model uncertainty. During my thesis, I have developed approaches to improve the static and dynamic characterization of the subsurface with plane-wave EM methods. In the first part of this thesis, I present a two-dimensional deterministic approach to perform time-lapse inversion of plane-wave EM data. The strategy is based on incorporating prior information about the expected temporal changes in electrical conductivity into the inversion algorithm. This is done by combining a flexible stochastic regularization with constraints on the expected ranges of the changes, enforced through Lagrange multipliers. I use non-l2 norms to penalize the model update in order to obtain sharp transitions between regions that experience temporal changes and regions that do not. I also incorporate a time-lapse differencing strategy to remove systematic errors from the time-lapse inversion. This work demonstrates improved characterization of temporal changes with respect to the classical approach of performing separate inversions and computing differences between the models. In the second part of this thesis, I adopt a Bayesian framework and use Markov chain Monte Carlo (MCMC) simulations to quantify model parameter uncertainty in plane-wave EM inversion. For this purpose, I present a two-dimensional pixel-based probabilistic inversion strategy for separate and joint inversions of plane-wave EM and electrical resistivity tomography (ERT) data. I compare the uncertainties of the model parameters when considering different types of prior information on the model structure and different likelihood functions to describe the data errors. The results indicate that model regularization is necessary when dealing with a large number of model parameters because it helps to accelerate the convergence of the chains and leads to more realistic models. However, these constraints also lead to smaller uncertainty estimates, implying posterior distributions that do not include the true underlying model in regions where the method has limited sensitivity. This situation can be improved by combining plane-wave EM methods with complementary geophysical methods such as ERT. In addition, I show that an appropriate regularization weight and the standard deviation of the data errors can be retrieved by the MCMC inversion. Finally, I evaluate the possibility of characterizing the three-dimensional distribution of an injected water plume by performing three-dimensional time-lapse MCMC inversion of plane-wave EM data. Since MCMC inversion involves a significant computational burden in high parameter dimensions, I propose a model reduction strategy in which the coefficients of a Legendre moment decomposition of the injected water plume and its location are estimated. For this purpose, a base resistivity model is needed, which is obtained prior to the time-lapse experiment. A synthetic test shows that the methodology works well when the base resistivity model is correctly characterized.
The methodology is also applied to a saline and acid injection experiment performed in a geothermal system in Australia, and compared to a three-dimensional time-lapse inversion performed within a deterministic framework. The MCMC inversion better constrains the water plume due to the larger amount of prior information included in the algorithm. However, the conductivity changes needed to explain the time-lapse data are much larger than what is physically plausible based on present-day understanding. This issue may be related to the limited quality of the base resistivity model used, indicating that more effort should be devoted to obtaining high-quality base models prior to dynamic experiments. The studies described herein give clear evidence that plane-wave EM methods are useful to characterize and monitor the subsurface at a wide range of scales. The presented approaches contribute to an improved appraisal of the obtained models, both in terms of the incorporation of prior information into the algorithms and in terms of posterior uncertainty quantification. In addition, the developed strategies can be applied to other geophysical methods and offer great flexibility to incorporate additional information when available.
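The model-reduction idea can be illustrated with a short sketch: a 3D conductivity anomaly is parameterized by a low-order Legendre coefficient tensor plus a plume location, so a sampler explores a few dozen parameters instead of every grid cell. This is a minimal Python sketch under assumed grid dimensions and expansion order; the function plume_anomaly and all numbers are hypothetical, not the thesis implementation.

```python
import numpy as np
from numpy.polynomial.legendre import legval3d

def plume_anomaly(coeffs, center, half_width, grid):
    """Evaluate a low-order 3D Legendre expansion of a conductivity anomaly.

    coeffs     : (n, n, n) Legendre coefficient tensor (the reduced parameters)
    center     : (3,) plume center in grid coordinates (also estimated)
    half_width : (3,) half-extent of the plume support
    grid       : tuple of 1-D coordinate arrays (x, y, z)
    """
    x, y, z = np.meshgrid(*grid, indexing="ij")
    # Map physical coordinates onto [-1, 1], the natural Legendre domain.
    u = (x - center[0]) / half_width[0]
    v = (y - center[1]) / half_width[1]
    w = (z - center[2]) / half_width[2]
    inside = (abs(u) <= 1) & (abs(v) <= 1) & (abs(w) <= 1)
    return np.where(inside, legval3d(u, v, w, coeffs), 0.0)

# A 3rd-order expansion reduces ~6e4 grid cells to 4**3 + 3 = 67 parameters.
rng = np.random.default_rng(0)
coeffs = rng.normal(size=(4, 4, 4))
grid = (np.linspace(0, 100, 50), np.linspace(0, 100, 50), np.linspace(0, 50, 25))
delta_sigma = plume_anomaly(coeffs, (50, 50, 25), (20, 20, 10), grid)
print(delta_sigma.shape)  # (50, 50, 25)
```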
Abstract:
BACKGROUND: We assessed the prevalence of risk factors for cardiovascular disease (CVD) in a middle-income country in rapid epidemiological transition and estimated the direct costs of treating all individuals at increased cardiovascular risk, i.e. following the so-called "high risk strategy". METHODS: Survey of risk factors using an age- and sex-stratified random sample of the population of Seychelles aged 25-64 in 2004. Assessment of CVD risk and treatment modalities were in line with international guidelines. Costs are expressed as USD per capita per year. RESULTS: 1255 persons took part in the survey (participation rate of 80.2%). Prevalence of the main risk factors was: 39.6% for high blood pressure (≥140/90 mmHg or treatment), of which 59% were under treatment; 24.2% for high cholesterol (≥6.2 mmol/l); 20.8% for low HDL-cholesterol (<1.0 mmol/l); 9.3% for diabetes (fasting glucose ≥7.0 mmol/l); 17.5% for smoking; 25.1% for obesity (body mass index ≥30 kg/m²); and 22.1% for the metabolic syndrome. Overall, 43% had high blood pressure, high cholesterol, or diabetes and thus substantially increased CVD risk. The cost of medications needed to treat all high-risk individuals amounted to USD 45.6 per capita per year, i.e. USD 11.2 for high blood pressure, USD 3.8 for diabetes, and USD 30.6 for dyslipidemia (using generic drugs except for hypercholesterolemia). The cost of minimal follow-up medical care and laboratory tests amounted to USD 22.6. CONCLUSION: A high prevalence of major risk factors was found in a rapidly developing country, and the cost of treatment needed to reduce risk factors in all high-risk individuals exceeded the resources generally available in low- or middle-income countries. Our findings emphasize the need for affordable cost-effective treatment strategies and the critical importance of population strategies aimed at reducing risk factors in the entire population.
Abstract:
PURPOSE: To evaluate the prognostic factors and the ophthalmologic follow-up of cataract formation following total body irradiation (TBI) prior to bone marrow transplantation (BMT). METHODS AND MATERIALS: Between 1980 and 1992, 494 patients were referred to our department for TBI prior to BMT. The mean age was 32 ± 11 (median: 32, range: 2-63) years and the male-to-female ratio was 1.6 (304:190). The majority of patients were treated for acute leukemia (lymphoblastic, n = 177, 36%; or nonlymphoblastic, n = 139, 28%); 80 (16%) for chronic myeloid leukemia, 60 (12%) for non-Hodgkin's lymphoma, 23 (5%) for multiple myeloma, and 15 (3%) for other malignancies. Two hundred and fifty-four (51%) patients were grafted in first complete remission (CR) and 118 (24%) in second CR. Allogenic BMT was performed in 210 (43%) patients, and autologous BMT in 284 (57%). Methotrexate combined with steroids (n = 47, 22%) or with cyclosporine (n = 163, 78%) was administered for graft-versus-host disease (GvHD) prophylaxis. In 188 patients (38%), heparin was used for the prevention of veno-occlusive disease (VOD) of the liver. Furthermore, steroid administration was registered in 223 (45%). The conditioning chemotherapy consisted of cyclophosphamide (Cy) alone in 332 (67%) patients. Total body irradiation was administered either as a single dose (STBI; 10 Gy in 1 day, n = 291) or in six fractions (FTBI; 12 Gy over 3 consecutive days, n = 203) before BMT. The mean instantaneous dose rate was 0.0574 ± 0.0289 Gy/min (range: 0.024-0.1783). It was <0.048 Gy/min in 157 patients (LOW group), ≥0.048 and <0.09 Gy/min in 301 patients (MEDIUM group), and ≥0.09 Gy/min in 36 patients (HIGH group). RESULTS: Considering all patients, 42 (8.5%) developed cataracts after 13 to 72 months (median: 42 months), with a 5-year estimated cataract incidence (ECI) of 23%. Thirty-three (11.3%) of 291 patients in the STBI group and 9 (4.4%) of 203 patients in the FTBI group developed cataracts, with 5-year estimated incidences of 34% and 11%, respectively (p = 0.0004). Seven (19.4%) of 36 patients in the HIGH group, 33 (10.9%) of 301 in the MEDIUM group, and 2 (1.2%) of 157 in the LOW group developed cataracts, with respective 5-year cataract incidences of 54%, 30%, and 3.5% (HIGH vs. MEDIUM, p = 0.07; MEDIUM vs. LOW, p = 0.0001; HIGH vs. LOW, p < 0.0001). On the other hand, patients who received heparin as prophylactic treatment against VOD of the liver had fewer cataracts than those who did not (5-year ECI of 16% vs. 28%, respectively; p = 0.01). There was no statistically significant difference in 5-year ECI according to age, sex, administration of steroids, GvHD prophylaxis, type of BMT, or previous cranial radiotherapy in children. Multivariate analysis revealed that the instantaneous dose rate (p = 0.001) and the administration of heparin against VOD (p = 0.05) were the two independent factors influencing cataract incidence, while age, fractionation, and use of steroids were not. Among the 42 patients who developed cataracts, 38 had bilateral extracapsular cataract extraction and intraocular lens implantation, and only 4 (10%) developed secondary cataracts over a median follow-up period of 39 months. CONCLUSION: Among the above-mentioned TBI parameters, a high instantaneous dose rate seems to be the main risk factor for cataract formation, and the administration of heparin appears to have a protective role in cataractogenesis.
On the other hand, ionizing radiation seems to have a protective effect on posterior capsule opacification following extracapsular cataract extraction and intraocular lens implantation.
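The 5-year estimated cataract incidence quoted above is a censoring-aware estimate rather than a raw proportion, since many patients were followed for fewer than five years. A minimal Kaplan-Meier-style sketch, using made-up follow-up data rather than the study's, shows how such an incidence is obtained:

```python
import numpy as np

def cumulative_incidence(months, event, horizon=60):
    """Kaplan-Meier estimate of cumulative incidence up to `horizon` months."""
    order = np.argsort(months)
    t, d = np.asarray(months)[order], np.asarray(event)[order]
    at_risk, surv = len(t), 1.0
    for ti in np.unique(t[t <= horizon]):
        events = np.sum((t == ti) & (d == 1))      # cataracts at time ti
        surv *= 1.0 - events / at_risk             # KM survival update
        at_risk -= np.sum(t == ti)                 # drop events and censored
    return 1.0 - surv

# Hypothetical follow-up times (months) and cataract indicators (1 = cataract).
months = [13, 20, 30, 36, 42, 48, 55, 60, 64, 72]
event  = [1,  0,  0,  1,  1,  0,  1,  0,  1,  1]
print(f"5-year estimated cataract incidence: {cumulative_incidence(months, event):.0%}")
```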
Abstract:
Developing a novel technique for the efficient, noninvasive clinical evaluation of bone microarchitecture remains both crucial and challenging. The trabecular bone score (TBS) is a new gray-level texture measurement that is applicable to dual-energy X-ray absorptiometry (DXA) images. Significant correlations between TBS and standard 3-dimensional (3D) parameters of bone microarchitecture have previously been obtained using a numerical simulation approach. The main objective of this study was to empirically evaluate such correlations in anteroposterior spine DXA images. Thirty dried human cadaver vertebrae were evaluated. Micro-computed tomography acquisitions of the bone pieces were obtained at an isotropic resolution of 93 μm. Standard parameters of bone microarchitecture were evaluated in a defined region within the vertebral body, excluding cortical bone. The bone pieces were measured on a Prodigy DXA system (GE Medical-Lunar, Madison, WI), using a custom-made positioning device and experimental setup. Significant correlations were detected between TBS and 3D parameters of bone microarchitecture, mostly independent of any correlation between TBS and bone mineral density (BMD). The greatest correlation was between TBS and connectivity density, with TBS explaining 67.2% of the variance. Based on multivariate linear regression modeling, we established a model to allow for the interpretation of the relationship between TBS and 3D bone microarchitecture parameters. This model indicates that TBS adds value and differentiating power between samples with similar BMDs but different bone microarchitectures. These results show that bone microarchitecture status can be estimated from DXA imaging using TBS.
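The multivariate regression modelling described above can be sketched as follows; the simulated numbers are stand-ins for the study's data and only show how the share of TBS variance explained by microarchitecture parameters such as connectivity density is quantified.

```python
import numpy as np

# Hypothetical data: TBS regressed on connectivity density (Conn.D) and BMD.
rng = np.random.default_rng(1)
n = 30                                    # 30 cadaver vertebrae, as in the study
conn_d = rng.uniform(2.0, 6.0, n)         # connectivity density (1/mm^3), simulated
bmd = rng.uniform(0.6, 1.2, n)            # areal BMD (g/cm^2), simulated
tbs = 0.9 + 0.08 * conn_d + 0.10 * bmd + rng.normal(0.0, 0.03, n)

X = np.column_stack([np.ones(n), conn_d, bmd])      # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, tbs, rcond=None)      # ordinary least squares
r2 = 1 - np.sum((tbs - X @ beta) ** 2) / np.sum((tbs - tbs.mean()) ** 2)
print(f"coefficients = {np.round(beta, 3)}, R^2 = {r2:.2f}")
```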
Abstract:
In groundwater applications, Monte Carlo methods are employed to model the uncertainty in geological parameters. However, their brute-force application becomes computationally prohibitive for highly detailed geological descriptions, complex physical processes, and large numbers of realizations. The Distance Kernel Method (DKM) overcomes this issue by clustering the realizations in a multidimensional space based on the flow responses obtained by means of an approximate (computationally cheaper) model; the uncertainty is then estimated from the exact responses, which are computed only for one representative realization per cluster (the medoid). Usually, DKM is employed to decrease the size of the sample of realizations considered for estimating the uncertainty. We propose to also use the information from the approximate responses for uncertainty quantification. The subset of exact solutions provided by DKM is employed to construct an error model and correct the potential bias of the approximate model. Two error models are devised that both employ the difference between approximate and exact medoid solutions, but differ in the way medoid errors are interpolated to correct the whole set of realizations. The Local Error Model rests upon the clustering defined by DKM and can be seen as a natural way to account for intra-cluster variability; the Global Error Model employs a linear interpolation of all medoid errors regardless of the cluster to which a given realization belongs. These error models are evaluated for an idealized pollution problem in which the uncertainty of the breakthrough curve needs to be estimated. For this numerical test case, we demonstrate that the error models improve the uncertainty quantification provided by the DKM algorithm and are effective in correcting the bias of the estimate computed solely from the approximate (MsFV) results. The framework presented here is not specific to the methods considered and can be applied to other combinations of approximate models and techniques for selecting a subset of realizations.
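A compact sketch of the two error models might look as follows. KMeans is used here as a stand-in for the distance-kernel clustering, the exact solver is supplied by the caller, and the Global Error Model is reduced to a linear fit of medoid errors against a scalar summary of each approximate response; the function name and API are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

def dkm_error_correction(approx, exact_solver, n_clusters=5, mode="local"):
    """Correct approximate responses using exact solutions at cluster medoids.

    approx       : (n_real, n_time) approximate responses (e.g. breakthrough curves)
    exact_solver : callable returning the exact response for one realization index
    mode         : 'local' (per-cluster medoid error) or 'global' (linear fit)
    """
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(approx)
    labels = km.labels_
    # Medoid = member realization closest to its cluster centre.
    medoids = np.array([
        np.flatnonzero(labels == k)[np.argmin(
            np.linalg.norm(approx[labels == k] - km.cluster_centers_[k], axis=1))]
        for k in range(n_clusters)
    ])
    # The exact model is run only at the medoids (the expensive step).
    errors = np.array([exact_solver(m) - approx[m] for m in medoids])
    if mode == "local":
        # Local Error Model: each member inherits its medoid's error.
        return approx + errors[labels]
    # Global Error Model: regress medoid errors on a scalar response summary.
    A = np.column_stack([np.ones(n_clusters), approx[medoids].mean(axis=1)])
    coef, *_ = np.linalg.lstsq(A, errors, rcond=None)
    return approx + np.column_stack([np.ones(len(approx)), approx.mean(axis=1)]) @ coef
```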
Abstract:
The impact of navigator spatial resolution and navigator evaluation time on image quality in free-breathing navigator-gated 3D coronary magnetic resonance angiography (MRA), including real-time motion correction, was investigated in a moving phantom. The objective image quality parameters signal-to-noise ratio (SNR) and vessel sharpness were compared. It was found that a short navigator evaluation time is of crucial importance for improved image quality, whereas navigator spatial resolution showed minimal influence on image quality.
Abstract:
Modeling concentration-response functions has become extremely popular in ecotoxicology during the last decade. Indeed, modeling allows the total response pattern of a given substance to be determined. However, reliable modeling is data-intensive, which is in contradiction with the current trend in ecotoxicology, which aims to reduce, for cost and ethical reasons, the number of data points produced during an experiment. It is therefore crucial to determine experimental designs in a cost-effective manner. In this paper, we propose to use the theory of locally D-optimal designs to determine the set of concentrations to be tested so that the parameters of the concentration-response function can be estimated with high precision. We illustrate this approach by determining the locally D-optimal designs for estimating the toxicity of the herbicide dinoseb to daphnids and algae. The results show that the number of concentrations to be tested is often equal to the number of parameters and often related to their meaning, i.e. the optimal concentrations are located close to the parameter values. Furthermore, the results show that the locally D-optimal design often has the minimal number of support points and is not very sensitive to small changes in the nominal values of the parameters. In order to reduce the experimental cost and the use of test organisms, especially in the case of long-term studies, reliable nominal values may therefore be fixed based on prior knowledge and literature research instead of on preliminary experiments.
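The sketch below shows what a locally D-optimal design computation can look like for a two-parameter log-logistic concentration-response model: given nominal parameter values, it searches all k-point subsets of a candidate concentration grid for the one maximizing the determinant of the Fisher information matrix. The model form, nominal values, and grid are illustrative assumptions, not the dinoseb designs of the paper.

```python
import numpy as np
from itertools import combinations

def sensitivities(c, ec50, b):
    """Gradient of the log-logistic response E = 1/(1 + (c/EC50)^b) wrt (EC50, b)."""
    r = (c / ec50) ** b
    E = 1.0 / (1.0 + r)
    return np.array([E**2 * r * b / ec50,            # dE/dEC50
                     -E**2 * r * np.log(c / ec50)])  # dE/db

def locally_d_optimal(candidates, ec50, b, k=2):
    """Exhaustive search for the k-point design maximizing det(Fisher info)."""
    best, best_det = None, -np.inf
    for design in combinations(candidates, k):
        G = np.column_stack([sensitivities(c, ec50, b) for c in design])
        det = np.linalg.det(G @ G.T)     # information matrix for unit weights
        if det > best_det:
            best, best_det = design, det
    return best

# Nominal parameter values, e.g. fixed from prior knowledge or the literature.
candidates = np.geomspace(0.01, 10.0, 40)  # candidate concentrations (mg/L)
print(locally_d_optimal(candidates, ec50=0.5, b=2.0))
```

Consistent with the abstract, such a search typically returns as many support points as there are parameters, clustered near the concentrations where the response is most sensitive to EC50 and b.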
Abstract:
Microstructure imaging from diffusion magnetic resonance (MR) data represents an invaluable tool to study the morphology of tissues non-invasively and to provide biological insight into their microstructural organization. In recent years, a variety of biophysical models have been proposed to associate particular patterns observed in the measured signal with specific microstructural properties of the neuronal tissue, such as axon diameter and fiber density. Despite very appealing results showing that the estimated microstructure indices agree very well with histological examinations, existing techniques require computationally very expensive non-linear procedures to fit the models to the data, which, in practice, demand the use of powerful computer clusters for large-scale applications. In this work, we present a general framework for Accelerated Microstructure Imaging via Convex Optimization (AMICO) and show how to re-formulate this class of techniques as convenient linear systems which can then be efficiently solved using very fast algorithms. We demonstrate this linearization of the fitting problem for two specific models, i.e. ActiveAx and NODDI, providing a very attractive alternative for parameter estimation in those techniques; the AMICO framework, however, is general and flexible enough to work also for the wider space of microstructure imaging methods. Results demonstrate that AMICO drastically accelerates the fit of existing techniques (up to four orders of magnitude faster) while preserving accuracy and precision in the estimated model parameters (correlation above 0.9). We believe that the availability of such ultrafast algorithms will help to accelerate the spread of microstructure imaging to larger cohorts of patients and to the study of a wider spectrum of neurological disorders.
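The essence of the linearization can be sketched in a few lines: precompute a dictionary whose columns are candidate response atoms, then fit each voxel with a fast non-negative least-squares solve instead of a non-linear search. The dictionary below is random stand-in data, not an actual ActiveAx or NODDI kernel.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
n_meas, n_atoms = 90, 60                  # diffusion measurements x dictionary atoms
A = np.abs(rng.normal(size=(n_meas, n_atoms)))   # simulated response dictionary
x_true = np.zeros(n_atoms)
x_true[[5, 17]] = [0.7, 0.3]              # sparse mixture of two configurations
y = A @ x_true + 0.01 * rng.normal(size=n_meas)  # noisy voxel signal

x_hat, res = nnls(A, y)                   # fast convex fit, one solve per voxel
print(np.flatnonzero(x_hat > 0.05), round(res, 4))
```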
Abstract:
Traditionally, the common reserving methods used by non-life actuaries are based on the assumption that future claims will behave in the same way as they did in the past. There are two main sources of variability in the claims development process: the variability of the speed with which the claims are settled and the variability in claims severity between accident years. Strong changes in these processes will generate distortions in the estimation of the claims reserves. The main objective of this thesis is to provide an indicator which, firstly, identifies and quantifies these two influences and, secondly, determines which model is adequate for a specific situation. Two stochastic models were analysed and the predictive distributions of the future claims were obtained. The main advantage of stochastic models is that they provide measures of the variability of the reserve estimates. The first model (PDM) combines the Dirichlet-Multinomial conjugate family with the Poisson distribution. The second model (NBDM) improves on the first by combining two conjugate families: Poisson-Gamma (for the distribution of the ultimate amounts) and Dirichlet-Multinomial (for the distribution of the incremental claims payments). It was found that the second model captures the variability of the settlement speed in the reporting process and the development of the claims severity as a function of two of the above-mentioned distributions' parameters: the shape parameter of the Gamma distribution and the Dirichlet parameter. Depending on the relation between them, we can decide on the adequacy of the claims reserve estimation method. The parameters were estimated by the Method of Moments and Maximum Likelihood. The results were tested using simulated data and then real data originating from three lines of business: Property/Casualty, General Liability, and Accident Insurance. These data include different developments and specificities. The outcome of the thesis shows that when the Dirichlet parameter is greater than the shape parameter of the Gamma distribution, the model exhibits positive correlation between past and future claims payments, which suggests the Chain-Ladder method as appropriate for claims reserve estimation. In terms of claims reserves, if the cumulated payments are high, the positive correlation implies high expectations for the future payments, resulting in high claims reserve estimates. Negative correlation appears when the Dirichlet parameter is lower than the shape parameter of the Gamma, meaning low expected future payments for the same high observed cumulated payments. This corresponds to the situation where claims are reported rapidly and few claims are expected subsequently. The extreme case appears when all claims are reported at the same time, leading to expected future payments of zero or equal to the aggregated amount of the ultimate paid claims. For this latter case, the Chain-Ladder method is not recommended.
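For context, the Chain-Ladder method whose adequacy the indicator assesses can be sketched on a toy cumulative run-off triangle (the figures are illustrative, not the thesis data):

```python
import numpy as np

def chain_ladder(triangle):
    """Classical Chain-Ladder: development factors from a cumulative
    run-off triangle, then projection to ultimates (NaN = unobserved)."""
    tri = np.asarray(triangle, dtype=float)
    n = tri.shape[1]
    factors = []
    for j in range(n - 1):
        mask = ~np.isnan(tri[:, j]) & ~np.isnan(tri[:, j + 1])
        factors.append(tri[mask, j + 1].sum() / tri[mask, j].sum())
    filled = tri.copy()
    for j in range(n - 1):
        missing = np.isnan(filled[:, j + 1])
        filled[missing, j + 1] = filled[missing, j] * factors[j]
    ultimates = filled[:, -1]
    # Reserve = projected ultimate minus the latest observed cumulative payment.
    reserves = ultimates - np.array([row[~np.isnan(row)][-1] for row in tri])
    return factors, reserves

# Toy cumulative payments triangle (accident years x development years).
tri = [[100, 160, 180],
       [110, 170, np.nan],
       [120, np.nan, np.nan]]
factors, reserves = chain_ladder(tri)
print(factors, reserves)
```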
Abstract:
Strontium ranelate appears to influence distal tibia bone microstructure, as assessed by high-resolution peripheral quantitative computed tomography (HR-pQCT), and biomechanically relevant parameters, as assessed by micro-finite element analysis (µFEA), more than alendronate over 2 years in postmenopausal osteoporotic women. Introduction: Bone microstructure changes are a target in osteoporosis treatment to increase bone strength and reduce fracture risk. Methods: Using HR-pQCT, we investigated the effects on distal tibia and radius microstructure of strontium ranelate (SrRan; 2 g/day) or alendronate (70 mg/week) for 2 years in postmenopausal osteoporotic women. This exploratory randomized, double-blind trial evaluated HR-pQCT and FEA parameters, areal bone mineral density (BMD), and bone turnover markers. Results: In the intention-to-treat population (n = 83, age: 64 ± 8 years; lumbar T-score: -2.8 ± 0.8 [DXA]), distal tibia cortical thickness (CTh) and density (DCort), and cancellous BV/TV increased by 6.3%, 1.4%, and 2.5%, respectively (all P < 0.005) with SrRan, but not with alendronate (0.9%, 0.4%, and 0.8%, NS) (P < 0.05 for all above between-group differences). The difference for CTh evaluated with a distance transformation method was close to significance (P = 0.06). The estimated failure load increased with SrRan (+2.1%, P < 0.005), not with alendronate (-0.6%, NS) (between-group difference, P < 0.01). Cortical stress was lower with SrRan (P < 0.05); both treatments decreased trabecular stress. At the distal radius, there was no between-group difference other than in DCort (P < 0.05). Bone turnover markers decreased with alendronate; bALP increased (+21%) and serum CTX-I decreased (-1%) after 2 years of SrRan (between-group difference at each time point for both markers, P < 0.0001). Both treatments were well tolerated. Conclusions: Within the constraints of the HR-pQCT method, and while a possible artefactual contribution of strontium cannot be quantified, SrRan appeared to influence distal tibia bone microstructure and FEA-determined biomechanical parameters more than alendronate. However, the magnitude of the differences is unclear and requires confirmation with another method.
Abstract:
Background: This study aimed to use plantar pressure analysis in relatively long-distance walking for objective outcome evaluation of ankle osteoarthritis treatments, i.e., ankle arthrodesis and total ankle replacement. Methods: Forty-seven subjects in four groups (three patient groups and controls) participated in the study. Each subject walked twice in 50-m trials. Plantar pressure under the pathological foot was measured using pressure insoles. Six parameters (initial contact time, terminal contact time, maximum force time, peak pressure time, maximum force, and peak pressure) were calculated and averaged over trials in ten regions of the foot. The parameters in each region were compared between patient groups and controls, and their effect size was estimated. In addition, the correlations between pressure parameters and clinical scales were calculated. Findings: Based on the temporal parameters, we observed that patients postpone the heel-off event, when high forefoot force and a high ankle moment occur. Based on maximum force and peak pressure, patients apply smoother maximum forces with the affected foot. Across the ten regions, some parameters showed improvement after total ankle replacement, some showed alteration of foot function after ankle arthrodesis, and others still showed abnormality after both surgical treatments. These parameters also showed significant correlations with clinical scales in at least two regions of the foot. Interpretation: Plantar pressure parameters in relatively long-distance trials proved to be strong tools for outcome evaluation of ankle osteoarthritis treatments. (C) 2010 Elsevier Ltd. All rights reserved.
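A sketch of how the six per-region parameters could be extracted from one stance phase of insole data is shown below; the signal shapes, contact threshold, and units are assumptions for illustration only.

```python
import numpy as np

def gait_parameters(t, pressure, force, threshold=5.0):
    """Six per-region parameters from one stance phase of insole data.

    t        : (n,) time stamps (s)
    pressure : (n,) peak pressure signal in the region (kPa)
    force    : (n,) vertical force in the region (N)
    """
    contact = np.flatnonzero(pressure > threshold)   # samples with foot contact
    return {
        "initial_contact_time": t[contact[0]],
        "terminal_contact_time": t[contact[-1]],
        "maximum_force_time": t[np.argmax(force)],
        "peak_pressure_time": t[np.argmax(pressure)],
        "maximum_force": force.max(),
        "peak_pressure": pressure.max(),
    }

t = np.linspace(0.0, 0.8, 200)                 # one stance phase (~0.8 s), toy data
force = 600.0 * np.sin(np.pi * t / 0.8) ** 2   # toy bell-shaped force curve
pressure = force / 4.0                         # toy pressure proxy (kPa)
print(gait_parameters(t, pressure, force))
```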
Abstract:
Preface: The starting point for this work, and eventually the subject of the whole thesis, was the question of how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, made them the models of choice for many theoretical constructions and practical applications. At the same time, estimation of the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem stems from the variance process, which is non-observable. There are several estimation methodologies that deal with the estimation of latent variables. One appeared particularly interesting: it proposes an estimator that, in contrast to the other methods, requires neither discretization nor simulation of the process: the Continuous Empirical Characteristic Function (ECF) estimator based on the unconditional characteristic function. However, the procedure had been derived only for stochastic volatility models without jumps. Thus, it became the subject of my research. This thesis consists of three parts. Each one is written as an independent and self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and the results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and variance processes. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, as well as of the whole thesis, is the closed-form expression for the joint unconditional characteristic function of stochastic volatility jump-diffusion models. The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equation are relevant for modelling returns of the S&P500 index, which was chosen as a general representative of the stock asset class. Hence, the next question is: what jump process should be used to model returns of the S&P500? The decision about the jump process in the framework of affine jump-diffusion models boils down to defining the intensity of the compound Poisson process, a constant or some function of state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for asset log-prices: normal, exponential, and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P500 index by a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either an exponential or a double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained. In the absence of a benchmark or any ground for comparison, it is unreasonable to be sure that our parameter estimates and the true parameters of the models coincide. The conclusion of the second chapter provides one more reason to perform that kind of test. Thus, the third part of this thesis concentrates on the estimation of parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets. The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of finding the true parameters, and the third chapter proves that our estimator indeed has the ability to do so. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question naturally appears: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used for its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure. In practice, however, this relationship is not so straightforward due to increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated; the computational effort can thus be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of estimators with bi- and three-dimensional unconditional characteristic functions on simulated data. It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, due to the limitations on computer power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for the estimation of the parameters of stochastic volatility jump-diffusion models.
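The working principle of the Continuous ECF estimator can be sketched as the minimization of a weighted integrated squared distance between the empirical characteristic function and a model characteristic function. For readability, a Gaussian model CF stands in below for the closed-form stochastic volatility jump-diffusion CF derived in the thesis; the weight function and integration grid are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
returns = rng.normal(0.05, 0.2, size=2000)    # simulated "true" returns
u = np.linspace(-10.0, 10.0, 401)             # CF argument grid
du = u[1] - u[0]
ecf = np.exp(1j * np.outer(u, returns)).mean(axis=1)   # empirical CF
weight = np.exp(-u**2)                        # keeps the integral finite

def cf_distance(theta):
    """Weighted L2 distance between empirical and (Gaussian stand-in) model CF."""
    mu, sigma = theta
    mcf = np.exp(1j * u * mu - 0.5 * (sigma * u) ** 2)
    return float(np.sum(weight * np.abs(ecf - mcf) ** 2) * du)

fit = minimize(cf_distance, x0=[0.0, 0.1], method="Nelder-Mead")
print(fit.x)   # should recover approximately (0.05, 0.2)
```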
Abstract:
The ability to determine the location and relative strength of all transcription-factor binding sites in a genome is important both for a comprehensive understanding of gene regulation and for effective promoter engineering in biotechnological applications. Here we present a bioinformatically driven experimental method to accurately define the DNA-binding sequence specificity of transcription factors. A generalized profile was used as a predictive quantitative model for binding sites, and its parameters were estimated from in vitro-selected ligands using standard hidden Markov model training algorithms. Computer simulations showed that several thousand low- to medium-affinity sequences are required to generate a profile of desired accuracy. To produce data on this scale, we applied high-throughput genomics methods to the biochemical problem addressed here. A method combining systematic evolution of ligands by exponential enrichment (SELEX) and serial analysis of gene expression (SAGE) protocols was coupled to an automated quality-controlled sequence extraction procedure based on Phred quality scores. This allowed the sequencing of a database of more than 10,000 potential DNA ligands for the CTF/NFI transcription factor. The resulting binding-site model defines the sequence specificity of this protein with a high degree of accuracy not achieved earlier and thereby makes it possible to identify previously unknown regulatory sequences in genomic DNA. A covariance analysis of the selected sites revealed non-independent base preferences at different nucleotide positions, providing insight into the binding mechanism.
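As a simplified stand-in for the generalized-profile training described above, the sketch below estimates a position weight matrix from a few aligned selected sites and scans a sequence for the best-scoring window. The toy sites merely mimic the TTGGC...GCCAA structure of CTF/NFI binding sites; a real application would train on the thousands of SELEX/SAGE-derived ligands.

```python
import numpy as np

SITES = ["TTGGCTTTTGCCAA", "TTGGCATATGCCAA", "TTGGCACGAGCCAA"]  # toy aligned sites
ALPHABET = "ACGT"

counts = np.ones((len(SITES[0]), 4))             # +1 pseudocount per base
for site in SITES:
    for i, base in enumerate(site):
        counts[i, ALPHABET.index(base)] += 1
freqs = counts / counts.sum(axis=1, keepdims=True)
pwm = np.log2(freqs / 0.25)                      # log-odds vs uniform background

def score(window):
    """Additive log-odds score of one sequence window under the PWM."""
    return sum(pwm[i, ALPHABET.index(b)] for i, b in enumerate(window))

genome = "ACGTTGGCTATTGCCAATCG"                  # toy genomic sequence
L = pwm.shape[0]
hits = [(i, score(genome[i:i + L])) for i in range(len(genome) - L + 1)]
print(max(hits, key=lambda h: h[1]))             # position and score of best hit
```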
Abstract:
The slow-phase velocity of nystagmus is one of the most sensitive parameters of vestibular function and is currently the standard for evaluating the caloric test. However, the assessment of this parameter requires recording the response using nystagmography. The aim of this study was to evaluate whether the frequency and duration of the caloric nystagmus, as measured in a clinical test with Frenzel glasses, could predict the result of the recorded test. The retrospective analysis of 222 caloric test results recorded by means of electronystagmography showed a good association between the 3 parameters for unilateral weakness. The asymmetry observed in the velocity can be predicted by a combination of frequency and duration. On the other hand, no relationship was observed between the parameters for directional preponderance. These results indicate that a clinical caloric test with frequency and duration as parameters can be used to predict the unilateral weakness that would otherwise be obtained by nystagmography. We propose an evaluation of the caloric test on the basis of diagrams combining the 3 response parameters.
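One standard way to quantify the asymmetry for any of the three response parameters is a Jongkees-style index over the four caloric irrigations; the sketch below applies it to frequency and duration. The formula is the conventional one from caloric testing, while the numbers, and the assumption that the paper's diagrams combine the parameters this way, are illustrative.

```python
def unilateral_weakness(right_warm, right_cold, left_warm, left_cold):
    """Jongkees-style asymmetry index (%), usable with slow-phase velocity,
    frequency, or duration of the caloric nystagmus."""
    right, left = right_warm + right_cold, left_warm + left_cold
    return 100.0 * (right - left) / (right + left)

# Hypothetical caloric responses for one patient.
uw_freq = unilateral_weakness(45, 50, 30, 28)       # frequency (beats / 30 s)
uw_dur = unilateral_weakness(110, 120, 80, 85)      # duration (s)
print(f"UW(frequency) = {uw_freq:.0f}%, UW(duration) = {uw_dur:.0f}%")
```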