929 results for: Asymptotic behaviour, Bayesian methods, Mixture models, Overfitting, Posterior concentration
Abstract:
This thesis extends the applicability of charge-stabilized colloidal systems as model systems for fundamental questions of solid-state physics and thermodynamics to binary mixtures. In this context, it investigates the phase behaviour, and properties connected with it, of binary mixtures of charge-stabilized, spherical colloidal particles in aqueous suspension. Since the behaviour of hard-sphere-like systems at high background-ion concentrations is already well known, this work concentrates on systems with very long-ranged repulsions under deionized conditions. In addition to established methods of microscopy and static light scattering for determining phase diagrams, the time-dependent evolution of the shear modulus is also monitored in order to study slow solidification kinetics. Specifically, mixtures of components of different size and charge with size ratios of 0.9, 0.82, 0.57, 0.39 and 0.37 are investigated. In this order, they exhibit phase diagrams with a spindle-shaped fluid/crystal coexistence region as well as azeotropic and eutectic phase diagrams. The structural investigations by static light scattering are in practically all cases consistent with disordered bcc substitutional crystals, which is confirmed via models applied to the shear-modulus measurements. For the spindle-type system, a surprisingly wide coexistence region is observed, which is not expected from theory. The position, but not the shape, of the solidus agrees quantitatively with simulation predictions for one-component systems. For the eutectic system at a radius ratio of 0.57, the influence of gravity on the phase behaviour and the solidification kinetics is investigated. Here, gravitationally assisted demixing preceding the crystallization of the smaller majority component favours solidification.
Morphologies known from other systems (facets, dendrites) are observed, as well as, for the first time, a columnar eutectic morphology. From the results, the first comprehensive overview of the phase behaviour of deionized mixtures of charge-stabilized, spherical particles is compiled, which includes a discussion of data from other authors and from our group on fluid-fluid phase separation and on a system with an upper azeotropic point. Most phase-diagram types specific to metals can be reproduced with charge-stabilized colloidal particles. The particles with long-ranged interactions show a substantially improved substitutional miscibility compared with hard-sphere and metal systems. The size ratio of the spherical particles plays the decisive role for the type of phase diagram.
Abstract:
Boron Neutron Capture Therapy (BNCT) is an indirect form of radiotherapy that destroys tumour cells through the targeted release of densely ionizing radiation. The released ions are fission fragments of a nuclear reaction in which the isotope 10B captures a low-energy (thermal) neutron. The 10B is enriched in the tumour cells by a special boron compound, which is itself not radioactive. At the Johannes Gutenberg University Mainz, research towards a clinical treatment protocol was prompted by two compassionate-use treatments of patients with colorectal liver metastases at the University of Pavia, Italy, in which the liver was irradiated outside the body in a research reactor. As a first step, a clinical study was initiated in cooperation between several university institutes to determine clinically relevant parameters, such as the boron distribution in different tissues and the pharmacokinetic uptake behaviour of the boron compound. The boron concentration in the tissue samples was determined with respect to its spatial distribution in different cell areas, in order to learn more about the cellular uptake of BPA in relation to the cells' biological characteristics. Boron determination was carried out by Quantitative Neutron Capture Radiography, Prompt Gamma Activation Analysis and Inductively Coupled Plasma Mass Spectroscopy, in parallel with histological analysis of the tissue. It could be shown that a very heterogeneous boron distribution is present in samples from tumour tissue and from tumour-free tissue with different morphological properties. The results of the blood samples are being used to construct a pharmacokinetic model and are in agreement with existing pharmacokinetic models.
In addition, the boron determination methods were compared with one another using specially prepared reference standards. Good agreement between the results was found, and standard analysis protocols were established for all biological samples. The results of the clinical study obtained so far are promising, but do not yet permit final conclusions regarding the efficacy of BNCT for malignant liver diseases.
Abstract:
The physico-chemical characterization, structure-pharmacokinetic and metabolism studies of new semisynthetic analogues of natural bile acids (BAs), developed as drug candidates, have been performed. Recent studies discovered a role of BAs as agonists of the FXR and TGR5 receptors, thus opening new therapeutic targets for the treatment of liver diseases and metabolic disorders. Up to twenty new semisynthetic analogues have been synthesized and studied in order to find promising novel drug candidates. To define the BAs' structure-activity relationship, their main physico-chemical properties (solubility, detergency, lipophilicity and affinity for serum albumin) have been measured with validated analytical methodologies. Their metabolism and biodistribution have been studied in the "bile fistula rat" model, in which each BA is acutely administered through duodenal and femoral infusion and bile is collected at different time intervals, allowing the relationships between structure and intestinal absorption, hepatic uptake, metabolism and systemic spill-over to be defined. One of the studied analogues, 6α-ethyl-3α,7α-dihydroxy-5β-cholanic acid, an analogue of CDCA (INT 747, Obeticholic Acid (OCA)), recently under approval for the treatment of cholestatic liver diseases, requires additional studies to ensure its safety and lack of toxicity when administered to patients with strong liver impairment. For this purpose, an animal model of hepatic decompensation (cirrhosis) induced by CCl4 inhalation in rats has been developed and used to characterize the biodistribution of OCA with respect to control animals, in order to establish whether peripheral tissues might also be exposed as a result of toxic plasma levels of OCA, and to evaluate the biodistribution of endogenous BAs as well. An accurate and sensitive HPLC-ES-MS/MS method has been developed to identify and quantify all BAs in biological matrices (bile, plasma, urine, liver, kidney, intestinal content and tissue), for which sample pretreatment has been optimized.
Abstract:
In recent years, several previously unknown phenomena have been observed experimentally, such as the existence of distinct pre-nucleation structures. These have contributed to a new understanding of the processes occurring on the molecular level during the nucleation and growth of crystals. The effects of such pre-nucleation structures on the process of biomineralization are not yet sufficiently understood. The mechanisms by which biomolecular modifiers, such as peptides, may interact with pre-nucleation structures and thereby influence the nucleation process of minerals are manifold. Molecular simulations are well suited to analysing the formation of pre-nucleation structures in the presence of modifiers. This thesis describes an approach to analysing the interaction of peptides with the dissolved constituents of the emerging crystals by means of molecular dynamics simulations. To enable informative simulations, the quality of existing force fields was first examined with respect to their description of oligoglutamates interacting with calcium ions in aqueous solution. Large discrepancies between established force fields became apparent, and none of the examined force fields provided a realistic description of the ion pairing of these complex ions. A strategy for optimizing existing biomolecular force fields in this respect was therefore developed. Relatively small changes of the parameters governing the ion-peptide van der Waals interactions sufficed to obtain a reliable model for the system under study. Comprehensive sampling of the phase space of these systems poses a particular challenge owing to the numerous degrees of freedom and the strong interactions between calcium ions and glutamate in solution.
The method of biasing-potential replica exchange molecular dynamics simulations was therefore tuned for the sampling of oligoglutamates, and peptides of different chain lengths were simulated in the presence of calcium ions. Using sketch-map analysis, numerous stable ion-peptide complexes that could influence the formation of pre-nucleation structures were identified in the simulations. Depending on the chain length of the peptide, these complexes exhibit characteristic distances between the calcium ions. These resemble some of the calcium-calcium distances in those phases of calcium oxalate crystals grown in the presence of oligoglutamates. The analogy between the calcium-ion distances in dissolved ion-peptide complexes and in calcium oxalate crystals may point to the importance of ion-peptide complexes in the nucleation and growth of biominerals, and offers a possible explanation for the experimentally observed ability of oligoglutamates to influence the phase of the forming crystal.
Abstract:
Background: The estimation of demographic parameters from genetic data often requires the computation of likelihoods. However, the likelihood function is computationally intractable for many realistic evolutionary models, and the use of Bayesian inference has therefore been limited to very simple models. The situation changed recently with the advent of Approximate Bayesian Computation (ABC) algorithms, which allow one to obtain parameter posterior distributions based on simulations, without likelihood computations. Results: Here we present ABCtoolbox, a series of open-source programs to perform Approximate Bayesian Computation (ABC). It implements various ABC algorithms, including rejection sampling, MCMC without likelihood, a particle-based sampler and ABC-GLM. ABCtoolbox is bundled with, but not limited to, a program that allows parameter inference in a population-genetics context and the simultaneous use of different types of markers with different ploidy levels. In addition, ABCtoolbox can also interact with most simulation and summary-statistics computation programs. The usability of ABCtoolbox is demonstrated by inferring the evolutionary history of two evolutionary lineages of Microtus arvalis. Using nuclear microsatellites and mitochondrial sequence data in the same estimation procedure enabled us to infer sex-specific population sizes and migration rates, and to find that males show smaller population sizes but much higher levels of migration than females. Conclusion: ABCtoolbox allows a user to perform all the necessary steps of a full ABC analysis, from sampling parameters from prior distributions, through data simulation, computation of summary statistics, estimation of posterior distributions, model choice and validation of the estimation procedure, to visualization of the results.
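The core rejection-sampling scheme underlying such ABC analyses can be sketched in a few lines. The toy problem below (inferring the mean of a normal distribution with known spread, using the sample mean as summary statistic) and all names and numbers are ours for illustration; this is not ABCtoolbox's actual code.

```python
import random
import statistics

# Toy ABC rejection sampler -- a sketch of the basic algorithm, not
# ABCtoolbox's implementation. Problem: infer the mean mu of a normal
# distribution with known sd = 1, using the sample mean as summary statistic.
random.seed(0)
observed = [random.gauss(3.0, 1.0) for _ in range(50)]  # "data" with true mu = 3
s_obs = statistics.mean(observed)                       # observed summary statistic

def simulate(mu, n=50):
    """Simulate a dataset under parameter mu and return its summary statistic."""
    return statistics.mean(random.gauss(mu, 1.0) for _ in range(n))

accepted = []
epsilon = 0.1                                 # tolerance on the summary distance
while len(accepted) < 200:
    mu = random.uniform(-10.0, 10.0)          # draw a candidate from a flat prior
    if abs(simulate(mu) - s_obs) < epsilon:   # keep draws whose simulations match
        accepted.append(mu)

posterior_mean = statistics.mean(accepted)    # approximate posterior mean of mu
```

The accepted draws approximate the posterior of mu; shrinking epsilon trades acceptance rate for approximation quality, which is exactly the tuning knob the more elaborate samplers (MCMC without likelihood, particle-based) are designed to improve on.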
Abstract:
Complete basis set and Gaussian-n methods were combined with Barone and Cossi's implementation of the polarizable conductor model (CPCM) continuum solvation methods to calculate pKa values for six carboxylic acids. Four different thermodynamic cycles were considered in this work. An experimental value of −264.61 kcal/mol for the free energy of solvation of H+, ΔGs(H+), was combined with a value for Ggas(H+) of −6.28 kcal/mol, to calculate pKa values with cycle 1. The complete basis set gas-phase methods used to calculate gas-phase free energies are very accurate, with mean unsigned errors of 0.3 kcal/mol and standard deviations of 0.4 kcal/mol. The CPCM solvation calculations used to calculate condensed-phase free energies are slightly less accurate than the gas-phase models, and the best method has a mean unsigned error and standard deviation of 0.4 and 0.5 kcal/mol, respectively. Thermodynamic cycles that include an explicit water in the cycle are not accurate when the free energy of solvation of a water molecule is used, but appear to become accurate when the experimental free energy of vaporization of water is used. This apparent improvement is an artifact of the standard state used in the calculation. Geometry relaxation in solution does not improve the results when using these latter cycles. The use of cycle 1 and the complete basis set models combined with the CPCM solvation methods yielded pKa values accurate to less than half a pKa unit. © 2001 John Wiley & Sons, Inc. Int J Quantum Chem, 2001
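The arithmetic behind such a thermodynamic cycle is compact: the aqueous deprotonation free energy of HA → A− + H+ is assembled from a gas-phase deprotonation free energy and solvation free energies, then converted to pKa via ΔG_aq = RT ln(10) · pKa. In the sketch below, ΔGs(H+) = −264.61 kcal/mol is the experimental value quoted above; the acid-specific inputs are illustrative placeholders, not values from the paper.

```python
import math

# Sketch of the cycle-1 bookkeeping described in the abstract.
# dG_solv_H = -264.61 kcal/mol is the quoted experimental value; the
# acid-specific free energies passed in below are illustrative only.
R = 0.0019872041   # gas constant, kcal/(mol K)
T = 298.15         # temperature, K

def pka_cycle1(dG_gas_deprot, dG_solv_HA, dG_solv_A, dG_solv_H=-264.61):
    """dG_gas_deprot (kcal/mol) must already include Ggas(H+) = -6.28 kcal/mol."""
    # aqueous deprotonation free energy from the cycle
    dG_aq = dG_gas_deprot + (dG_solv_A + dG_solv_H) - dG_solv_HA
    # convert kcal/mol to pKa units: dG_aq = RT * ln(10) * pKa
    return dG_aq / (R * T * math.log(10))

# Hypothetical acid: numbers chosen to land near a typical carboxylic-acid pKa.
pka = pka_cycle1(dG_gas_deprot=341.1, dG_solv_HA=-6.7, dG_solv_A=-77.6)
```

Since RT ln(10) ≈ 1.36 kcal/mol at 298 K, the half-pKa-unit accuracy reported above corresponds to roughly 0.7 kcal/mol in the assembled ΔG_aq, which is why sub-kcal errors in each leg of the cycle matter.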
Abstract:
Complete Basis Set and Gaussian-n methods were combined with CPCM continuum solvation methods to calculate pKa values for six carboxylic acids. An experimental value of −264.61 kcal/mol for the free energy of solvation of H+, ΔGs(H+), was combined with a value for Ggas(H+) of −6.28 kcal/mol to calculate pKa values with Cycle 1. The Complete Basis Set gas-phase methods used to calculate gas-phase free energies are very accurate, with mean unsigned errors of 0.3 kcal/mol and standard deviations of 0.4 kcal/mol. The CPCM solvation calculations used to calculate condensed-phase free energies are slightly less accurate than the gas-phase models, and the best method has a mean unsigned error and standard deviation of 0.4 and 0.5 kcal/mol, respectively. The use of Cycle 1 and the Complete Basis Set models combined with the CPCM solvation methods yielded pKa values accurate to less than half a pKa unit.
Abstract:
The complete basis set methods CBS-4, CBS-QB3, and CBS-APNO, and the Gaussian methods G2 and G3 were used to calculate the gas phase energy differences between six different carboxylic acids and their respective anions. Two different continuum methods, SM5.42R and CPCM, were used to calculate the free energy differences of solvation for the acids and their anions. Relative pKa values were calculated for each acid using one of the acids as a reference point. The CBS-QB3 and CBS-APNO gas phase calculations, combined with the CPCM/HF/6-31+G(d)//HF/6-31G(d) or CPCM/HF/6-31+G(d)//HF/6-31+G(d) continuum solvation calculations on the lowest energy gas phase conformer, and with the conformationally averaged values, give results accurate to ½ pKa unit. © 2001 American Institute of Physics.
Abstract:
OBJECTIVES: To test the survival rates, and the technical and biological complication rates of customized zirconia and titanium abutments 5 years after crown insertion. MATERIAL AND METHODS: Twenty-two patients with 40 single implants in maxillary and mandibular canine and posterior regions were included. The implant sites were randomly assigned to zirconia abutments supporting all-ceramic crowns or titanium abutments supporting metal-ceramic crowns. Clinical examinations were performed at baseline, and at 6, 12, 36 and 60 months of follow-up. The abutments and reconstructions were examined for technical and/or biological complications. Probing pocket depth (PPD), plaque control record (PCR) and bleeding on probing (BOP) were assessed at abutments (test) and analogous contralateral teeth (control). Radiographs of the implants revealed the bone level (BL) on mesial (mBL) and distal (dBL) sides. Data were statistically analyzed with nonparametric mixed models provided by Brunner and Langer and STATA (P < 0.05). RESULTS: Eighteen patients with 18 zirconia and 10 titanium abutments were available at a mean follow-up of 5.6 years (range 4.5-6.3 years). No abutment fracture or loss of a reconstruction occurred. Hence, the survival rate was 100% for both. Survival of implants supporting zirconia abutments was 88.9% and 90% for implants supporting titanium abutments. Chipping of the veneering ceramic occurred at three metal-ceramic crowns supported by titanium abutments. No significant differences were found between the zirconia and titanium abutments for PPD (mean PPD: ZrO2 3.3 ± 0.6 mm, Ti 3.6 ± 1.1 mm), PCR (mean PCR: ZrO2 0.1 ± 0.3, Ti 0.3 ± 0.2) and BOP (mean BOP: ZrO2 0.5 ± 0.3, Ti 0.6 ± 0.3). Moreover, the BL was similar at implants supporting zirconia and titanium abutments (ZrO2: mBL 1.8 ± 0.5, dBL 2.0 ± 0.8; Ti: mBL 2.0 ± 0.8, dBL 1.9 ± 0.8).
CONCLUSIONS: There were no statistically or clinically relevant differences between the 5-year survival rates, and the technical and biological complication rates of zirconia and titanium abutments in posterior regions.
Abstract:
In many applications the observed data can be viewed as a censored high dimensional full data random variable X. By the curse of dimensionality it is typically not possible to construct estimators that are asymptotically efficient at every probability distribution in a semiparametric censored data model of such a high dimensional censored data structure. We provide a general method for construction of one-step estimators that are efficient at a chosen submodel of the full-data model, are still well behaved off this submodel and can be chosen to always improve on a given initial estimator. These one-step estimators rely on good estimators of the censoring mechanism and thus will require a parametric or semiparametric model for the censoring mechanism. We present a general theorem that provides a template for proving the desired asymptotic results. We illustrate the general one-step estimation methods by constructing locally efficient one-step estimators of marginal distributions and regression parameters with right-censored data, current status data and bivariate right-censored data, in all models allowing the presence of time-dependent covariates. The conditions of the asymptotic theorem are rigorously verified in one of the examples and the key condition of the general theorem is verified for all examples.
Abstract:
In epidemiological work, outcomes are frequently non-normal, sample sizes may be large, and effects are often small. To relate health outcomes to geographic risk factors, fast and powerful methods for fitting spatial models, particularly for non-normal data, are required. We focus on binary outcomes, with the risk surface a smooth function of space. We compare penalized likelihood models, including the penalized quasi-likelihood (PQL) approach, and Bayesian models based on fit, speed, and ease of implementation. A Bayesian model using a spectral basis representation of the spatial surface provides the best tradeoff of sensitivity and specificity in simulations, detecting real spatial features while limiting overfitting and being more efficient computationally than other Bayesian approaches. One of the contributions of this work is further development of this underused representation. The spectral basis model outperforms the penalized likelihood methods, which are prone to overfitting, but is slower to fit and not as easily implemented. Conclusions based on a real dataset of cancer cases in Taiwan are similar albeit less conclusive with respect to comparing the approaches. The success of the spectral basis with binary data and similar results with count data suggest that it may be generally useful in spatial models and more complicated hierarchical models.
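The spectral basis idea can be made concrete with a minimal sketch: represent the logit of the risk surface as a low-order Fourier expansion over the unit square. The basis order and coefficients below are arbitrary choices for illustration, not the paper's model.

```python
import math

# Minimal sketch of a spectral (Fourier) basis for a smooth spatial surface on
# the unit square, with a logistic link for binary outcomes. Basis order K and
# the coefficients used at call sites are illustrative placeholders.
def spectral_basis(x, y, K=3):
    """Evaluate a small 2-D Fourier basis at location (x, y) in [0, 1]^2."""
    feats = [1.0]                                   # intercept
    for kx in range(1, K + 1):
        for ky in range(1, K + 1):
            phase = 2.0 * math.pi * (kx * x + ky * y)
            feats.append(math.cos(phase))
            feats.append(math.sin(phase))
    return feats                                    # length 1 + 2*K*K

def risk(x, y, coefs):
    """Probability of a binary outcome at (x, y) for given basis coefficients."""
    eta = sum(c * f for c, f in zip(coefs, spectral_basis(x, y)))
    return 1.0 / (1.0 + math.exp(-eta))             # logistic link
```

A Bayesian treatment would place priors on the coefficients, with the prior variance shrinking at higher frequencies to limit overfitting; the small, fixed basis is what keeps this representation computationally cheaper than a full spatial covariance.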
Abstract:
Traffic particle concentrations show considerable spatial variability within a metropolitan area. We consider latent variable semiparametric regression models for modeling the spatial and temporal variability of black carbon and elemental carbon concentrations in the greater Boston area. Measurements of these pollutants, which are markers of traffic particles, were obtained from several individual exposure studies conducted at specific household locations, as well as from 15 ambient monitoring sites in the city. The models allow for both flexible, nonlinear effects of covariates and unexplained spatial and temporal variability in exposure. In addition, the different individual exposure studies recorded different surrogates of traffic particles, with some recording only outdoor concentrations of black or elemental carbon, some recording indoor concentrations of black carbon, and others recording both indoor and outdoor concentrations of black carbon. A joint model for outdoor and indoor exposure that specifies a spatially varying latent variable provides greater spatial coverage in the area of interest. We propose a penalised spline formulation of the model that relates to generalised kriging of the latent traffic pollution variable and leads to a natural Bayesian Markov chain Monte Carlo algorithm for model fitting. We propose methods that allow us to control the degrees of freedom of the smoother in a Bayesian framework. Finally, we present results from an analysis that applies the model to data from summer and winter separately.
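A penalised spline smoother of the kind this formulation builds on can be sketched with a truncated-linear basis and a ridge penalty on the knot coefficients. The toy fit below uses plain gradient descent on made-up data; the Bayesian MCMC machinery and latent-variable structure of the paper are not reproduced.

```python
import math
import random

# Sketch of a penalised ("P-") spline smoother: truncated-linear basis plus a
# ridge penalty on the knot coefficients, fitted by gradient descent on a toy
# noisy-sine dataset. Data, knots and the penalty weight are illustrative.
random.seed(1)
xs = [i / 50 for i in range(51)]
ys = [math.sin(2 * math.pi * x) + random.gauss(0, 0.2) for x in xs]
knots = [k / 10 for k in range(1, 10)]

def basis(x):
    # intercept, linear term, and truncated-linear terms (x - knot)_+
    return [1.0, x] + [max(x - k, 0.0) for k in knots]

lam = 1.0                          # penalty weight: larger -> smoother fit
beta = [0.0] * (2 + len(knots))
for _ in range(2000):              # gradient descent on penalised least squares
    grad = [0.0] * len(beta)
    for x, y in zip(xs, ys):
        b = basis(x)
        r = sum(bi * ci for bi, ci in zip(b, beta)) - y
        for j, bj in enumerate(b):
            grad[j] += 2 * r * bj
    for j in range(2, len(beta)):  # penalise only the knot coefficients
        grad[j] += 2 * lam * beta[j]
    beta = [c - 0.005 * g / len(xs) for c, g in zip(beta, grad)]

fit = [sum(bi * ci for bi, ci in zip(basis(x), beta)) for x in xs]
```

The penalty weight plays the role that a prior variance plays in the Bayesian version, which is what makes "controlling the degrees of freedom of the smoother" a matter of choosing (or learning) that single hyperparameter.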
Abstract:
A number of authors have studied the mixture survival model to analyze survival data with nonnegligible cure fractions. A key assumption made by these authors is the independence between the survival time and the censoring time. To our knowledge, no one has studied the mixture cure model in the presence of dependent censoring. To account for such dependence, we propose a more general cure model which allows for dependent censoring. In particular, we derive the cure models from the perspective of competing risks and model the dependence between the censoring time and the survival time using a class of Archimedean copula models. Within this framework, we consider the parameter estimation, the cure detection, and the two-sample comparison of latency distributions in the presence of dependent censoring when a proportion of patients is deemed cured. Large sample results are obtained using martingale theory. We applied the proposed methodologies to the SEER prostate cancer data.
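One member of the Archimedean family, the Clayton copula, makes the dependent-censoring construction concrete: sample a dependent pair of uniforms and map them to survival and censoring times through the marginal distributions. The dependence parameter and the unit-exponential marginals below are illustrative choices, not the paper's specification.

```python
import math
import random

# Sketch: generate a dependent (survival time, censoring time) pair through a
# Clayton copula, one member of the Archimedean family. theta > 0 gives
# positive dependence; unit-exponential marginals are an illustrative choice.
def clayton_pair(theta, rng):
    """Sample (u1, u2) from a Clayton copula by the conditional method."""
    u1, w = rng.random(), rng.random()
    u2 = (u1 ** (-theta) * (w ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)
    return u1, u2

rng = random.Random(42)
pairs = [clayton_pair(5.0, rng) for _ in range(5000)]

# map the dependent uniforms to exponential survival and censoring times
times = [(-math.log(1.0 - u1), -math.log(1.0 - u2)) for u1, u2 in pairs]
```

For the Clayton copula, Kendall's tau equals theta / (theta + 2), so theta = 5 corresponds to strong positive dependence between the survival and censoring times; the independence assumption criticized above is the theta → 0 limit.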
Abstract:
Medical errors originating in health care facilities are a significant source of preventable morbidity, mortality, and healthcare costs. Voluntary error report systems that collect information on the causes and contributing factors of medical errors regardless of the resulting harm may be useful for developing effective harm prevention strategies. Some patient safety experts question the utility of data from errors that did not lead to harm to the patient, also called near misses. A near miss (a.k.a. close call) is an unplanned event that did not result in injury to the patient. Only a fortunate break in the chain of events prevented injury. We use data from a large voluntary reporting system of 836,174 medication errors from 1999 to 2005 to provide evidence that the causes and contributing factors of errors that result in harm are similar to the causes and contributing factors of near misses. We develop Bayesian hierarchical models for estimating the log odds of selecting a given cause (or contributing factor) of error given harm has occurred and the log odds of selecting the same cause given that harm did not occur. The posterior distribution of the correlation between these two vectors of log-odds is used as a measure of the evidence supporting the use of data from near misses and their causes and contributing factors to prevent medical errors. In addition, we identify the causes and contributing factors that have the highest or lowest log-odds ratio of harm versus no harm. These causes and contributing factors should also be a focus in the design of prevention strategies. This paper provides important evidence on the utility of data from near misses, which constitute the vast majority of errors in our data.
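The basic quantity being modeled, the log odds of a given cause appearing in harm versus no-harm reports, can be illustrated with made-up counts; the hierarchical Bayesian model and the posterior correlation of the paper are not reproduced here.

```python
import math

# Illustration of the modeled quantity: for each cause of error, the log odds
# that a report cites that cause, computed separately for harm and no-harm
# (near-miss) events. All counts below are made up for illustration.
harm_counts = {"dosing": 120, "labeling": 45, "handoff": 30, "other": 305}
no_harm_counts = {"dosing": 4100, "labeling": 1600, "handoff": 900, "other": 9400}

def log_odds(counts):
    """log(n_c / (N - n_c)) for each cause c, where N is the total count."""
    total = sum(counts.values())
    return {c: math.log(n / (total - n)) for c, n in counts.items()}

lo_harm = log_odds(harm_counts)
lo_near = log_odds(no_harm_counts)
# strong agreement between these two vectors (here: a high correlation across
# causes) is the kind of evidence the paper's posterior correlation quantifies
```

In the paper this comparison is done within a Bayesian hierarchical model, so the correlation between the two log-odds vectors comes with a full posterior distribution rather than a single point estimate.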