Abstract:
The aim of this dissertation is the experimental characterization and quantitative description of the hybridization of complementary nucleic acid strands with surface-bound capture molecules for the development of integrated biosensors. In contrast to solution-based methods, microarray substrates allow many nucleic acid combinations to be investigated in parallel. The actin gene from different plant species, which is universally expressed in eukaryotes, was used as a biologically relevant evaluation system. This test system makes it possible to characterize closely related plant species on the basis of small differences in the gene sequence (SNPs). Building on this well-studied model of a housekeeping gene, a comprehensive microarray system consisting of short and long oligonucleotides (with incorporated LNA molecules), cDNAs, and DNA and RNA targets was realized. This enabled the development of a test system optimized for online measurement with high signal intensities. Based on these results, the entire signal path from nucleic acid concentration to the digital value was modeled. The insights into the kinetics and thermodynamics of hybridization gained from the development work and the experiments are summarized in three publications that form the backbone of this dissertation. The first publication describes the improvement in reproducibility and specificity of microarray results achieved by online measurement of kinetics and thermodynamics compared with endpoint-based measurements on standard microarrays. Two algorithms were developed for the analysis of the enormous amounts of data: a reaction-kinetic modeling of the isotherms and a description of the melting transition based on Fermi-Dirac statistics. These algorithms are described in the second publication. By realizing identical sequences in the chemically different nucleic acids (DNA, RNA, and LNA), defined differences in the conformation of the ribose ring and in the C5 methyl group of the pyrimidines can be investigated. The competitive interaction of these different nucleic acids of identical sequence and its effects on kinetics and thermodynamics are the subject of the third publication. Beyond the molecular-biological and technological development in the sensing of hybridization reactions of surface-bound nucleic acid molecules, the automated analysis and modeling of the resulting data volumes, and the associated improved quantitative description of the kinetics and thermodynamics of these reactions, the results contribute to a better understanding of the physico-chemical structure of this most elementary biological molecule and of its still incompletely understood specificity.
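As an illustration of the two evaluation approaches named above (reaction-kinetic modeling of the isotherms and a Fermi-Dirac-type description of the melting transition), the following sketch fits both model forms; the function names, parameters (theta_max, Tm, width, k_on, k_off) and the simulated data are illustrative assumptions, not taken from the cited publications.

```python
# Hedged sketch: a Fermi-Dirac-type melting curve and a reaction-limited
# (Langmuir-type) hybridization isotherm. Parameter names and values are
# illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def melting_curve(T, theta_max, Tm, width):
    """Fraction of hybridized surface probes vs. temperature T (K)."""
    return theta_max / (1.0 + np.exp((T - Tm) / width))

def hybridization_isotherm(t, c, k_on, k_off, theta_max):
    """Reaction-limited surface coverage vs. time t at target concentration c."""
    k_obs = k_on * c + k_off
    return theta_max * (k_on * c / k_obs) * (1.0 - np.exp(-k_obs * t))

# Example: fit a simulated melting transition to recover Tm
T = np.linspace(300.0, 360.0, 61)
signal = melting_curve(T, 1.0, 330.0, 2.5) + 0.02 * np.random.randn(T.size)
popt, _ = curve_fit(melting_curve, T, signal, p0=[1.0, 325.0, 3.0])
print("fitted Tm = %.1f K" % popt[1])
```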
Abstract:
Waste management is an important issue in our society, and Waste-to-Energy incineration plants have played a significant role in recent decades, with growing importance in Europe. One of the main issues posed by waste combustion is the generation of air contaminants. Particular concern surrounds acid gases, mainly hydrogen chloride and sulfur oxides, due to their potential impact on the environment and on human health. Therefore, in the present study the main available technological options for flue gas treatment were analyzed, focusing on dry treatment systems, which are increasingly applied in Municipal Solid Waste (MSW) incinerators. An operational model was proposed to describe and optimize the acid gas removal process. It was applied to an existing MSW incineration plant, where acid gases are neutralized in a two-stage dry treatment system. This process is based on the injection of powdered calcium hydroxide and sodium bicarbonate into reactors followed by fabric filters. HCl and SO2 conversions were expressed as a function of reactant flow rates, with model parameters calculated from literature and plant data. The implementation in process simulation software allowed the identification of optimal operating conditions, taking into account the reactant feed rates, the amount of solid products, and the recycling of the sorbent. Alternative configurations of the reference plant were also assessed. The applicability of the operational model was extended by also developing a fundamental approach to the issue: a predictive model was developed, describing the mass transfer and kinetic phenomena governing acid gas neutralization with solid sorbents. The rate-controlling steps were identified through the reproduction of literature data, allowing the description of acid gas removal in the case study analyzed. A laboratory device was also designed and started up to assess the required model parameters.
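The abstract does not give the functional form of the conversion model, so the following sketch only illustrates the kind of operational relation described: conversion as a saturating function of the stoichiometric ratio, with an assumed exponential form and hypothetical parameter values.

```python
# Hedged sketch of an operational removal model: conversion X as a saturating
# function of the stoichiometric ratio SR. The exponential form, the k value
# and the flows below are assumptions for illustration, not the plant model.
import numpy as np

def conversion(sr, k):
    """Acid gas conversion as a function of the stoichiometric ratio sr."""
    return 1.0 - np.exp(-k * sr)

def required_sorbent(target_x, k, acid_molar_flow, stoich_coeff, molar_mass):
    """Sorbent mass flow [kg/h] needed to reach a target conversion."""
    sr = -np.log(1.0 - target_x) / k
    return sr * stoich_coeff * acid_molar_flow * molar_mass

# Example: Ca(OH)2 feed for 95% HCl removal (hypothetical k and flows)
feed = required_sorbent(0.95, k=1.8,
                        acid_molar_flow=10.0,   # kmol/h of HCl in the flue gas
                        stoich_coeff=0.5,       # mol Ca(OH)2 per mol HCl
                        molar_mass=74.1)        # kg/kmol Ca(OH)2
print("Ca(OH)2 feed ~ %.0f kg/h" % feed)
```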
Abstract:
I present a new experimental method called Total Internal Reflection Fluorescence Cross-Correlation Spectroscopy (TIR-FCCS). It is a method that can probe hydrodynamic flows near solid surfaces on length scales of tens of nanometres. Fluorescent tracers flowing with the liquid are excited by evanescent light, produced by epi-illumination through the periphery of a high-NA oil-immersion objective. Due to the fast decay of the evanescent wave, fluorescence only occurs for tracers within ~100 nm of the surface, resulting in very high normal resolution. The time-resolved fluorescence intensity signals from two laterally shifted (in the flow direction) observation volumes, created by two confocal pinholes, are independently measured and recorded. The cross-correlation of these signals provides important information about the tracers' motion and thus their flow velocity. Due to the high sensitivity of the method, fluorescent species of different sizes, down to single dye molecules, can be used as tracers. The aim of my work was to build an experimental setup for TIR-FCCS and use it to measure the shear rate and slip length of water flowing over hydrophilic and hydrophobic surfaces. However, in order to extract these parameters from the measured correlation curves, a quantitative data analysis is needed. This is not a straightforward task, because the complexity of the problem makes it impossible to derive the analytical expressions for the correlation functions needed to fit the experimental data. Therefore, in order to process and interpret the experimental results, I also describe a new numerical method for analysing the acquired auto- and cross-correlation curves: Brownian Dynamics techniques are used to produce simulated auto- and cross-correlation functions and to fit the corresponding experimental data. I show how to combine detailed and fairly realistic theoretical modelling of the phenomena with accurate measurements of the correlation functions, in order to establish a fully quantitative method to retrieve the flow properties from the experiments. An importance-sampling Monte Carlo procedure is employed to fit the experiments, providing the optimum parameter values together with their statistical error bars. The approach is well suited for both modern desktop PCs and massively parallel computers; the latter allow the data analysis to be completed within short computing times. I applied this method to study the flow of an aqueous electrolyte solution near smooth hydrophilic and hydrophobic surfaces. Generally, no slip is expected on a hydrophilic surface, while some slippage may exist on a hydrophobic surface. Our results show that on both hydrophilic and moderately hydrophobic (contact angle ~85°) surfaces the slip length is ~10-15 nm or lower and, within the limitations of the experiments and the model, indistinguishable from zero.
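The core observable is the cross-correlation of the two intensity traces. The sketch below computes a normalized cross-correlation from two synthetic, time-shifted photon-count signals; the signal model and sampling are placeholders, and in the thesis the curves are fitted with Brownian Dynamics simulations rather than an analytical expression.

```python
# Hedged sketch: normalized cross-correlation G(tau) of two fluorescence
# intensity traces from laterally shifted observation volumes. The synthetic
# signals below are placeholders for measured photon counts.
import numpy as np

def cross_correlation(i1, i2, max_lag):
    """G(tau) = <dI1(t) dI2(t+tau)> / (<I1><I2>) for tau = 0..max_lag-1."""
    d1 = i1 - i1.mean()
    d2 = i2 - i2.mean()
    denom = i1.mean() * i2.mean()
    return np.array([np.mean(d1[:len(d1) - k] * d2[k:]) / denom
                     for k in range(max_lag)])

# Synthetic example: the downstream volume sees the same tracers ~50 samples later
rng = np.random.default_rng(0)
i1 = rng.poisson(5.0, 100_000).astype(float)
i2 = np.roll(i1, 50) + rng.poisson(1.0, 100_000)
g = cross_correlation(i1, i2, max_lag=200)
print("peak lag =", int(g.argmax()), "samples")   # ~50, i.e. the transit time
```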
Abstract:
The upgrade of the Mainz Mikrotron (MAMI) electron accelerator facility in 2007, which raised the beam energy to 1.5 GeV, provides the opportunity to study strangeness production channels through electromagnetic processes. The Kaon Spectrometer (KAOS), operated by the A1 Collaboration, enables the efficient detection of the kaons associated with strangeness electroproduction. Used as a single-arm spectrometer, it can be combined with the existing high-resolution spectrometers for exclusive measurements in the kinematic domain accessible to them.

For studying hypernuclear production in the $^{A}Z(e,e'K^{+})\,_{\Lambda}^{A}(Z-1)$ reaction, the detection of electrons at very forward angles is needed. Therefore, the use of KAOS as a double-arm spectrometer for the simultaneous detection of kaons and electrons is mandatory. The electron arm thus had to be equipped with a new detector package offering high counting-rate capability and high granularity for good spatial resolution. To this end, a new state-of-the-art scintillating fiber hodoscope has been developed as an electron detector.

The hodoscope is made of two planes with a total of 18432 scintillating double-clad fibers of 0.83 mm diameter. Each plane is formed by 72 modules, and each module is formed from a 60° slanted multi-layer bundle in which 4 fibers of a tilted column are connected to a common read-out. The read-out uses 32-channel linear-array multianode photomultipliers. Signal processing makes use of newly developed double-threshold discriminators, whose discriminated signals are sent in parallel to dead-time-free time-to-digital modules and to logic modules for triggering purposes.

Two fiber modules were tested with a carbon beam at GSI, showing a time resolution of 220 ps (FWHM) and a position residual of 270 µm (FWHM) with a detection efficiency ε > 99%.

The characterization of the spectrometer arm has been achieved through simulations calculating the transfer matrix of track parameters from the fiber-detector focal plane to the primary vertex. This transfer matrix has been calculated to first order using beam transport optics and has been checked by quasielastic scattering off a carbon target, where the full kinematics is determined by measuring the recoil proton momentum. The reconstruction accuracy for the emission parameters at the quasielastic vertex was found to be on the order of 0.3% in the first tests performed.

The design, construction, commissioning, testing and characterization of the fiber hodoscope, carried out at the Institut für Kernphysik of the Johannes Gutenberg-Universität Mainz, are presented in this work.
Abstract:
Coarse graining is a popular technique used in physics to speed up the computer simulation of molecular fluids. An essential part of this technique is a method that solves the inverse problem of determining the interaction potential or its parameters from given structural data. Because of discrepancies between model and reality, the potential is not unique, so that the stability of such a method and its convergence to a meaningful solution are issues.

In this work, we investigate empirically whether coarse graining can be improved by applying the theory of inverse problems from applied mathematics. In particular, we use singular value analysis to reveal the weak interaction parameters, which have a negligible influence on the structure of the fluid and which cause the non-uniqueness of the solution. Further, we apply a regularizing Levenberg-Marquardt method, which is stable against the discrepancies mentioned above. We then compare it to the existing physical methods, the Iterative Boltzmann Inversion and the Inverse Monte Carlo method, which are fast and well adapted to the problem but sometimes have convergence problems.

From an analysis of the Iterative Boltzmann Inversion, we derive a meaningful approximation of the structure and use it to construct a modification of the Levenberg-Marquardt method. We employ the latter to reconstruct the interaction parameters from experimental data for liquid argon and nitrogen, and we show that the modified method is stable, convergent and fast. Further, the singular value analysis of the structure and its approximation makes it possible to determine the crucial interaction parameters, that is, to simplify the modeling of interactions. Our results therefore build a rigorous bridge between the inverse problem from physics and the powerful solution tools from mathematics.
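For illustration, the sketch below implements a regularized Levenberg-Marquardt update of the general kind described, applied to a toy "structure" (a single Gaussian peak standing in for a radial distribution function); the model, parameters and damping value are illustrative assumptions, not the modified method developed in this work.

```python
# Hedged sketch: a damped (Levenberg-Marquardt) Gauss-Newton step for matching
# a target structure. g_model is a toy stand-in for a simulated RDF.
import numpy as np

def lm_step(p, residual, jacobian, lam):
    """Solve (J^T J + lam*I) dp = -J^T r and return the updated parameters."""
    J = jacobian(p)
    r = residual(p)
    A = J.T @ J + lam * np.eye(len(p))
    return p + np.linalg.solve(A, -J.T @ r)

def g_model(p, r=np.linspace(0.5, 3.0, 60)):
    height, r0 = p                       # toy "interaction parameters"
    return 1.0 + height * np.exp(-10.0 * (r - r0) ** 2)

p_true = np.array([0.8, 1.1])
g_target = g_model(p_true)
residual = lambda p: g_model(p) - g_target
jacobian = lambda p, h=1e-6: np.column_stack(
    [(residual(p + h * e) - residual(p)) / h for e in np.eye(2)])

p = np.array([0.4, 1.3])                 # initial guess
for _ in range(25):
    p = lm_step(p, residual, jacobian, lam=1e-2)
print("recovered parameters:", p)        # should approach [0.8, 1.1]
```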
Abstract:
Seventeen bones (sixteen cadaveric bones and one plastic bone) were used to validate a method for reconstructing a surface model of the proximal femur from 2D X-ray radiographs and a statistical shape model constructed from thirty training surface models. Unlike previously introduced validation studies, where surface-based distance errors were used to evaluate the reconstruction accuracy, here we propose to use errors measured on clinically relevant morphometric parameters. For this purpose, a program was developed to robustly extract these morphometric parameters from the thirty training surface models (the training population), from the seventeen surface models reconstructed from X-ray radiographs, and from the seventeen ground-truth surface models obtained either by a CT-scan reconstruction method or by a laser-scan reconstruction method. A statistical analysis was then performed to classify the seventeen test bones into two categories, normal cases and outliers, depending on the measured parameters of the particular test bone: if all parameters of a test bone were covered by the training population's parameter ranges, the bone was classified as a normal bone, otherwise as an outlier. Our experimental results showed that statistically there was no significant difference between the morphometric parameters extracted from the reconstructed surface models of the normal cases and those extracted from the reconstructed surface models of the outliers. Therefore, our statistical shape model based reconstruction technique can be used to reconstruct not only the surface model of a normal bone but also that of an outlier bone.
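The classification rule stated above reduces to a simple range check; the following sketch implements it, with hypothetical parameter values used only for the example.

```python
# Hedged sketch of the normal/outlier rule: a test bone is "normal" if every
# morphometric parameter lies within the range spanned by the training
# population, otherwise it is an "outlier". Parameter values are illustrative.
import numpy as np

def classify(test_params, training_params):
    """training_params: (n_bones, n_params) array; test_params: (n_params,)."""
    lo = training_params.min(axis=0)
    hi = training_params.max(axis=0)
    inside = np.all((test_params >= lo) & (test_params <= hi))
    return "normal" if inside else "outlier"

# Example with three hypothetical parameters (neck length, head radius, CCD angle)
training = np.array([[48.0, 23.0, 125.0],
                     [55.0, 26.5, 138.0],
                     [51.0, 24.8, 131.0]])
print(classify(np.array([50.0, 24.0, 128.0]), training))  # -> normal
print(classify(np.array([60.0, 24.0, 128.0]), training))  # -> outlier
```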
Abstract:
Background: The estimation of demographic parameters from genetic data often requires the computation of likelihoods. However, the likelihood function is computationally intractable for many realistic evolutionary models, and the use of Bayesian inference has therefore been limited to very simple models. The situation changed recently with the advent of Approximate Bayesian Computation (ABC) algorithms, which allow one to obtain parameter posterior distributions based on simulations, without requiring likelihood computations. Results: Here we present ABCtoolbox, a series of open-source programs to perform Approximate Bayesian Computation (ABC). It implements various ABC algorithms, including rejection sampling, MCMC without likelihood, a particle-based sampler, and ABC-GLM. ABCtoolbox is bundled with, but not limited to, a program that allows parameter inference in a population genetics context and the simultaneous use of different types of markers with different ploidy levels. In addition, ABCtoolbox can interact with most simulation and summary-statistics computation programs. The usability of ABCtoolbox is demonstrated by inferring the evolutionary history of two evolutionary lineages of Microtus arvalis. Using nuclear microsatellites and mitochondrial sequence data in the same estimation procedure enabled us to infer sex-specific population sizes and migration rates and to find that males show smaller population sizes but much higher levels of migration than females. Conclusion: ABCtoolbox allows a user to perform all the necessary steps of a full ABC analysis, from parameter sampling from prior distributions, through data simulation, computation of summary statistics, estimation of posterior distributions, model choice, and validation of the estimation procedure, to visualization of the results.
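Of the algorithms listed, rejection sampling is the simplest; a minimal generic sketch is shown below. ABCtoolbox itself drives external simulation and summary-statistics programs, so the Python callables and the toy inference problem here are stand-ins.

```python
# Hedged sketch of ABC rejection sampling: keep parameter draws whose simulated
# summary statistics fall within a tolerance of the observed statistics.
import numpy as np

def abc_rejection(observed_stats, prior_sampler, simulate, n_sims, tolerance):
    accepted = []
    for _ in range(n_sims):
        theta = prior_sampler()                      # draw from the prior
        stats = simulate(theta)                      # simulate summary statistics
        if np.linalg.norm(stats - observed_stats) < tolerance:
            accepted.append(theta)
    return np.array(accepted)                        # approximate posterior sample

# Toy example: infer the mean of a normal distribution from its sample mean
rng = np.random.default_rng(1)
obs = np.array([rng.normal(2.0, 1.0, 100).mean()])
posterior = abc_rejection(
    observed_stats=obs,
    prior_sampler=lambda: rng.uniform(-5, 5),
    simulate=lambda mu: np.array([rng.normal(mu, 1.0, 100).mean()]),
    n_sims=20_000,
    tolerance=0.1)
print("posterior mean ~", posterior.mean())
```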
Abstract:
In this study, the effect of time derivatives of flow rate and rotational speed on the mathematical modeling of a rotary blood pump (RBP) was investigated. The basic model estimates the pressure head of the pump as a dependent variable, using measured flow and speed as predictive variables. The performance of the model was evaluated after adding time-derivative terms for flow and speed. First, to create a realistic working condition, the Levitronix CentriMag RBP was implanted in a sheep. All parameters of the model were physically measured and digitally acquired over a wide range of conditions, including pulsatile speed. Second, a statistical analysis of the different variables (flow, speed, and their time derivatives) based on multiple regression analysis was performed to determine the significant variables for pressure head estimation. Finally, different mathematical models were used to show the effect of the time-derivative terms on the performance of the models. To evaluate how well the pressure head estimated by the different models fits the measured pressure head, the root mean square error and the correlation coefficient were used. The results indicate that including time derivatives of flow and speed can improve model accuracy, but only minimally.
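A least-squares sketch of such a pressure-head model, with time-derivative terms included, is given below; the particular regressor terms (quadratic in flow and speed) and the units are assumptions for illustration, not the exact model of the study.

```python
# Hedged sketch: ordinary least-squares fit of pressure head dP from flow Q,
# speed w, and their time derivatives; the basic model would drop dQ/dt, dw/dt.
import numpy as np

def fit_head_model(t, Q, w, dP):
    """Return regression coefficients, RMSE and correlation coefficient."""
    dQdt = np.gradient(Q, t)
    dwdt = np.gradient(w, t)
    # regressors: 1, Q, Q^2, w^2, dQ/dt, dw/dt (illustrative choice of terms)
    X = np.column_stack([np.ones_like(Q), Q, Q ** 2, w ** 2, dQdt, dwdt])
    coeffs, *_ = np.linalg.lstsq(X, dP, rcond=None)
    dP_hat = X @ coeffs
    rmse = np.sqrt(np.mean((dP - dP_hat) ** 2))
    r = np.corrcoef(dP, dP_hat)[0, 1]
    return coeffs, rmse, r

# Usage with measured arrays t [s], Q [L/min], w [rpm], dP [mmHg]:
# coeffs, rmse, r = fit_head_model(t, Q, w, dP)
```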
Abstract:
Shear-wave splitting can be a useful technique for determining crustal stress fields in volcanic settings and the temporal variations associated with activity. Splitting parameters were determined for a subset of local earthquakes recorded at Yellowstone from 2000 to 2010. The analysis was automated using an unsupervised cluster analysis technique to determine optimum splitting parameters from 270 analysis windows for each event. Six stations clearly exhibit preferential fast polarization values sub-orthogonal to the direction of minimum horizontal compression. Yellowstone deformation results in a local crustal stress field that differs from the regional field dominated by NE-SW extension, and the fast directions reflect this difference, rotating around the caldera while remaining perpendicular to the rim. One station exhibits temporal variations concordant with identified periods of caldera subsidence and uplift. From the splitting measurements, we calculated a crustal anisotropy of ~17-23% and a crack density of ~0.12-0.17, possibly resulting from stress-aligned fluid-filled microcracks in the upper crust and an active hydrothermal system.
Abstract:
BACKGROUND: Exercise capacity after heart transplantation (HTx) remains limited despite normal left ventricular systolic function of the allograft. Various clinical and haemodynamic parameters are predictive of exercise capacity following HTx. However, the predictive significance of chronotropic competence has not been demonstrated unequivocally, despite its immediate relevance for cardiac output. AIMS: This study assesses the predictive value of various clinical and haemodynamic parameters for exercise capacity in HTx recipients with complete chronotropic competence evolving within the first 6 postoperative months. METHODS: 51 patients were enrolled in this exercise study. Patients were included when at least 6 months after HTx and without negative chronotropic medication or factors limiting exercise capacity, such as significant transplant vasculopathy or allograft rejection. Clinical parameters were obtained by chart review, haemodynamic parameters from the current cardiac catheterisation, and exercise capacity was assessed by treadmill stress testing. A stepwise multiple regression model analysed the proportion of the variance explained by the predictive parameters. RESULTS: The mean age of these 51 HTx recipients was 55.4 +/- 13.2 years at inclusion, 42 patients were male, and the mean time interval after cardiac transplantation was 5.1 +/- 2.8 years. Five independent predictors explained 47.5% of the variance observed for peak exercise capacity (adjusted R2 = 0.475). In detail, heart rate response explained 31.6%, male gender 5.2%, age 4.1%, pulmonary vascular resistance 3.7%, and body-mass index 2.9%. CONCLUSION: Heart rate response is one of the most important predictors of exercise capacity in HTx recipients with complete chronotropic competence and without relevant transplant vasculopathy or acute allograft rejection.
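As an illustration of the kind of stepwise analysis described (not the authors' actual implementation), the sketch below performs a greedy forward selection scored by adjusted R^2; the predictor names echo the abstract but the data structures are assumed.

```python
# Hedged sketch: forward stepwise selection of predictors by adjusted R^2.
import numpy as np

def adjusted_r2(X, y):
    """Adjusted R^2 of an ordinary least-squares fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    ss_res = np.sum((y - X1 @ beta) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    n, p = X1.shape
    return 1.0 - (ss_res / (n - p)) / (ss_tot / (n - 1))

def forward_stepwise(predictors, y):
    """predictors: dict of name -> 1-D array; add predictors greedily."""
    selected, best_score = [], -np.inf
    while len(selected) < len(predictors):
        scores = {
            name: adjusted_r2(
                np.column_stack([predictors[s] for s in selected] + [col]), y)
            for name, col in predictors.items() if name not in selected
        }
        name = max(scores, key=scores.get)
        if scores[name] <= best_score:
            break                        # no further improvement
        selected.append(name)
        best_score = scores[name]
        print(f"+ {name}: adjusted R^2 = {best_score:.3f}")
    return selected

# Usage (arrays hrr, sex, age, pvr, bmi, peak_capacity from the study data):
# forward_stepwise({"heart_rate_response": hrr, "male_gender": sex,
#                   "age": age, "PVR": pvr, "BMI": bmi}, peak_capacity)
```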
Abstract:
BACKGROUND: Published individual-based, dynamic sexual network modelling studies reach different conclusions about the population impact of screening for Chlamydia trachomatis. The objective of this study was to conduct a direct comparison of the effect of organised chlamydia screening in different models. METHODS: Three models simulating population-level sexual behaviour, chlamydia transmission, screening and partner notification were used. Parameters describing a hypothetical annual opportunistic screening program in 16-24 year olds were standardised, whereas the other parameters from the three original studies were retained. Model predictions of the change in chlamydia prevalence were compared under a range of scenarios. RESULTS: Initial overall chlamydia prevalence rates were similar in women but not in men, and there were age- and sex-specific differences between models. The number of screening tests carried out was comparable in all models, but there were large differences in the predicted impact of screening. After 10 years of screening, the predicted reduction in chlamydia prevalence in women aged 16-44 years ranged from 4% to 85%. Screening men and women had a greater impact than screening women alone in all models. There were marked differences between models in assumptions about treatment seeking and sexual behaviour before the start of the screening intervention. CONCLUSIONS: Future models of chlamydia transmission should be fitted to both incidence and prevalence data. This meta-modelling study provides essential information for explaining differences between published studies and for increasing the utility of individual-based chlamydia transmission models for policy making.
Abstract:
DCE-MRI is an important technique in the study of small animal cancer models because its sensitivity to vascular changes opens the possibility of quantitative assessment of early therapeutic response. However, extraction of physiologically descriptive parameters from DCE-MRI data relies upon measurement of the vascular input function (VIF), which represents the contrast agent concentration time course in the blood plasma. This is difficult in small animal models due to artifacts associated with partial volume, inflow enhancement, and the limited temporal resolution achievable with MR imaging. In this work, the development of a suite of techniques for high temporal resolution, artifact resistant measurement of the VIF in mice is described. One obstacle in VIF measurement is inflow enhancement, which decreases the sensitivity of the MR signal to the presence of contrast agent. Because the traditional techniques used to suppress inflow enhancement degrade the achievable spatiotemporal resolution of the pulse sequence, improvements can be achieved by reducing the time required for the suppression. Thus, a novel RF pulse which provides spatial presaturation contemporaneously with the RF excitation was implemented and evaluated. This maximizes the achievable temporal resolution by removing the additional RF and gradient pulses typically required for suppression of inflow enhancement. A second challenge is achieving the temporal resolution required for accurate characterization of the VIF, which exceeds what can be achieved with conventional imaging techniques while maintaining adequate spatial resolution and tumor coverage. Thus, an anatomically constrained reconstruction strategy was developed that allows for sampling of the VIF at extremely high acceleration factors, permitting capture of the initial pass of the contrast agent in mice. Simulation, phantom, and in vivo validation of all components were performed. Finally, the two components were used to perform VIF measurement in the murine heart. An in vivo study of the VIF reproducibility was performed, and an improvement in the measured injection-to-injection variation was observed. This will lead to improvements in the reliability of quantitative DCE-MRI measurements and increase their sensitivity.
Abstract:
BACKGROUND: The incidence of hepatitis C virus (HCV) infection and hepatocellular carcinoma (HCC) is increasing. The purpose of this study was to establish baseline survival in a medically underserved population and to evaluate the effect of HCV seropositivity on our patient population. MATERIALS AND METHODS: We reviewed clinicopathologic parameters from a prospective tumor registry and medical records from the Harris County Hospital District (HCHD). Outcomes were compared using Kaplan-Meier survival analysis and log-rank tests. RESULTS: A total of 298 HCC patients were identified. The median survival for the entire cohort was 3.4 mo. There was no difference in survival between the HCV-seropositive and HCV-seronegative groups (3.6 mo versus 2.6 mo, P = 0.7). Patients with a survival <1 mo had a significant increase in α-fetoprotein (AFP), international normalized ratio (INR), model for end-stage liver disease (MELD) score, and total bilirubin, and a decrease in albumin, compared with patients with a survival ≥1 mo. CONCLUSIONS: Survival for HCC patients in the HCHD is extremely poor compared with the anticipated median survival of 7 mo reported in other studies. HCV-seropositive patients have no survival advantage over HCV-seronegative patients. Poorer liver function at diagnosis appears to be related to shorter survival. Further analysis of the variables contributing to decreased survival is needed.
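The survival comparison described (Kaplan-Meier estimates and a log-rank test) can be sketched with the lifelines package as shown below; the data-frame column names are hypothetical, not those of the registry.

```python
# Hedged sketch: Kaplan-Meier curves and a log-rank test comparing HCV+ and
# HCV- groups. Column names of the registry data frame are assumed.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def compare_survival(df: pd.DataFrame):
    pos = df[df["hcv_seropositive"]]
    neg = df[~df["hcv_seropositive"]]
    kmf = KaplanMeierFitter()
    for label, group in [("HCV+", pos), ("HCV-", neg)]:
        kmf.fit(group["survival_months"], event_observed=group["deceased"],
                label=label)
        print(label, "median survival:", kmf.median_survival_time_, "mo")
    result = logrank_test(pos["survival_months"], neg["survival_months"],
                          event_observed_A=pos["deceased"],
                          event_observed_B=neg["deceased"])
    print("log-rank p =", result.p_value)
```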
Abstract:
A patient classification system was developed integrating a patient acuity instrument with a computerized nursing distribution method based on a linear programming model. The system was designed for real-time measurement of patient acuity (workload) and allocation of nursing personnel to optimize the utilization of resources.

The acuity instrument was a prototype tool with eight categories of patients defined by patient severity and nursing intensity parameters. From this tool, the demand for nursing care was defined in patient points, with one point equal to one hour of RN time. Validity and reliability of the instrument were determined as follows: (1) content validity by a panel of expert nurses; (2) predictive validity through a paired t-test analysis of preshift and postshift categorization of patients; (3) initial reliability by a one-month pilot of the instrument in a practice setting; and (4) interrater reliability by the Kappa statistic.

The nursing distribution system was a linear programming model using a branch-and-bound technique for obtaining integer solutions. The objective function was to minimize the total number of nursing personnel used by optimally assigning the staff to meet the acuity needs of the units. A penalty weight was used as a coefficient of the objective function variables to define priorities for the allocation of staff.

The demand constraints were requirements to meet the total acuity points needed for each unit and to have a minimum number of RNs on each unit. The supply constraints were: (1) the total availability of each type of staff and the value of that staff member, where value was determined relative to that type of staff's ability to perform the job function of an RN (i.e., value for eight hours: RN = 8 points, LVN = 6 points); (2) the number of personnel available for floating between units.

The capability of the model to assign staff quantitatively and qualitatively equal to the manual method was established by a thirty-day comparison. Sensitivity testing demonstrated appropriate adjustment of the optimal solution to changes in the penalty coefficients in the objective function and to the acuity totals in the demand constraints.

Further investigation of the model documented correct adjustment of assignments in response to staff value changes, and cost minimization by the addition of a dollar coefficient to the objective function.
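A continuous-relaxation sketch of an allocation model of this general form is shown below (the original model used branch-and-bound to obtain integer staffing levels). Two units and two staff types are included; the acuity points, staff values (RN = 8 points, LVN = 6 points per eight-hour shift) and penalty weights are illustrative.

```python
# Hedged sketch: LP relaxation of a nursing-allocation model. Variables are
# [RN_unit1, RN_unit2, LVN_unit1, LVN_unit2]; constraints cover unit acuity
# demand, minimum RNs per unit, and staff supply. Values are illustrative.
from scipy.optimize import linprog

penalty = [1.0, 1.0, 1.2, 1.2]           # objective: minimize weighted staff count
A_ub = [
    [-8,  0, -6,  0],                    # unit 1 acuity: 8*RN + 6*LVN >= 40
    [ 0, -8,  0, -6],                    # unit 2 acuity: 8*RN + 6*LVN >= 28
    [-1,  0,  0,  0],                    # unit 1 minimum RNs >= 2
    [ 0, -1,  0,  0],                    # unit 2 minimum RNs >= 2
    [ 1,  1,  0,  0],                    # RN supply  <= 6
    [ 0,  0,  1,  1],                    # LVN supply <= 4
]
b_ub = [-40, -28, -2, -2, 6, 4]
res = linprog(penalty, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
print(res.x)                             # staff assigned to each unit and type
```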
Abstract:
Environmental data sets of pollutant concentrations in air, water, and soil frequently include unquantified sample values reported only as being below the analytical method detection limit. These values, referred to as censored values, should be considered in the estimation of distribution parameters, as each represents some value of pollutant concentration between zero and the detection limit. Most of the currently accepted methods for estimating the population parameters of environmental data sets containing censored values rely upon the assumption of an underlying normal (or transformed normal) distribution. This assumption can result in unacceptable levels of error in parameter estimation due to the unbounded left tail of the normal distribution. With the beta distribution, which is bounded by the same range as a distribution of concentrations, $[0 \le x \le 1]$, parameter estimation errors resulting from improper distribution bounds are avoided. This work developed a method that uses the beta distribution to estimate population parameters from censored environmental data sets and evaluated its performance in comparison to currently accepted methods that rely upon an underlying normal (or transformed normal) distribution. Data sets were generated assuming typical values encountered in environmental pollutant evaluation for the mean, standard deviation, and number of variates. For each set of model values, data sets were generated assuming that the data were distributed normally, lognormally, or according to a beta distribution. For varying levels of censoring, two established methods of parameter estimation, regression on normal ordered statistics and regression on lognormal ordered statistics, were used to estimate the known mean and standard deviation of each data set. The method developed for this study, employing a beta distribution assumption, was also used to estimate parameters, and the relative accuracy of all three methods was compared. For data sets of all three distribution types, and for censoring levels up to 50%, the performance of the new method equaled, if not exceeded, that of the two established methods. Because of its robustness in parameter estimation regardless of distribution type or censoring level, the method employing the beta distribution should be considered for full development in estimating parameters for censored environmental data sets.
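To make the distributional idea concrete, the sketch below estimates beta-distribution parameters from a left-censored sample by maximum likelihood, with censored observations contributing the CDF evaluated at the detection limit; this is a generic illustration under assumed data, not the estimation method developed in the cited work.

```python
# Hedged sketch: MLE of beta parameters with left-censored data. Detected
# values contribute the log-density; censored values contribute log CDF(DL).
import numpy as np
from scipy import stats, optimize

def neg_log_likelihood(params, detected, n_censored, detection_limit):
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf                    # keep the optimizer in the valid region
    ll = np.sum(stats.beta.logpdf(detected, a, b))
    ll += n_censored * stats.beta.logcdf(detection_limit, a, b)
    return -ll

# Example: concentrations rescaled to [0, 1], detection limit 0.05
rng = np.random.default_rng(2)
data = rng.beta(2.0, 8.0, size=200)
dl = 0.05
detected = data[data >= dl]
res = optimize.minimize(neg_log_likelihood, x0=[1.0, 1.0],
                        args=(detected, int(np.sum(data < dl)), dl),
                        method="Nelder-Mead")
print("estimated (a, b):", res.x)        # population mean follows as a/(a+b)
```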