993 results for Measurements models
Abstract:
RNAi is an important tool for the functional analysis of genes and holds great potential for therapeutic applications. Although efficient knockdowns are achieved in cell culture, in vivo application has proven difficult. The major hurdles are the delivery of the siRNA to the target tissue and its progressive degradation.
Labelled siRNA can be used both to measure its own integrity and for localization. Two dyes at the respective 3'- and 5'-ends of the sense and antisense strands create a robust FRET system (Hirsch et al. 2012). The ratio of FRET to donor signal, the R/G ratio, serves as a sensitive classifier of the integrity level of an siRNA sample (Järve et al. 2007; Hirsch et al. 2011; Kim et al. 2010). With this system, degradation of less than 5% can be detected in the cuvette and in cells.
The present work evaluates potential FRET dye pairs with regard to their suitability for in vitro and in vivo applications. A wide variety of FRET pairs covering the entire visible spectrum were evaluated, now allowing the selection of a suitable pair for a given application or for combination with other dyes.
Using Alexa555/Atto647N siRNA, successful encapsulation of siRNA in liposomes was observed. A subsequent evaluation of RNase protection showed excellent protective properties for liposomes, nanohydrogels and cationic peptides. Based on these results, these and other delivery systems can now be optimized for cellular uptake.
Atto488/Atto590 showed the best properties for real-time integrity measurements in live-cell microscopy. Reduced photobleaching and minimal spectral cross-talk made it possible to observe transfected cells over periods of up to 8 hours. Using Atto488/Atto590 siRNA, delivery into and release within cells were studied in real time, and release and distribution could be observed and analysed in individual cells.
An initial phase with a high release rate was followed by a phase with a lower rate for the remainder of the observation period. The average residence times in the cytosol were 24 and 58 minutes, allowing long- and short-lasting events to be distinguished. Although import of siRNA into the nucleus was observed, no pattern or precise time point relative to the transfection period could be determined for these events. The observed release processes occurred sporadically, and changes in the cellular distribution took place within a few minutes. Once released, siRNA disappeared from the cytosol over time, leaving behind only small aggregates of siRNA with still low integrity.
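As a minimal illustration of the R/G readout described above (a sketch only; the intensity values, calibration ratios and the linear two-state mixing assumption are hypothetical, not the calibration used in the cited work):

```python
import numpy as np

def rg_ratio(donor_signal, fret_signal):
    """Ratio of acceptor (FRET) emission to donor emission (the "R/G ratio").

    In an intact duplex the dyes are close, so the FRET channel dominates;
    degradation separates the dyes, the donor channel recovers and the
    ratio drops.
    """
    return np.asarray(fret_signal, dtype=float) / np.asarray(donor_signal, dtype=float)

def fraction_intact(rg_sample, rg_intact, rg_degraded):
    """Linear two-state estimate of the intact fraction from calibration
    ratios of a fully intact and a fully degraded reference sample
    (simplified; a real calibration may be non-linear)."""
    return np.clip((rg_sample - rg_degraded) / (rg_intact - rg_degraded), 0.0, 1.0)

# Hypothetical example: calibration ratios of 2.0 (intact) and 0.1 (degraded)
print(fraction_intact(rg_ratio(donor_signal=100, fret_signal=150),
                      rg_intact=2.0, rg_degraded=0.1))  # ~0.74
```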
Abstract:
The volcanic aerosol plume resulting from the Eyjafjallajökull eruption in Iceland in April and May 2010 was detected in clear layers above Switzerland during two periods (17–19 April 2010 and 16–19 May 2010). In-situ measurements of the airborne volcanic plume were performed both within ground-based monitoring networks and with a research aircraft up to an altitude of 6000 m a.s.l. The wide range of aerosol and gas phase parameters studied at the high-altitude research station Jungfraujoch (3580 m a.s.l.) allowed for an in-depth characterization of the detected volcanic aerosol. Both the data from the Jungfraujoch and the aircraft vertical profiles showed a consistent volcanic ash mode in the aerosol volume size distribution with a mean optical diameter around 3 ± 0.3 μm. These particles were found to have an average chemical composition very similar to the trachyandesite-like composition of rock samples collected near the volcano. Furthermore, chemical processing of volcanic sulfur dioxide into sulfate clearly contributed to the accumulation mode of the aerosol at the Jungfraujoch. The combination of these in-situ data and plume dispersion modeling results showed that a significant portion of the first volcanic aerosol plume reaching Switzerland on 17 April 2010 did not reach the Jungfraujoch directly, but was first dispersed and diluted in the planetary boundary layer. The maximum PM10 mass concentrations at the Jungfraujoch reached 30 μg m−3 and 70 μg m−3 (10-min mean values) during the April and May episodes, respectively. Even low-altitude monitoring stations registered up to 45 μg m−3 of volcanic ash related PM10 (Basel, Northwestern Switzerland, 18/19 April 2010). The flights with the research aircraft on 17 April 2010 showed number concentrations one order of magnitude higher over the northern Swiss plateau than at the Jungfraujoch, and a mass concentration of 320 (200–520) μg m−3 on 18 May 2010 over the northwestern Swiss plateau. The presented data contributed significantly to the time-critical assessment of the local ash layer properties during the initial eruption phase. Furthermore, dispersion models benefited from the detailed information on the volcanic aerosol size distribution and its chemical composition.
Abstract:
The relative abundance of the heavy water isotopologue HDO provides a deeper insight into the atmospheric hydrological cycle. The SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY (SCIAMACHY) allows for global retrievals of the ratio HDO/H2O in the 2.3 micron wavelength range. However, the spectroscopy of water lines in this region remains a large source of uncertainty for these retrievals. We therefore evaluate and improve the water spectroscopy in the range 4174–4300 cm−1 and test whether this reduces systematic uncertainties in the SCIAMACHY retrievals of HDO/H2O. We use a laboratory spectrum of water vapour to fit line intensity, air broadening and wavelength shift parameters. The improved spectroscopy is tested on a series of ground-based high resolution FTS spectra as well as on SCIAMACHY retrievals of H2O and the ratio HDO/H2O. We find that the improved spectroscopy leads to lower residuals in the FTS spectra compared to HITRAN 2008 and Jenouvrier et al. (2007) spectroscopy, and the retrievals become more robust against changes in the retrieval window. For both the FTS and SCIAMACHY measurements, the retrieved total H2O columns decrease by 2–4% and we find a negative shift of the HDO/H2O ratio, which for SCIAMACHY is partly compensated by changes in the retrieval setup and calibration software. The updated SCIAMACHY HDO/H2O product shows somewhat steeper latitudinal and temporal gradients and a steeper Rayleigh distillation curve, strengthening previous conclusions that current isotope-enabled general circulation models underestimate the variability in the near-surface HDO/H2O ratio.
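As a rough sketch of the line-parameter fitting step (the spectral window, noise level and the use of a plain Lorentzian instead of a Voigt profile are simplifying assumptions; a real retrieval fits many lines through a full forward model):

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian_absorbance(nu, intensity, gamma_air, nu_shift, nu0=4200.0):
    """Single absorption line: area-normalised Lorentzian scaled by the line
    intensity, centred at nu0 plus a pressure-induced shift (all in cm^-1).
    A Voigt profile would be used in practice; this is a simplification."""
    nu_c = nu0 + nu_shift
    return intensity * (gamma_air / np.pi) / ((nu - nu_c) ** 2 + gamma_air ** 2)

# Hypothetical laboratory spectrum around a single line near 4200 cm^-1
nu = np.linspace(4199.0, 4201.0, 400)
true = lorentzian_absorbance(nu, intensity=0.8, gamma_air=0.07, nu_shift=-0.01)
observed = true + np.random.default_rng(0).normal(0, 0.005, nu.size)

# Fit line intensity, air-broadening width and wavelength shift
popt, pcov = curve_fit(lorentzian_absorbance, nu, observed, p0=[1.0, 0.05, 0.0])
intensity_fit, gamma_fit, shift_fit = popt
print(intensity_fit, gamma_fit, shift_fit)
```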
Abstract:
Traffic particle concentrations show considerable spatial variability within a metropolitan area. We consider latent variable semiparametric regression models for modeling the spatial and temporal variability of black carbon and elemental carbon concentrations in the greater Boston area. Measurements of these pollutants, which are markers of traffic particles, were obtained from several individual exposure studies conducted at specific household locations as well as from 15 ambient monitoring sites in the city. The models allow for both flexible, nonlinear effects of covariates and for unexplained spatial and temporal variability in exposure. In addition, the different individual exposure studies recorded different surrogates of traffic particles, with some recording only outdoor concentrations of black or elemental carbon, some recording indoor concentrations of black carbon, and others recording both indoor and outdoor concentrations of black carbon. A joint model for outdoor and indoor exposure that specifies a spatially varying latent variable provides greater spatial coverage in the area of interest. We propose a penalised spline formulation of the model that relates to generalised kriging of the latent traffic pollution variable and leads to a natural Bayesian Markov chain Monte Carlo algorithm for model fitting. We propose methods that allow us to control the degrees of freedom of the smoother in a Bayesian framework. Finally, we present results from an analysis that applies the model to data from summer and winter separately.
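A minimal sketch of the penalised-spline idea underlying the model, assuming a simple truncated-line basis with a ridge penalty whose strength sets the effective degrees of freedom; the latent spatial variables and the Bayesian MCMC fitting described above are omitted:

```python
import numpy as np

def pspline_fit(x, y, n_knots=20, lam=1.0):
    """Penalised spline smoother with a linear truncated power basis.

    Knot coefficients are ridge-penalised by lam; a larger lam gives a
    smoother fit and fewer effective degrees of freedom
    (df = trace of the hat matrix)."""
    knots = np.quantile(x, np.linspace(0, 1, n_knots + 2)[1:-1])
    X = np.column_stack([np.ones_like(x), x] +
                        [np.clip(x - k, 0, None) for k in knots])
    D = np.diag([0.0, 0.0] + [1.0] * n_knots)       # penalise knot terms only
    A = X.T @ X + lam * D
    beta = np.linalg.solve(A, X.T @ y)
    df = np.trace(X @ np.linalg.solve(A, X.T))      # effective degrees of freedom
    return X @ beta, df

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + rng.normal(0, 0.3, x.size)
fit, df = pspline_fit(x, y, lam=5.0)
print(round(df, 1))
```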
Abstract:
In evaluating the accuracy of diagnostic tests, it is common to apply two imperfect tests jointly or sequentially to a study population. In a recent meta-analysis of the accuracy of microsatellite instability testing (MSI) and traditional mutation analysis (MUT) in predicting germline mutations of the mismatch repair (MMR) genes, a Bayesian approach (Chen, Watson, and Parmigiani 2005) was proposed to handle missing data resulting from partial testing and the lack of a gold standard. In this paper, we demonstrate an improved estimation of the sensitivities and specificities of MSI and MUT by using a nonlinear mixed model and a Bayesian hierarchical model, both of which account for the heterogeneity across studies through study-specific random effects. The methods can be used to estimate the accuracy of two imperfect diagnostic tests in other meta-analyses when the prevalence of disease, the sensitivities and/or the specificities of the diagnostic tests are heterogeneous among studies. Furthermore, simulation studies have demonstrated the importance of carefully selecting appropriate random effects on the estimation of diagnostic accuracy measurements in this scenario.
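For orientation, a small sketch of the joint cell probabilities for two imperfect tests without a gold standard, assuming conditional independence given true carrier status; the models above build a likelihood on such cells and add study-specific random effects (the values below are hypothetical):

```python
import numpy as np

def joint_cell_probs(prev, se_msi, sp_msi, se_mut, sp_mut):
    """2x2 probabilities of (MSI result, MUT result) for one subject, assuming
    the two tests are conditionally independent given true mutation status.
    Rows: MSI +/-, columns: MUT +/-."""
    p_carrier = prev * np.outer([se_msi, 1 - se_msi], [se_mut, 1 - se_mut])
    p_noncarrier = (1 - prev) * np.outer([1 - sp_msi, sp_msi], [1 - sp_mut, sp_mut])
    return p_carrier + p_noncarrier

# Hypothetical study-level prevalence, sensitivities and specificities
probs = joint_cell_probs(prev=0.3, se_msi=0.85, sp_msi=0.90,
                         se_mut=0.70, sp_mut=0.95)
print(probs, probs.sum())   # cell probabilities sum to 1
```

A multinomial likelihood over these cells, with study-specific random effects on the logit of prevalence, sensitivity and specificity, gives the hierarchical structure described above.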
Abstract:
Monte Carlo (GEANT code) generated 6 and 15 MV phase space (PS) data were used to define several simple photon beam models. For creating the PS data, the energy of the electrons hitting the target was tuned to reproduce measured depth dose data. The modeling process used the full PS information within the geometrical boundaries of the beam, including all scattered radiation of the accelerator head; scattered radiation outside the boundaries was neglected. Photons and electrons were assumed to be radiated from point sources. Four different models were investigated, which differed in the way the energies and locations of beam particles in the output plane were determined. Depth dose curves, profiles, and relative output factors were calculated with these models for six field sizes from 5x5 to 40x40 cm2 and compared to measurements. Model 1 uses a photon energy spectrum independent of location in the PS plane and a constant photon fluence in this plane. Model 2 takes into account the spatial particle fluence distribution in the PS plane. A constant fluence is used again in model 3, but the photon energy spectrum depends upon the off-axis position. Model 4, finally, uses both the spatial particle fluence distribution and off-axis-dependent photon energy spectra in the PS plane. Depth dose curves and profiles for field sizes up to 10x10 cm2 were not model sensitive. Good agreement between measured and calculated depth dose curves and profiles for all field sizes was reached for model 4, whereas increasing deviations were found with increasing field size for models 1–3. Large deviations resulted for the profiles of models 2 and 3, because these models overestimate or underestimate the energy fluence at large off-axis distances. Relative output factors consistent with measurements resulted only for model 4.
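A schematic sketch of the difference between models 1 and 4 viewed as samplers of photon position and energy (the fluence profile and spectra below are invented placeholders, not the GEANT phase-space data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical PS-plane description on a radial grid (arbitrary units)
r_bins = np.linspace(0, 20, 21)                 # off-axis radius bins [cm]
fluence = np.exp(-r_bins[:-1] / 15)             # radial particle fluence profile
energy_grid = np.linspace(0.1, 6.0, 60)         # photon energies [MeV]

def spectrum(r):
    """Off-axis-dependent spectrum: the beam softens away from the axis."""
    mean_e = 2.0 * np.exp(-r / 30)
    w = np.exp(-energy_grid / mean_e)
    return w / w.sum()

def sample_photon(model):
    if model == 1:      # constant fluence, single position-independent spectrum
        r = rng.uniform(0, 20)
        e = rng.choice(energy_grid, p=spectrum(0.0))
    elif model == 4:    # measured fluence profile, off-axis-dependent spectrum
        i = rng.choice(len(fluence), p=fluence / fluence.sum())
        r = rng.uniform(r_bins[i], r_bins[i + 1])
        e = rng.choice(energy_grid, p=spectrum(r))
    return r, e

print(sample_photon(1), sample_photon(4))
```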
Abstract:
Sensor networks have been an active research area in the past decade due to the variety of their applications. Many research studies have been conducted to solve the problems underlying the middleware services of sensor networks, such as self-deployment, self-localization, and synchronization. With these middleware services in place, sensor networks have grown into a mature technology used as a detection and surveillance paradigm for many real-world applications. The individual sensors are small in size, so they can be deployed in areas with limited space to make unobstructed measurements in locations that traditional centralized systems would have trouble reaching. However, sensor networks have a few physical limitations that can prevent sensors from performing at their maximum potential: individual sensors have a limited power supply, the wireless band can become very cluttered when multiple sensors try to transmit at the same time, and individual sensors have a limited communication range, so the network may not have a 1-hop communication topology and routing can be a problem in many cases. Carefully designed algorithms can alleviate these physical limitations and allow sensor networks to be utilized to their full potential. Graphical models are an intuitive choice for designing sensor network algorithms. This thesis focuses on a classic application in sensor networks, the detection and tracking of targets. It develops feasible inference techniques for sensor networks using statistical graphical model inference, binary sensor detection, event isolation and dynamic clustering. The main strategy is to use only binary data for rough global inferences and then dynamically form small-scale clusters around the target for detailed computations. This framework is then extended to network topology manipulation, so that it can be applied to tracking in different network topology settings. Finally, the system was tested in both simulation and real-world environments. The simulations were performed on various network topologies, from regularly distributed to randomly distributed networks. The results show that the algorithm performs well in randomly distributed networks and hence requires minimal deployment effort. The experiments were carried out in both corridor and open-space settings. An in-home fall detection system was simulated under real-world conditions: it was set up with 30 Bumblebee radars and 30 ultrasonic sensors driven by TI EZ430-RF2500 boards scanning a typical 800 sqft apartment. The Bumblebee radars were calibrated to detect the falling of a human body, and the two-tier tracking algorithm was used on the ultrasonic sensors to track the location of elderly people.
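A minimal sketch of the two-tier strategy described above, assuming a hypothetical grid of nodes and using a weighted centroid as a stand-in for the detailed cluster computation:

```python
import numpy as np

# Hypothetical 6x6 grid of sensor nodes covering a 30 m x 30 m area
xs = np.linspace(0, 30, 6)
nodes = np.array([(x, y) for x in xs for y in xs])
target = np.array([13.0, 17.0])

# Tier 1: binary detections only (a node fires if the target is within range)
detected = np.linalg.norm(nodes - target, axis=1) < 6.5
rough = nodes[detected].mean(axis=0)            # rough global estimate: centroid

# Tier 2: dynamically cluster the k nodes nearest the rough estimate; these
# would run the detailed, sensor-data-driven computation around the target
k = 5
cluster = nodes[np.argsort(np.linalg.norm(nodes - rough, axis=1))[:k]]
weights = 1.0 / (np.linalg.norm(cluster - target, axis=1) + 1e-6)  # stand-in for range data
refined = (cluster * weights[:, None]).sum(axis=0) / weights.sum()
print(rough, refined)
```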
Abstract:
OBJECTIVES: CD4 cell count and plasma viral load are well known predictors of AIDS and mortality in HIV-1-infected patients treated with combination antiretroviral therapy (cART). This study investigated, in patients treated for at least 3 years, the respective prognostic importance of values measured at cART initiation, and 6 and 36 months later, for AIDS and death. METHODS: Patients from 15 HIV cohorts included in the ART Cohort Collaboration, aged at least 16 years, antiretroviral-naive when they started cART and followed for at least 36 months after start of cART were eligible. RESULTS: Among 14 208 patients, the median CD4 cell counts at 0, 6 and 36 months were 210, 320 and 450 cells/microl, respectively, and 78% of patients achieved viral load less than 500 copies/ml at 6 months. In models adjusted for characteristics at cART initiation and for values at all time points, values at 36 months were the strongest predictors of subsequent rates of AIDS and death. Although CD4 cell count and viral load at cART initiation were no longer prognostic of AIDS or of death after 36 months, viral load at 6 months and change in CD4 cell count from 6 to 36 months were prognostic for rates of AIDS from 36 months. CONCLUSIONS: Although current values of CD4 cell count and HIV-1 RNA are the most important prognostic factors for subsequent AIDS and death rates in HIV-1-infected patients treated with cART, changes in CD4 cell count from 6 to 36 months and the value of 6-month HIV-1 RNA are also prognostic for AIDS.
Abstract:
In this paper two models for the simulation of glucose-insulin metabolism of children with Type 1 diabetes are presented. The models are based on the combined use of Compartmental Models (CMs) and artificial Neural Networks (NNs). Data from children with Type 1 diabetes, stored in a database, have been used as input to the models. The data are taken from four children with Type 1 diabetes and contain information about glucose levels obtained from a continuous glucose monitoring system, insulin intake and food intake, along with the corresponding times. The influence of the administered insulin on plasma insulin concentration, as well as the effect of food intake on glucose input into the blood from the gut, is estimated from the CMs. The outputs of the CMs, along with previous glucose measurements, are fed to an NN, which provides short-term prediction of glucose values. For comparative reasons two different NN architectures have been tested: a Feed-Forward NN (FFNN) trained with the back-propagation algorithm with adaptive learning rate and momentum, and a Recurrent NN (RNN) trained with the Real Time Recurrent Learning (RTRL) algorithm. The results indicate that the best prediction performance can be achieved by the use of the RNN.
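A compact sketch of the final prediction stage, assuming synthetic stand-ins for the compartmental-model outputs and recent glucose history and an off-the-shelf feed-forward network (scikit-learn); the compartmental models themselves and the RNN/RTRL variant are not reproduced:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 500

# Synthetic stand-ins for the CM outputs and recent CGM readings
plasma_insulin = rng.uniform(5, 60, n)          # from the insulin CM (mU/l)
gut_glucose = rng.uniform(0, 8, n)              # glucose rate of appearance from the gut CM
past_glucose = rng.uniform(4, 12, (n, 3))       # last three CGM samples (mmol/l)

X = np.column_stack([plasma_insulin, gut_glucose, past_glucose])
# Toy target: short-term glucose rises with gut input and falls with insulin
y = past_glucose[:, -1] + 0.3 * gut_glucose - 0.05 * plasma_insulin \
    + rng.normal(0, 0.2, n)

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(X[:400], y[:400])
print("held-out R^2:", round(model.score(X[400:], y[400:]), 2))
```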
Abstract:
Fully coupled climate carbon cycle models are sophisticated tools that are used to predict future climate change and its impact on the land and ocean carbon cycles. These models should be able to adequately represent natural variability, requiring model validation by observations. The present study focuses on the ocean carbon cycle component, in particular the spatial and temporal variability in net primary productivity (PP) and export production (EP) of particulate organic carbon (POC). Results from three coupled climate carbon cycle models (IPSL, MPIM, NCAR) are compared with observation-based estimates derived from satellite measurements of ocean colour and results from inverse modelling (data assimilation). Satellite observations of ocean colour have shown that temporal variability of PP on the global scale is largely dominated by the permanently stratified, low-latitude ocean (Behrenfeld et al., 2006), with stronger stratification (higher sea surface temperature; SST) being associated with negative PP anomalies. Results from all three coupled models confirm the role of the low-latitude, permanently stratified ocean for anomalies in globally integrated PP, but only one model (IPSL) also reproduces the inverse relationship between stratification (SST) and PP. An adequate representation of iron and macronutrient co-limitation of phytoplankton growth in the tropical ocean has been shown to be the crucial mechanism determining the capability of the models to reproduce observed interactions between climate and PP.
Abstract:
Accurate estimation of the firn lock-in depth is essential for correctly linking gas and ice chronologies in ice core studies. Here, two approaches to constrain the firn depth evolution in Antarctica are presented over the last deglaciation: outputs of a firn densification model, and measurements of δ15N of N2 in air trapped in ice cores, assuming that δ15N is only affected by gravitational fractionation in the firn column. Since the firn densification process is largely governed by surface temperature and accumulation rate, we have investigated four ice cores drilled in coastal (Berkner Island, BI, and James Ross Island, JRI) and semi-coastal (TALDICE and EPICA Dronning Maud Land, EDML) Antarctic regions. Combined with available ice core air-δ15N measurements from the EPICA Dome C (EDC) site, the studied regions encompass a large range of surface accumulation rates and temperature conditions. Our δ15N profiles reveal a heterogeneous response of the firn structure to glacial–interglacial climatic changes. While firn densification simulations correctly predict TALDICE δ15N variations, they systematically fail to capture the large millennial-scale δ15N variations measured at BI and the δ15N glacial levels measured at JRI and EDML – a mismatch previously reported for central East Antarctic ice cores. New constraints on the EDML gas–ice depth offset during the Laschamp event (~41 ka) and the last deglaciation do not favour the hypothesis of a large convective zone within the firn as the explanation of the glacial firn model–δ15N data mismatch for this site. While we could not conduct an in-depth study of the influence of impurities in snow on firnification from the existing datasets, our detailed comparison between the δ15N profiles and firn model simulations under different temperature and accumulation rate scenarios suggests that the role of accumulation rate may have been underestimated in the current description of firnification models.
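For reference, the purely gravitational relation commonly used to interpret δ15N in terms of the diffusive column height is δ15N ≈ [exp(Δm g z / (R T)) − 1] × 1000 ‰. A small sketch, assuming thermal and convective contributions are negligible and using a hypothetical example value:

```python
import numpy as np

R = 8.314          # gas constant, J mol^-1 K^-1
G = 9.81           # gravitational acceleration, m s^-2
DELTA_M = 1.0e-3   # kg mol^-1, mass difference between 15N14N and 14N2

def delta15n_grav(z, T):
    """Gravitational enrichment (permil) at the bottom of a diffusive
    firn column of height z (m) at mean firn temperature T (K)."""
    return (np.exp(DELTA_M * G * z / (R * T)) - 1.0) * 1000.0

def diffusive_column_height(d15n_permil, T):
    """Invert the relation above, assuming purely gravitational fractionation
    (no thermal or convective contribution)."""
    return R * T / (DELTA_M * G) * np.log(d15n_permil / 1000.0 + 1.0)

# Hypothetical example: 0.45 permil at a mean firn temperature of 228 K
print(round(diffusive_column_height(0.45, 228.0), 1), "m")   # ~87 m
```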
Abstract:
Spectra of K0S mesons and Λ hyperons were measured in p+C interactions at 31 GeV/c with the large acceptance NA61/SHINE spectrometer at the CERN SPS. The data were collected with an isotropic graphite target with a thickness of 4% of a nuclear interaction length. Interaction cross sections, charged pion spectra, and charged kaon spectra were previously measured using the same data set. Results on K0S and Λ production in p+C interactions serve as reference for the understanding of the enhancement of strangeness production in nucleus-nucleus collisions. Moreover, they provide important input for the improvement of neutrino flux predictions for the T2K long baseline neutrino oscillation experiment in Japan. Inclusive production cross sections for K0S and Λ are presented as a function of laboratory momentum in intervals of the laboratory polar angle covering the range from 0 up to 240 mrad. The results are compared with predictions of several hadron production models. The K0S mean multiplicity in production processes
Abstract:
The North Atlantic spring bloom is one of the main events that lead to carbon export to the deep ocean and drive oceanic uptake of CO2 from the atmosphere. Here we use a suite of physical, bio-optical and chemical measurements made during the 2008 spring bloom to optimize and compare three different models of biological carbon export. The observations are from a Lagrangian float that operated south of Iceland from early April to late June, and were calibrated with ship-based measurements. The simplest model is representative of typical NPZD models used for the North Atlantic, while the most complex model explicitly includes diatoms and the formation of fast-sinking diatom aggregates and cysts under silicate limitation. We carried out a variational optimization and error analysis for the biological parameters of all three models, and compared their ability to replicate the observations. The observations were sufficient to constrain most phytoplankton-related model parameters to accuracies of better than 15 %. However, the lack of zooplankton observations leads to large uncertainties in model parameters for grazing. The simulated vertical carbon flux at 100 m depth is similar between models and agrees well with available observations, but at 600 m the simulated flux is larger by a factor of 2.5 to 4.5 for the model with diatom aggregation. While none of the models can be formally rejected based on their misfit with the available observations, the model that includes export by diatom aggregation has a statistically significant better fit to the observations and more accurately represents the mechanisms and timing of carbon export based on observations not included in the optimization. Thus models that accurately simulate the upper 100 m do not necessarily accurately simulate export to greater depths.
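A schematic sketch of the variational optimisation step: minimising a weighted least-squares misfit over biological parameters (here a toy logistic bloom curve stands in for the ecosystem models, and the data are synthetic):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
t = np.linspace(0, 80, 81)                      # days since early April (hypothetical)

def toy_bloom_model(params, t):
    """Stand-in for an NPZD-type model output (chlorophyll-like signal):
    logistic growth with rate mu up to carrying capacity K."""
    mu, K = params
    p0 = 0.1
    return K * p0 * np.exp(mu * t) / (K + p0 * (np.exp(mu * t) - 1.0))

obs = toy_bloom_model([0.15, 4.0], t) + rng.normal(0, 0.2, t.size)
obs_err = 0.2

def cost(params):
    """Weighted least-squares misfit, as used in variational optimisation."""
    resid = (toy_bloom_model(params, t) - obs) / obs_err
    return 0.5 * np.sum(resid ** 2)

result = minimize(cost, x0=[0.1, 2.0], method="Nelder-Mead")
print(result.x)          # recovered growth rate and carrying capacity
```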
Abstract:
The finite element analysis is an accepted method to predict vertebral body compressive strength. This study compares measurements obtained from in vitro tests with those from two different simulation models: clinical quantitative computed tomography (QCT) based homogenized finite element (hFE) models and pre-clinical high-resolution peripheral QCT-based (HR-pQCT) hFE models. Thirty-seven vertebral body sections were prepared by removing end-plates and posterior elements, scanned with QCT (390/450 μm voxel size) as well as HR-pQCT (82 μm voxel size), and tested in compression up to failure. Non-linear viscous damage hFE models were created from the QCT/HR-pQCT images and compared to the experimental results in terms of stiffness and ultimate load. As expected, the predictability of the QCT/HR-pQCT-based hFE models for both apparent stiffness (r2 = 0.685/0.801) and strength (r2 = 0.774/0.924) increased when a better image resolution was used. An analysis of the damage distribution showed similar damage locations for all cases. In conclusion, HR-pQCT-based hFE models increased the predictability considerably and do not need any tuning of input parameters. In contrast, QCT-based hFE models usually need some tuning but are clinically the only possible choice at the moment.
Abstract:
An appreciation of the importance of interactions between microbes and multicellular organisms is currently driving research in biology and biomedicine. Many human diseases involve interactions between the host and the microbiota, so investigating the mechanisms involved is important for human health. Although microbial ecology measurements capture considerable diversity of the communities between individuals, this diversity is highly problematic for reproducible experimental animal models that seek to establish the mechanistic basis for interactions within the overall host-microbial superorganism. Conflicting experimental results may be explained away through unknown differences in the microbiota composition between vivaria or between the microenvironment of different isolated cages. In this position paper, we propose standardised criteria for stabilised and defined experimental animal microbiotas to generate reproducible models of human disease that are suitable for systematic experimentation and are reproducible across different institutions.