45 results for Discrete Markov Random Field Modeling
Abstract:
OBJECTIVE: Hierarchical modeling has been proposed as a solution to the multiple exposure problem. We estimate associations between metabolic syndrome and different components of antiretroviral therapy using both conventional and hierarchical models. STUDY DESIGN AND SETTING: We use discrete time survival analysis to estimate the association between metabolic syndrome and cumulative exposure to 16 antiretrovirals from four drug classes. We fit a hierarchical model where the drug class provides a prior model of the association between metabolic syndrome and exposure to each antiretroviral. RESULTS: One thousand two hundred and eighteen patients were followed for a median of 27 months, with 242 cases of metabolic syndrome (20%) at a rate of 7.5 cases per 100 patient years. Metabolic syndrome was more likely to develop in patients exposed to stavudine, but was less likely to develop in those exposed to atazanavir. The estimate for exposure to atazanavir increased from a hazard ratio of 0.06 per 6 months' use in the conventional model to 0.37 in the hierarchical model (or from 0.57 to 0.81 when using spline-based covariate adjustment). CONCLUSION: These results are consistent with trials that show the disadvantage of stavudine and advantage of atazanavir relative to other drugs in their respective classes. The hierarchical model gave more plausible results than the equivalent conventional model.
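As a hedged sketch (not the authors' exact specification), one common discrete-time formulation with a class-level prior on the drug coefficients looks as follows; the symbols h_it, x_ijt, alpha_t, theta_c(j) and sigma_class are notation introduced here for illustration only:

```latex
\operatorname{logit} h_{it} \;=\; \alpha_t \;+\; \sum_{j=1}^{16} \beta_j\, x_{ijt},
\qquad
\beta_j \;\sim\; \mathcal{N}\!\bigl(\theta_{c(j)},\, \sigma^2_{\text{class}}\bigr),
```

where h_it is the hazard of metabolic syndrome for patient i in interval t, x_ijt the cumulative exposure of patient i to antiretroviral j, and c(j) the drug class of antiretroviral j. Roughly speaking, a conventional model estimates the beta_j without pooling, whereas the hierarchical model shrinks each beta_j towards its class mean theta_c(j).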
Abstract:
Monte Carlo simulation was used to evaluate properties of a simple Bayesian MCMC analysis of the random effects model for single group Cormack-Jolly-Seber capture-recapture data. The MCMC method is applied to the model via a logit link, so the parameters p and S are on a logit scale, where logit(S) is assumed to have, and is generated from, a normal distribution with mean μ and variance σ². Marginal prior distributions on logit(p) and μ were independent normal with mean zero and standard deviation 1.75 for logit(p) and 100 for μ, hence minimally informative. The marginal prior distribution on σ² was placed on τ² = 1/σ² as a gamma distribution with α = β = 0.001. The study design has 432 points spread over 5 factors: occasions (t), new releases per occasion (u), p, μ, and σ. At each design point 100 independent trials were completed (hence 43,200 trials in total), each with sample size n = 10,000 from the parameter posterior distribution. At 128 of these design points comparisons are made to previously reported results from a method-of-moments procedure. We looked at properties of point and interval inference on μ and σ based on the posterior mean, median, and mode and the equal-tailed 95% credibility interval. Bayesian inference did very well for the parameter μ, but under the conditions used here, MCMC inference performance for σ was mixed: poor for sparse data (i.e., only 7 occasions) or σ = 0, but good when there were sufficient data and σ was not small.
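For reference, the prior structure described above can be summarized compactly (the subscript i, indexing individual survival effects, is introduced here only for notation):

```latex
\operatorname{logit}(S_i) \sim \mathcal{N}(\mu, \sigma^2), \qquad
\operatorname{logit}(p) \sim \mathcal{N}(0, 1.75^2), \qquad
\mu \sim \mathcal{N}(0, 100^2), \qquad
\tau^2 = 1/\sigma^2 \sim \operatorname{Gamma}(\alpha = 0.001,\ \beta = 0.001).
```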
Abstract:
The study is based on experimental work conducted in alpine snow. We made microwave radiometric and near-infrared reflectance measurements of snow slabs under different experimental conditions. We used an empirical relation to link the near-infrared reflectance of snow to the specific surface area (SSA), and converted the SSA into the correlation length. From the measurements of snow radiances at 21 and 35 GHz, we derived the microwave scattering coefficient by inverting two coupled radiative transfer models (the sandwich and six-flux models). The correlation lengths found are in the same range as those determined in the literature using cold laboratory work. The technique shows great potential for determining the snow correlation length under field conditions.
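The sandwich and six-flux radiative transfer models themselves are not given in the abstract; the sketch below only illustrates the inversion step, with `forward_tb` standing in as a hypothetical, purely illustrative forward model that maps a scattering coefficient to a brightness temperature at each frequency:

```python
import numpy as np
from scipy.optimize import least_squares

def forward_tb(scatter_coeff, freq_ghz):
    """Hypothetical placeholder for the coupled radiative transfer forward
    model (sandwich / six-flux); returns a brightness temperature in kelvin
    for a given microwave scattering coefficient. Toy relation only."""
    return 270.0 - 40.0 * scatter_coeff * (freq_ghz / 21.0)

measured_tb = {21.0: 255.0, 35.0: 240.0}   # example radiances (K), invented

def residuals(x):
    gamma_s = x[0]
    return [forward_tb(gamma_s, f) - tb for f, tb in measured_tb.items()]

fit = least_squares(residuals, x0=[0.1], bounds=(0.0, np.inf))
print("retrieved scattering coefficient:", fit.x[0])
```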
Abstract:
Covert brain activity related to task-free, spontaneous (i.e. unrequested), emotional evaluation of human face images was analysed in 27-channel averaged event-related potential (ERP) map series recorded from 18 healthy subjects while observing random sequences of face images without further instructions. After recording, subjects self-rated each face image on a scale from “liked” to “disliked”. These ratings were used to dichotomize the face images into the affective evaluation categories of “liked” and “disliked” for each subject and the subjects into the affective attitudes of “philanthropists” and “misanthropists” (depending on their mean rating across images). Event-related map series were averaged for “liked” and “disliked” face images and for “philanthropists” and “misanthropists”. The spatial configuration (landscape) of the electric field maps was assessed numerically by the electric gravity center, a conservative estimate of the mean location of all intracerebral, active, electric sources. Differences in electric gravity center location indicate activity of different neuronal populations. The electric gravity center locations of all event-related maps were averaged over the entire stimulus-on time (450 ms). The mean electric gravity center for disliked faces was located (significant across subjects) more to the right and somewhat more posterior than for liked faces. Similar differences were found between the mean electric gravity centers of misanthropists (more right and posterior) and philanthropists. Our neurophysiological findings are in line with neuropsychological findings, revealing visual emotional processing to depend on affective evaluation category and affective attitude, and extending the conclusions to a paradigm without directed task.
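The exact computation of the electric gravity center is not spelled out in the abstract; the sketch below shows one simplified, hypothetical reading (a potential-weighted centroid of the electrode positions), intended only to make the idea of a "mean location of active sources" concrete and not to reproduce the authors' formula:

```python
import numpy as np

def gravity_center(potentials, electrode_xyz):
    """Potential-weighted centroid of electrode positions.

    potentials    : (n_channels,) map values at one latency
    electrode_xyz : (n_channels, 3) electrode coordinates
    Simplified stand-in for the electric gravity center, not the exact method.
    """
    v = np.abs(potentials - potentials.mean())   # average-reference, magnitudes
    w = v / v.sum()                              # normalized weights
    return w @ electrode_xyz                     # (3,) weighted mean location

# Example: a 27-channel map averaged over the 450 ms stimulus-on period
rng = np.random.default_rng(0)
erp_map = rng.normal(size=27)            # placeholder ERP map values
coords = rng.normal(size=(27, 3))        # placeholder electrode coordinates
print(gravity_center(erp_map, coords))
```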
Abstract:
Vibrations, Posture, and the Stabilization of Gaze: An Experimental Study on Impedance Control. R. KREDEL, A. GRIMM & E.-J. HOSSNER, University of Bern, Switzerland. Introduction: Franklin and Wolpert (2011) identify impedance control, i.e., the competence to resist changes in position, velocity or acceleration caused by environmental disturbances, as one of five computational mechanisms which allow for skilled and fluent sensorimotor behavior. Accordingly, impedance control is of particular interest in situations in which the motor task exhibits unpredictable components, as is the case in downhill biking or downhill skiing. In an experimental study, the question is asked whether impedance control, beyond its benefits for motor control, also helps to stabilize gaze, which, in turn, may be essential for keeping other control mechanisms (e.g., the internal modeling of future states) in an optimal range. Method: In a 3x2x4 within-subject ANOVA design, 72 participants completed three tests of visual acuity and contrast (Landolt / Grating and Vernier) in two different postures (standing vs. squat) on a platform vibrating at four different frequencies (ZEPTOR; 0 Hz, 4 Hz, 8 Hz, 12 Hz; no random noise; constant amplitude) in counterbalanced order with 1-minute breaks in between. In addition, perceived exertion (Borg) was rated by participants after each condition. Results: For Landolt and Grating, significant main effects for posture and frequency are revealed, representing lower acuity/contrast thresholds for standing and for higher frequencies in general, as well as a significant interaction (p < .05), representing increasing posture differences with increasing frequencies. Overall, performance could be maintained at the 0 Hz/standing level up to a frequency of 8 Hz if bending of the knees was allowed. That this result is not merely due to exertion is supported by the Borg ratings, which show significant main effects only, i.e., higher exertion scores for standing and for higher frequencies, but no significant interaction (p > .40). The same pattern, although not significant, is revealed for the Vernier test. Discussion: Apparently, postures improving impedance control not only help to resist disturbances but also assist in stabilizing gaze in spite of these perturbations. Consequently, studying the interaction of these control mechanisms in complex unpredictable environments seems to be a fruitful field of research for the future. References: Franklin, D. W., & Wolpert, D. M. (2011). Computational mechanisms of sensorimotor control. Neuron, 72, 425-442.
Abstract:
Groundwater age is a key aspect of production well vulnerability. Public drinking water supply wells typically have long screens and are expected to produce a mixture of groundwater ages. The groundwater age distributions of seven production wells of the Holten well field (Netherlands) were estimated from tritium-helium (3H/3He), krypton-85 (85Kr), and argon-39 (39Ar), using a new application of a discrete age distribution model and existing mathematical models, by minimizing the uncertainty-weighted squared differences of modeled and measured tracer concentrations. The observed tracer concentrations fitted well to a 4-bin discrete age distribution model or a dispersion model with a fraction of old groundwater. Our results show that more than 75% of the water pumped by four shallow production wells has a groundwater age of less than 20 years and these wells are very vulnerable to recent surface contamination. More than 50% of the water pumped by three deep production wells is older than 60 years. 3H/3He samples from short screened monitoring wells surrounding the well field constrained the age stratification in the aquifer. The discrepancy between the age stratification with depth and the groundwater age distribution of the production wells showed that the well field preferentially pumps from the shallow part of the aquifer. The discrete groundwater age distribution model appears to be a suitable approach in settings where the shape of the age distribution cannot be assumed to follow a simple mathematical model, such as a production well field where wells compete for capture area.
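A minimal sketch of the fitting step, assuming hypothetical tracer response curves and invented measurement values (the real study uses measured 3H/3He, 85Kr, and 39Ar concentrations and their atmospheric input functions):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical mean ages (years) of the 4 age bins.
bin_ages = np.array([5.0, 20.0, 45.0, 90.0])

def tracer_response(ages):
    """Toy tracer concentrations expected for water of a given age.
    Rows: tracers, columns: age bins. Placeholder decay-style curves."""
    return np.vstack([
        np.exp(-ages / 17.8),   # stand-in for a 3H-like tracer
        np.exp(-ages / 15.5),   # stand-in for 85Kr
        np.exp(-ages / 388.0),  # stand-in for 39Ar
    ])

G = tracer_response(bin_ages)                 # (n_tracers, n_bins)
measured = np.array([0.55, 0.50, 0.93])       # example measured ratios (invented)
sigma = np.array([0.05, 0.05, 0.05])          # measurement uncertainties (invented)

def objective(f):
    resid = (G @ f - measured) / sigma
    return np.sum(resid ** 2)                 # uncertainty-weighted squared misfit

cons = ({"type": "eq", "fun": lambda f: f.sum() - 1.0},)   # fractions sum to 1
bounds = [(0.0, 1.0)] * len(bin_ages)                      # non-negative fractions
res = minimize(objective, x0=np.full(4, 0.25), bounds=bounds,
               constraints=cons, method="SLSQP")
print("fitted age-bin fractions:", res.x)
```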
Abstract:
In the course of this study, the stiffness of an array of mineralized collagen fibrils modeled with a mean field method was validated experimentally at two site-matched levels of tissue hierarchy using mineralized turkey leg tendons (MTLT). The applied modeling approaches made it possible to model the properties of this unidirectional tissue from the nanoscale (mineralized collagen fibrils) to the macroscale (mineralized tendon). At the microlevel, the indentation moduli obtained with a mean field homogenization scheme were compared to the experimental ones obtained with microindentation. At the macrolevel, the macroscopic stiffness predicted with micro finite element (μFE) models was compared to the experimental stiffness measured with uniaxial tensile tests. Elastic properties of the elements in the μFE models were assigned from the mean field model or from two-directional microindentations. Quantitatively, the indentation moduli can be properly predicted with the mean-field models. Local stiffness trends within specific tissue morphologies are very weak, suggesting that additional factors are responsible for the stiffness variations. At the macrolevel, the μFE models underestimate the macroscopic stiffness compared to the tensile tests, but the correlations are strong.
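The mean-field homogenization scheme is not reproduced here; as a much coarser stand-in, elementary Voigt and Reuss bounds for a two-phase mineral-collagen mixture indicate the kind of stiffness bracketing such models provide (the moduli and volume fraction below are assumed values, not taken from the paper):

```python
# Simple Voigt (iso-strain) and Reuss (iso-stress) bounds for a two-phase
# composite; a crude stand-in for the mean-field homogenization scheme
# used in the study, shown for orientation only.
def voigt_reuss(e_mineral, e_collagen, vol_frac_mineral):
    f = vol_frac_mineral
    e_voigt = f * e_mineral + (1.0 - f) * e_collagen          # upper bound
    e_reuss = 1.0 / (f / e_mineral + (1.0 - f) / e_collagen)  # lower bound
    return e_voigt, e_reuss

# Illustrative moduli in GPa and mineral volume fraction (assumed values)
print(voigt_reuss(e_mineral=100.0, e_collagen=1.5, vol_frac_mineral=0.4))
```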
Abstract:
This study aims at assessing the skill of several climate field reconstruction (CFR) techniques in reconstructing past precipitation over continental Europe and the Mediterranean at seasonal time scales over the last two millennia from proxy records. A number of pseudoproxy experiments are performed within the virtual reality of a regional paleoclimate simulation at 45 km resolution to analyse different aspects of reconstruction skill. Canonical Correlation Analysis (CCA), two versions of an Analog Method (AM) and Bayesian hierarchical modeling (BHM) are applied to reconstruct precipitation from a synthetic network of pseudoproxies that are contaminated with various types of noise. The skill of the derived reconstructions is assessed through comparison with the precipitation simulated by the regional climate model. Unlike BHM, CCA systematically underestimates the variance. The AM can be adjusted to overcome this shortcoming, presenting an intermediate behaviour between the two aforementioned techniques. However, a trade-off between reconstruction-target correlation and reconstructed variance is the drawback of all CFR techniques. CCA (BHM) presents the largest (lowest) skill in preserving the temporal evolution, whereas the AM can be tuned to reproduce better correlation at the expense of losing variance. While BHM has been shown to perform well for temperature, it relies heavily on prescribed spatial correlation lengths; this assumption is valid for temperature but hardly warranted for precipitation. In general, none of the methods outperforms the others. All experiments agree that a dense and regularly distributed proxy network is required to reconstruct precipitation accurately, reflecting its high spatial and temporal variability. This is especially true in summer, when localised convective precipitation events cause a particularly short de-correlation distance from the proxy location.
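A minimal pseudoproxy-style sketch of the CCA step, using synthetic data in place of the regional-model precipitation and proxy network (the truncation choice, noise level, and calibration/verification split are assumptions of the sketch, not the paper's settings):

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)

# X are noisy "proxy" series at a few sites, Y is the precipitation field.
n_years, n_proxies, n_gridcells = 500, 30, 400
Y = rng.normal(size=(n_years, n_gridcells))               # model precipitation
H = rng.normal(size=(n_gridcells, n_proxies)) / np.sqrt(n_gridcells)
X = Y @ H + 0.5 * rng.normal(size=(n_years, n_proxies))   # proxies + white noise

calib = slice(0, 300)            # calibration period
verif = slice(300, 500)          # verification period

cca = CCA(n_components=5)        # truncation is a tuning choice
cca.fit(X[calib], Y[calib])
Y_rec = cca.predict(X[verif])    # reconstructed precipitation field

# Variance ratio illustrates the variance loss discussed above
print("reconstructed/true variance:", Y_rec.var() / Y[verif].var())
```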
Abstract:
The time variable Earth’s gravity field contains information about mass transport within the Earth system, i.e., the relationship between mass variations in the atmosphere, oceans, land hydrology, and ice sheets. For many years, satellite laser ranging (SLR) observations to geodetic satellites have provided valuable information on the low-degree coefficients of the Earth’s gravity field. Today, the Gravity Recovery and Climate Experiment (GRACE) mission is the major source of information on the time variable field at high spatial resolution. We recover the low-degree coefficients of the time variable Earth’s gravity field using SLR observations to nine geodetic satellites: LAGEOS-1, LAGEOS-2, Starlette, Stella, AJISAI, LARES, Larets, BLITS, and Beacon-C. We estimate monthly gravity field coefficients up to degree and order 10/10 for the time span 2003–2013 and compare the results with the GRACE-derived gravity field coefficients. We show that not only degree-2 gravity field coefficients can be well determined from SLR, but also other coefficients up to degree 10, using the combination of short 1-day arcs for low orbiting satellites and 10-day arcs for LAGEOS-1/2. In this way, LAGEOS-1/2 allow recovering the zonal terms, which are associated with long-term satellite orbit perturbations, whereas the tesseral and sectorial terms benefit most from low orbiting satellites, whose orbit modeling deficiencies are minimized by the short 1-day arcs. The amplitudes of the annual signal in the low-degree gravity field coefficients derived from SLR agree with GRACE K-band results at a level of 77%. This implies that SLR has great potential to fill the gap between the current GRACE and the future GRACE Follow-On mission for recovering the seasonal variations and secular trends of the longest-wavelength components of the gravity field, which are associated with large-scale mass transport in the Earth system.
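As a hedged illustration of how an annual amplitude can be read off a monthly coefficient series, the sketch below fits a bias, trend, and annual sine/cosine to synthetic data; it is not the authors' SLR processing chain, and the magnitudes are invented:

```python
import numpy as np

# Least-squares fit of bias, trend and an annual term to a monthly
# gravity-field coefficient time series (synthetic data, 2003-2013).
t = np.arange(2003.0, 2013.0, 1.0 / 12.0)                  # monthly epochs
rng = np.random.default_rng(2)
coeff = 1e-10 * np.sin(2 * np.pi * t) + 5e-11 * rng.normal(size=t.size)

A = np.column_stack([np.ones_like(t), t - t.mean(),
                     np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])
x, *_ = np.linalg.lstsq(A, coeff, rcond=None)
annual_amp = np.hypot(x[2], x[3])                          # annual amplitude
print("annual amplitude:", annual_amp, "trend per year:", x[1])
```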
Abstract:
Oscillations between high and low values of the membrane potential (UP and DOWN states, respectively) are a ubiquitous feature of cortical neurons during slow wave sleep and anesthesia. Nevertheless, only a surprisingly small number of quantitative studies have dealt with this phenomenon’s implications for computation. Here we present a novel theory that explains, on a detailed mathematical level, the computational benefits of UP states. The theory is based on random sampling by means of interspike intervals (ISIs) of the exponential integrate and fire (EIF) model neuron, such that each spike is considered a sample whose analog value corresponds to the spike’s preceding ISI. As we show, the EIF’s exponential sodium current, which kicks in when a noisy membrane potential is balanced around values close to the firing threshold, leads to a particularly simple, approximate relationship between the neuron’s ISI distribution and its input current. The approximation quality depends on the frequency spectrum of the current and improves as the voltage baseline is increased towards threshold. Thus, the conceptually simpler leaky integrate and fire neuron, which lacks such an additional current boost, performs consistently worse than the EIF and does not improve when the voltage baseline is increased. For the EIF, in contrast, the presented mechanism is particularly effective in the high-conductance regime, which is a hallmark feature of UP states. Our theoretical results are confirmed by accompanying simulations, which were conducted for input currents of varying spectral composition. Moreover, we provide analytical estimates of the range of ISI distributions the EIF neuron can sample from at a given approximation level. Such samples may be considered by any algorithmic procedure that is based on random sampling, such as Markov Chain Monte Carlo or message-passing methods. Finally, we explain how spike-based random sampling relates to existing computational theories about UP states during slow wave sleep and present possible extensions of the model in the context of spike-frequency adaptation.
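A minimal simulation sketch of an EIF neuron with a noisy input, collecting ISIs as the samples discussed above; the parameter values and the simple white-noise input model are assumptions of the sketch, not taken from the paper:

```python
import numpy as np

# Euler-Maruyama integration of an exponential integrate-and-fire (EIF)
# neuron; every interspike interval (ISI) is stored as one "sample".
rng = np.random.default_rng(3)

dt, t_max = 0.05e-3, 20.0                 # time step and duration (s)
C, gL, EL = 200e-12, 10e-9, -65e-3        # capacitance, leak, rest (F, S, V)
VT, DT = -50e-3, 2e-3                     # soft threshold, slope factor (V)
V_peak, V_reset = 0.0, -60e-3             # spike cut-off and reset (V)
I_mean = 0.12e-9                          # mean input current (A)
sigma_V = 0.05                            # voltage noise intensity (V / sqrt(s))

V, last_spike, isis = EL, None, []
for step in range(int(t_max / dt)):
    drift = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) + I_mean) / C
    V += dt * drift + sigma_V * np.sqrt(dt) * rng.normal()
    if V >= V_peak:                       # spike: record ISI, reset
        t_now = step * dt
        if last_spike is not None:
            isis.append(t_now - last_spike)
        last_spike, V = t_now, V_reset

isis = np.array(isis)
print(f"{isis.size} ISIs, mean ISI = {1e3 * isis.mean():.1f} ms")
```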
Abstract:
Sound knowledge of the spatial and temporal patterns of rockfalls is fundamental for the management of this very common hazard in mountain environments. Process-based, three-dimensional simulation models are nowadays capable of reproducing the spatial distribution of rockfall occurrences with reasonable accuracy through the simulation of numerous individual trajectories on highly-resolved digital terrain models. At the same time, however, simulation models typically fail to quantify the ‘real’ frequency of rockfalls (in terms of return intervals). The analysis of impact scars on trees, in contrast, yields real rockfall frequencies, but trees may not be present at the location of interest and rare trajectories may not necessarily be captured due to the limited age of forest stands. In this article, we demonstrate that coupling modeling with tree-ring techniques may overcome the limitations inherent to both approaches. Based on the analysis of 64 cells (40 m × 40 m) of a rockfall slope located above a 1631-m long road section in the Swiss Alps, we present results from 488 rockfalls detected in 1260 trees. We show that tree impact data can not only be used (i) to reconstruct the real frequency of rockfalls for individual cells, but also serve (ii) to calibrate the rockfall model Rockyfor3D and (iii) to transform simulated trajectories into real frequencies. Calibrated simulation results are in good agreement with real rockfall frequencies and exhibit significant differences in rockfall activity between the cells (zones) along the road section. Real frequencies, expressed as rock passages per meter of road section, also enable quantification and direct comparison of the hazard potential between the zones. The contribution provides an approach for hazard zoning procedures that complements traditional methods with a quantification of rockfall frequencies in terms of return intervals through a systematic inclusion of impact records in trees.
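The calibration formula is not given in the abstract; one plausible, hypothetical way to turn simulated passage counts into real frequencies is to scale them with a factor estimated from the cells where a tree-ring-based frequency exists (all values below are invented for the sketch):

```python
import numpy as np

# Hypothetical per-cell data: rockfall frequencies reconstructed from tree
# impacts (events per year) and relative passage counts from a trajectory
# model such as Rockyfor3D; nan marks cells without trees.
tree_ring_freq = np.array([0.8, 1.5, np.nan, 0.4, np.nan, 2.1])
simulated_passages = np.array([120.0, 260.0, 40.0, 70.0, 15.0, 300.0])

# Calibration factor: observed frequency per simulated passage,
# estimated only from cells with tree-ring data.
has_trees = ~np.isnan(tree_ring_freq)
scale = tree_ring_freq[has_trees].sum() / simulated_passages[has_trees].sum()

# Transform simulated counts into calibrated frequencies everywhere,
# including cells without trees.
calibrated_freq = scale * simulated_passages
print("events per year per cell:", np.round(calibrated_freq, 2))
```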
Abstract:
Since no single experimental or modeling technique provides data that allow a description of transport processes in clays and clay minerals at all relevant scales, several complementary approaches have to be combined to understand and explain the interplay between transport relevant phenomena. In this paper molecular dynamics simulations (MD) were used to investigate the mobility of water in the interlayer of montmorillonite (Mt), and to estimate the influence of mineral surfaces and interlayer ions on the water diffusion. Random Walk (RW) simulations based on a simplified representation of pore space in Mt were used to estimate and understand the effect of the arrangement of Mt particles on the meso- to macroscopic diffusivity of water. These theoretical calculations were complemented with quasielastic neutron scattering (QENS) measurements of aqueous diffusion in Mt with two pseudo-layers of water performed at four significantly different energy resolutions (i.e. observation times). The size of the interlayer and the size of Mt particles are two characteristic dimensions which determine the time dependent behavior of water diffusion in Mt. MD simulations show that at very short time scales water dynamics has the characteristic features of an oscillatory motion in the cage formed by neighbors in the first coordination shell. At longer time scales, the interaction of water with the surface determines the water dynamics, and the effect of confinement on the overall water mobility within the interlayer becomes evident. At time scales corresponding to an average water displacement equivalent to the average size of Mt particles, the effects of tortuosity are observed in the meso- to macroscopic pore scale simulations. Consistent with the picture obtained in the simulations, the QENS data can be described using a (local) 3D diffusion at short observation times, whereas at sufficiently long observation times a 2D diffusive motion is clearly observed. The effects of tortuosity measured in macroscopic tracer diffusion experiments are in qualitative agreement with RW simulations. By using experimental data to calibrate molecular and mesoscopic theoretical models, a consistent description of water mobility in clay minerals from the molecular to the macroscopic scale can be achieved. In turn, simulations help in choosing optimal conditions for the experimental measurements and the data interpretation.
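A toy random-walk sketch in the spirit of the RW simulations described above: walkers on a 2D lattice containing blocked "particle" sites, with the effective diffusivity read from the mean squared displacement (the geometry and obstacle fraction are invented for illustration and do not represent the study's pore-space model):

```python
import numpy as np

rng = np.random.default_rng(4)

L, n_walkers, n_steps = 200, 2000, 4000
blocked = rng.random((L, L)) < 0.25            # 25 % of sites are obstacles
steps = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])

pos = rng.integers(0, L, size=(n_walkers, 2))
pos = pos[~blocked[pos[:, 0], pos[:, 1]]]      # keep walkers starting on open sites
disp = np.zeros(pos.shape)                     # unwrapped displacement for the MSD

for _ in range(n_steps):
    move = steps[rng.integers(0, 4, size=len(pos))]
    trial = (pos + move) % L                   # periodic box for obstacle lookup
    ok = ~blocked[trial[:, 0], trial[:, 1]]    # reject moves into obstacles
    pos[ok] = trial[ok]
    disp[ok] += move[ok]

msd = np.mean(np.sum(disp ** 2, axis=1))
d_eff = msd / (4.0 * n_steps)                  # lattice units^2 per step
d_free = 0.25                                  # free 2D lattice walk: MSD = n_steps
print(f"D_eff / D_free = {d_eff / d_free:.2f}  (inverse tortuosity factor)")
```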
Abstract:
We explore a generalisation of the Lévy fractional Brownian field on the Euclidean space based on replacing the Euclidean norm with another norm. A characterisation result for admissible norms yields a complete description of all self-similar Gaussian random fields with stationary increments. Several integral representations of the introduced random fields are derived. In a similar vein, several non-Euclidean variants of the fractional Poisson field are introduced and it is shown that they share the covariance structure with the fractional Brownian field and converge to it. The shape parameters of the Poisson and Brownian variants are related by convex geometry transforms, namely the radial pth mean body and the polar projection transforms.
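For orientation, the classical Lévy fractional Brownian field with Hurst index H in (0, 1) has the covariance shown below; the generalisation discussed above replaces the Euclidean norm with another admissible norm:

```latex
\mathbb{E}\bigl[X(x)\,X(y)\bigr] \;=\; \tfrac{1}{2}\Bigl(\lVert x\rVert^{2H} + \lVert y\rVert^{2H} - \lVert x - y\rVert^{2H}\Bigr), \qquad x, y \in \mathbb{R}^d .
```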
Abstract:
Stable isotope ratios of nitrate preserved in deep ice cores are expected to provide unique and valuable information regarding paleoatmospheric processes. However, due to the post-depositional loss of nitrate in snow, this information may be erased or significantly modified by physical or photochemical processes before preservation in ice. We investigated the role of solar UV photolysis in the post-depositional modification of nitrate mass and stable isotope ratios at Dome C, Antarctica, during the austral summer of 2011/2012. Two 30 cm snow pits were filled with homogenized drifted snow from the vicinity of the base. One of these pits was covered with a plexiglass plate that transmits solar UV radiation, while the other was covered with a different plexiglass plate having a low UV transmittance. Samples were then collected from each pit at a 2–5 cm depth resolution and a 10-day frequency. At the end of the season, a comparable nitrate mass loss was observed in both pits for the top-level samples (0–7 cm), attributed to mixing with the surrounding snow. After excluding samples impacted by the mixing process, we derived an average apparent nitrogen isotopic fractionation (15ε_app) for each pit, reflecting the role of solar UV in driving the isotopic fractionation of nitrate in snow. We have estimated a purely photolytic nitrogen isotopic fractionation (15ε_photo) of -55.8 ± 12.0 ‰ from the difference in the derived apparent isotopic fractionations of the two experimental fields, as both pits were exposed to similar physical processes except exposure to solar UV. This value is in close agreement with the 15ε_photo value of -47.9 ± 6.8 ‰ derived in a laboratory experiment simulating Dome C conditions (Berhanu et al., 2014). We have also observed an insensitivity of 15ε with depth in the snowpack under the given experimental setup. This is due to the uniform attenuation of incoming solar UV by snow, as 15ε is strongly dependent on the spectral distribution of the incoming light flux. Together with earlier work, the results presented here represent a strong body of evidence that solar UV photolysis is the most relevant post-depositional process modifying the stable isotope ratios of snow nitrate at low-accumulation sites, where many deep ice cores are drilled. Nevertheless, modeling the loss of nitrate in snow is still required before a robust interpretation of ice core records can be provided.
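Apparent fractionation factors of this kind are commonly derived from a Rayleigh-type relation between the isotopic composition of the remaining nitrate and its remaining mass fraction f; the standard approximate form is shown below for orientation and is not necessarily the authors' exact fitting procedure:

```latex
\delta^{15}\mathrm{N} \;\approx\; \delta^{15}\mathrm{N}_0 \;+\; {}^{15}\varepsilon \,\ln f .
```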
Abstract:
The efficiency of sputtering of refractory elements by H+ and He++ solar wind ions from Mercury's surface and their contribution to the exosphere are studied for various solar wind conditions. A 3D solar wind-planetary interaction hybrid model is used for the evaluation of precipitation maps of the sputter agents on Mercury's surface. By assuming a global mineralogical surface composition, the related sputter yields are calculated by means of the 2013 SRIM code and are coupled with a 3D exosphere model. Because of Mercury's magnetic field, for quiet and nominal solar wind conditions the plasma can only precipitate around the polar areas, while for extreme solar events (fast solar wind, coronal mass ejections, interplanetary magnetic clouds) the solar wind plasma has access to the entire dayside. In that case the release of particles from the planet's surface can result in an exosphere density increase of more than one order of magnitude. The corresponding escape rates are also about an order of magnitude higher. Moreover, the amount of He++ ions in the precipitating solar plasma flow also enhances the release of sputtered elements from the surface into the exosphere. A comparison of our model results with MESSENGER observations of sputtered Mg and Ca elements in the exosphere shows reasonable quantitative agreement.