42 results for random coefficient models


Relevance: 30.00%

Abstract:

We describe the steady-state function of the ubiquitous mammalian Na/H exchanger (NHE)1 isoform in voltage-clamped Chinese hamster ovary cells, as well as other cells, using oscillating pH-sensitive microelectrodes to quantify proton fluxes via extracellular pH gradients. Giant excised patches could not be used, as gigaseal formation disrupts NHE activity within the patch. We first analyzed forward transport at an extracellular pH of 8.2 with no cytoplasmic Na (i.e., nearly zero-trans). The extracellular Na concentration dependence is sigmoidal at a cytoplasmic pH of 6.8, with a Hill coefficient of 1.8. In contrast, at a cytoplasmic pH of 6.0, the Hill coefficient is <1, and Na dependence often appears biphasic. Results are similar for mouse skin fibroblasts and for an opossum kidney cell line that expresses the NHE3 isoform, whereas NHE1(-/-) skin fibroblasts generate no proton fluxes in equivalent experiments. As proton flux is decreased by increasing cytoplasmic pH, the half-maximal concentration (K(1/2)) of extracellular Na decreases less than expected for simple consecutive ion exchange models. The K(1/2) for cytoplasmic protons decreases with increasing extracellular Na, opposite to the predictions of consecutive exchange models. For reverse transport, which is robust at a cytoplasmic pH of 7.6, the K(1/2) for extracellular protons decreases by only a factor of 0.4 when maximal activity is decreased fivefold by reducing cytoplasmic Na. With 140 mM extracellular Na and no cytoplasmic Na, the K(1/2) for cytoplasmic protons is 50 nM (pH 7.3; Hill coefficient, 1.5), and activity decreases only 25% with extracellular acidification from 8.5 to 7.2. Most data can be reconstructed with two very different coupled dimer models. In one model, monomers operate independently at low cytoplasmic pH but couple to translocate two ions in "parallel" at alkaline pH. In the second, "serial" model, each monomer transports two ions, and translocation by one monomer allosterically promotes translocation by the paired monomer in the opposite direction. We conclude that a large fraction of mammalian Na/H activity may occur with a 2Na/2H stoichiometry.
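The sigmoidal Na dependences above are summarized by Hill coefficients. As a minimal illustration of how such a coefficient is recovered from flux-versus-concentration data (the flux values, K(1/2), and grid ranges below are invented for the sketch, not the authors' data or fitting procedure):

```python
import numpy as np

def hill(na, jmax, k_half, n):
    """Hill equation: transport flux as a function of extracellular Na."""
    return jmax * na**n / (k_half**n + na**n)

# Synthetic "measured" fluxes generated with a known Hill coefficient of 1.8
na = np.linspace(1.0, 140.0, 50)           # extracellular Na, mM
flux = hill(na, jmax=1.0, k_half=40.0, n=1.8)

# Coarse grid search for (k_half, n) minimizing the squared error
k_fit, n_fit = min(
    ((k, n) for k in np.arange(10.0, 80.0, 0.5)
            for n in np.arange(0.5, 3.0, 0.05)),
    key=lambda p: np.sum((hill(na, 1.0, p[0], p[1]) - flux) ** 2),
)
```

A Hill coefficient near 2 recovered this way is what motivates the coupled-dimer interpretation in the abstract.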

Relevance: 30.00%

Abstract:

BACKGROUND Natural IgM containing anti-Gal antibodies initiates classic pathway complement activation in xenotransplantation. However, in ischemia-reperfusion injury, IgM also induces lectin pathway activation. The present study therefore focused on the lectin pathway as well as the interaction of IgM and mannose-binding lectin (MBL) in pig-to-human xenotransplantation models. METHODS Activation of the different complement pathways was assessed by cell enzyme-linked immunosorbent assay using human serum on wild-type (WT) and α-galactosyl transferase knockout (GalTKO)/hCD46-transgenic porcine aortic endothelial cells (PAEC). Colocalization of MBL/MASP2 with IgM, C3b/c, C4b/c, and C6 was investigated by immunofluorescence in vitro on PAEC and ex vivo in pig leg xenoperfusion with human blood. The influence of IgM on MBL binding to PAEC was tested using IgM-depleted/repleted and anti-Gal immunoabsorbed serum. RESULTS Activation of all three complement pathways was observed in vitro, as indicated by IgM, C1q, MBL, and factor Bb deposition on WT PAEC. MBL deposition colocalized with MASP2 (Manders' coefficient [3D] r=0.93), C3b/c (r=0.84), C4b/c (r=0.86), and C6 (r=0.80). IgM colocalized with MBL (r=0.87) and MASP2 (r=0.83). Human IgM led to dose-dependently increased deposition of MBL, C3b/c, and C6 on WT PAEC. Colocalization of MBL with IgM (Pearson's coefficient [2D] rp=0.88), C3b/c (rp=0.82), C4b/c (rp=0.63), and C6 (rp=0.81) was also seen in ex vivo xenoperfusion. Significantly reduced MBL deposition and complement activation were observed on GalTKO/hCD46-PAEC. CONCLUSION Colocalization of MBL/MASP2 with IgM and complement suggests that the lectin pathway is activated by human anti-Gal IgM and may play a pathophysiologic role in pig-to-human xenotransplantation.
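Colocalization above is quantified with Manders' and Pearson's coefficients. A minimal sketch of the Pearson variant on synthetic two-channel data (the channel names and the intensity model are placeholders for illustration, not the study's images):

```python
import numpy as np

def pearson_colocalization(ch1, ch2):
    """Pearson's correlation between two fluorescence channels, pixel-wise."""
    return np.corrcoef(np.ravel(ch1), np.ravel(ch2))[0, 1]

rng = np.random.default_rng(0)
mbl = rng.gamma(2.0, 1.0, size=(64, 64))             # hypothetical "MBL" channel
igm = 0.9 * mbl + 0.1 * rng.normal(size=(64, 64))    # "IgM" channel, largely colocalized

rp = pearson_colocalization(mbl, igm)
```

Values of rp near the reported 0.8-0.9 range indicate strong pixel-wise co-occurrence of the two signals.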

Relevance: 30.00%

Abstract:

Several methods based on Kriging have recently been proposed for calculating a probability of failure involving costly-to-evaluate functions. A closely related problem is to estimate the set of inputs leading to a response exceeding a given threshold. Estimating such a level set, and not solely its volume, and quantifying the uncertainty of that estimate are not straightforward. Here we use notions from random set theory to obtain an estimate of the level set, together with a quantification of estimation uncertainty. We give explicit formulae in the Gaussian process set-up and provide a consistency result. We then illustrate how space-filling versus adaptive design strategies may sequentially reduce level set estimation uncertainty.
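A common plug-in version of this idea can be sketched as follows (under an assumed Gaussian-process posterior, not the paper's estimator): compute each grid point's posterior probability of exceeding the threshold, then classify points whose probability is at least 1/2 as belonging to the level set:

```python
import numpy as np
from math import erf, sqrt

def excursion_prob(mean, sd, threshold):
    """P(f(x) > threshold) under a Gaussian posterior at each grid point."""
    z = (mean - threshold) / sd
    return 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2.0)))

x = np.linspace(0.0, 1.0, 101)
post_mean = np.sin(2 * np.pi * x)     # assumed GP posterior mean (illustrative)
post_sd = np.full_like(x, 0.1)        # assumed posterior standard deviation
T = 0.5                               # excursion threshold

p = excursion_prob(post_mean, post_sd, T)
level_set_estimate = x[p >= 0.5]      # plug-in estimate of {x : f(x) > T}
```

The probabilities p themselves carry the uncertainty quantification: points with p near 0 or 1 are confidently classified, while p near 1/2 flags the boundary region where adaptive designs would add evaluations.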

Relevance: 30.00%

Abstract:

This paper introduces and analyzes a stochastic search method for parameter estimation in linear regression models in the spirit of Beran and Millar [Ann. Statist. 15(3) (1987) 1131–1154]. The idea is to generate a random finite subset of a parameter space that automatically contains points very close to the unknown true parameter. The motivation for this procedure comes from recent work of Dümbgen et al. [Ann. Statist. 39(2) (2011) 702–730] on regression models with log-concave error distributions.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Stochastic models for three-dimensional particles have many applications in applied sciences. Lévy–based particle models are a flexible approach to particle modelling. The structure of the random particles is given by a kernel smoothing of a Lévy basis. The models are easy to simulate but statistical inference procedures have not yet received much attention in the literature. The kernel is not always identifiable and we suggest one approach to remedy this problem. We propose a method to draw inference about the kernel from data often used in local stereology and study the performance of our approach in a simulation study.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Context. According to the sequential accretion model (or core-nucleated accretion model), giant planet formation is based first on the formation of a solid core which, when massive enough, can gravitationally bind gas from the nebula to form the envelope. The most critical part of the model is the formation time of the core: to trigger the accretion of gas, the core has to grow up to several Earth masses before the gas component of the protoplanetary disc dissipates. Aims: We calculate planetary formation models including a detailed description of the dynamics of the planetesimal disc, taking into account both gas drag and excitation of forming planets. Methods: We computed the formation of planets, considering the oligarchic regime for the growth of the solid core. Embryos growing in the disc stir their neighbour planetesimals, exciting their relative velocities, which makes accretion more difficult. Here we introduce a more realistic treatment for the evolution of planetesimals' relative velocities, which directly impact on the formation timescale. For this, we computed the excitation state of planetesimals, as a result of stirring by forming planets, and gas-solid interactions. Results: We find that the formation of giant planets is favoured by the accretion of small planetesimals, as their random velocities are more easily damped by the gas drag of the nebula. Moreover, the capture radius of a protoplanet with a (tiny) envelope is also larger for small planetesimals. However, planets migrate as a result of disc-planet angular momentum exchange, with important consequences for their survival: due to the slow growth of a protoplanet in the oligarchic regime, rapid inward type I migration has important implications on intermediate-mass planets that have not yet started their runaway accretion phase of gas. Most of these planets are lost in the central star. Surviving planets have masses either below 10 M⊕ or above several Jupiter masses. 
Conclusions: To form giant planets before the dissipation of the disc, small planetesimals (~0.1 km) have to be the major contributors of the solid accretion process. However, the combination of oligarchic growth and fast inward migration leads to the absence of intermediate-mass planets. Other processes must therefore be at work to explain the population of extrasolar planets that are presently known.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

In recent years, the econometrics literature has shown a growing interest in the study of partially identified models, in which the object of economic and statistical interest is a set rather than a point. The characterization of this set and the development of consistent estimators and inference procedures for it with desirable properties are the main goals of partial identification analysis. This review introduces the fundamental tools of the theory of random sets, which brings together elements of topology, convex geometry, and probability theory to develop a coherent mathematical framework to analyze random elements whose realizations are sets. It then elucidates how these tools have been fruitfully applied in econometrics to reach the goals of partial identification analysis.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Seizure freedom in patients suffering from pharmacoresistant epilepsies is still not achieved in 20–30% of all cases. Hence, current therapies need to be improved, based on a more complete understanding of ictogenesis. In this respect, the analysis of functional networks derived from intracranial electroencephalographic (iEEG) data has recently become a standard tool. Functional networks however are purely descriptive models and thus are conceptually unable to predict fundamental features of iEEG time-series, e.g., in the context of therapeutical brain stimulation. In this paper we present some first steps towards overcoming the limitations of functional network analysis, by showing that its results are implied by a simple predictive model of time-sliced iEEG time-series. More specifically, we learn distinct graphical models (so called Chow–Liu (CL) trees) as models for the spatial dependencies between iEEG signals. Bayesian inference is then applied to the CL trees, allowing for an analytic derivation/prediction of functional networks, based on thresholding of the absolute value Pearson correlation coefficient (CC) matrix. Using various measures, the thus obtained networks are then compared to those which were derived in the classical way from the empirical CC-matrix. In the high threshold limit we find (a) an excellent agreement between the two networks and (b) key features of periictal networks as they have previously been reported in the literature. Apart from functional networks, both matrices are also compared element-wise, showing that the CL approach leads to a sparse representation, by setting small correlations to values close to zero while preserving the larger ones. Overall, this paper shows the validity of CL-trees as simple, spatially predictive models for periictal iEEG data. Moreover, we suggest straightforward generalizations of the CL-approach for modeling also the temporal features of iEEG signals.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Effects of conspecific neighbours on survival and growth of trees have been found to be related to species abundance. Both positive and negative relationships may explain observed abundance patterns. Surprisingly, it is rarely tested whether such relationships could be biased or even spurious due to transforming neighbourhood variables or influences of spatial aggregation, distance decay of neighbour effects and standardization of effect sizes. To investigate potential biases, communities of 20 identical species were simulated with log-series abundances but without species-specific interactions. No relationship of conspecific neighbour effects on survival or growth with species abundance was expected. Survival and growth of individuals was simulated in random and aggregated spatial patterns using no, linear, or squared distance decay of neighbour effects. Regression coefficients of statistical neighbourhood models were unbiased and unrelated to species abundance. However, variation in the number of conspecific neighbours was positively or negatively related to species abundance depending on transformations of neighbourhood variables, spatial pattern and distance decay. Consequently, effect sizes and standardized regression coefficients, often used in model fitting across large numbers of species, were also positively or negatively related to species abundance depending on transformation of neighbourhood variables, spatial pattern and distance decay. Tests using randomized tree positions and identities provide the best benchmarks by which to critically evaluate relationships of effect sizes or standardized regression coefficients with tree species abundance. This will better guard against potential misinterpretations.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

We estimate the momentum diffusion coefficient of a heavy quark within a pure SU(3) plasma at a temperature of about 1.5Tc. Large-scale Monte Carlo simulations on a series of lattices extending up to 1923×48 permit us to carry out a continuum extrapolation of the so-called color-electric imaginary-time correlator. The extrapolated correlator is analyzed with the help of theoretically motivated models for the corresponding spectral function. Evidence for a nonzero transport coefficient is found and, incorporating systematic uncertainties reflecting model assumptions, we obtain κ=(1.8–3.4)T3. This implies that the “drag coefficient,” characterizing the time scale at which heavy quarks adjust to hydrodynamic flow, is η−1D=(1.8–3.4)(Tc/T)2(M/1.5  GeV)  fm/c, where M is the heavy quark kinetic mass. The results apply to bottom and, with somewhat larger systematic uncertainties, to charm quarks.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

OBJECTIVES To test the applicability, accuracy, precision, and reproducibility of various 3D superimposition techniques for radiographic data, transformed to triangulated surface data. METHODS Five superimposition techniques (3P: three-point registration; AC: anterior cranial base; AC + F: anterior cranial base + foramen magnum; BZ: both zygomatic arches; 1Z: one zygomatic arch) were tested using eight pairs of pre-existing CT data (pre- and post-treatment). These were obtained from non-growing orthodontic patients treated with rapid maxillary expansion. All datasets were superimposed by three operators independently, who repeated the whole procedure one month later. Accuracy was assessed by the distance (D) between superimposed datasets on three form-stable anatomical areas, located on the anterior cranial base and the foramen magnum. Precision and reproducibility were assessed using the distances between models at four specific landmarks. Non parametric multivariate models and Bland-Altman difference plots were used for analyses. RESULTS There was no difference among operators or between time points on the accuracy of each superimposition technique (p>0.05). The AC + F technique was the most accurate (D<0.17 mm), as expected, followed by AC and BZ superimpositions that presented similar level of accuracy (D<0.5 mm). 3P and 1Z were the least accurate superimpositions (0.790.05), the detected structural changes differed significantly between different techniques (p<0.05). Bland-Altman difference plots showed that BZ superimposition was comparable to AC, though it presented slightly higher random error. CONCLUSIONS Superimposition of 3D datasets using surface models created from voxel data can provide accurate, precise, and reproducible results, offering also high efficiency and increased post-processing capabilities. 
In the present study population, the BZ superimposition was comparable to AC, with the added advantage of being applicable to scans with a smaller field of view.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

The FANOVA (or “Sobol’-Hoeffding”) decomposition of multivariate functions has been used for high-dimensional model representation and global sensitivity analysis. When the objective function f has no simple analytic form and is costly to evaluate, computing FANOVA terms may be unaffordable due to numerical integration costs. Several approximate approaches relying on Gaussian random field (GRF) models have been proposed to alleviate these costs, where f is substituted by a (kriging) predictor or by conditional simulations. Here we focus on FANOVA decompositions of GRF sample paths, and we notably introduce an associated kernel decomposition into 4 d 4d terms called KANOVA. An interpretation in terms of tensor product projections is obtained, and it is shown that projected kernels control both the sparsity of GRF sample paths and the dependence structure between FANOVA effects. Applications on simulated data show the relevance of the approach for designing new classes of covariance kernels dedicated to high-dimensional kriging.