939 results for Non-gaussian Random Functions
Abstract:
Effectively assessing subtle hepatic metabolic functions by novel non-invasive tests might be of clinical utility in scoring NAFLD (non-alcoholic fatty liver disease) and in identifying altered metabolic pathways. The present study was conducted on 39 (20 lean and 19 obese) hypertransaminasemic patients with histologically proven NAFLD {ranging from simple steatosis to severe steatohepatitis [NASH (non-alcoholic steatohepatitis)] and fibrosis} and 28 (20 lean and eight overweight) healthy controls, who underwent stable isotope breath testing ([¹³C]methacetin and [¹³C]ketoisocaproate) for microsomal and mitochondrial liver function in relation to histology, serum hyaluronate, as a marker of liver fibrosis, and body size. Compared with healthy subjects and patients with simple steatosis, NASH patients had enhanced methacetin demethylation (P=0.001), but decreased (P=0.001) and delayed (P=0.006) ketoisocaproate decarboxylation, which was inversely related (P=0.001) to the degree of histological fibrosis (r=-0.701), serum hyaluronate (r=-0.644) and body size (r=-0.485). Ketoisocaproate decarboxylation was impaired further in obese patients with NASH, but not in patients with simple steatosis or in overweight controls. NASH and insulin resistance were independently associated with an abnormal ketoisocaproate breath test (P=0.001). The cut-off value of 9.6% cumulative expired ¹³CO₂ for ketoisocaproate at 60 min was associated with the highest prediction (positive predictive value, 0.90; negative predictive value, 0.73) for NASH, yielding an overall sensitivity of 68% and specificity of 94%. In conclusion, both microsomal and mitochondrial functions are disturbed in NASH. Therefore, stable isotope breath tests may usefully contribute to a better and non-invasive characterization of patients with NAFLD.
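For readers relating the reported cutoff statistics, predictive values follow from sensitivity, specificity and prevalence via Bayes' theorem. A minimal Python sketch; the 50% prevalence is an illustrative assumption, not a figure from the study:

def predictive_values(sensitivity, specificity, prevalence):
    """Compute PPV and NPV from test characteristics via Bayes' theorem."""
    tp = sensitivity * prevalence                # true positives per unit population
    fp = (1 - specificity) * (1 - prevalence)    # false positives
    tn = specificity * (1 - prevalence)          # true negatives
    fn = (1 - sensitivity) * prevalence          # false negatives
    return tp / (tp + fp), tn / (tn + fn)

# Reported operating point of the 9.6% cumulative 13CO2 cutoff at 60 min;
# the assumed prevalence is hypothetical.
ppv, npv = predictive_values(sensitivity=0.68, specificity=0.94, prevalence=0.5)
print(f"PPV={ppv:.2f}, NPV={npv:.2f}")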
Abstract:
The aim of this study is to develop a new simple method for analyzing one-dimensional transcranial magnetic stimulation (TMS) mapping studies in humans. Motor evoked potentials (MEP) were recorded from the abductor pollicis brevis (APB) muscle during stimulation at nine different positions on the scalp along a line passing through the APB hot spot and the vertex. Non-linear curve fitting according to the Levenberg-Marquardt algorithm was performed on the averaged amplitude values obtained at all points to find the best-fitting symmetrical and asymmetrical peak functions. Several peak functions could be fitted to the experimental data. Across all subjects, a symmetric, bell-shaped curve based on the complementary error function (erfc) gave the best results. This function is characterized by three parameters giving its amplitude, position, and width. None of the mathematical functions tested with fewer or more than three parameters fitted better. The amplitude and position parameters of the erfc were highly correlated with the amplitude at the hot spot and with the location of the center of gravity of the TMS curve. In conclusion, non-linear curve fitting is an accurate method for the mathematical characterization of one-dimensional TMS curves. This is the first method that provides information on amplitude, position, and width simultaneously.
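A minimal sketch of the fitting step described above, using one plausible three-parameter erfc peak (the paper's exact parameterization may differ); scipy's curve_fit with method='lm' implements the Levenberg-Marquardt algorithm:

import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def erfc_peak(x, amplitude, position, width):
    """Symmetric, bell-shaped peak built from the complementary error function."""
    return amplitude * erfc(np.abs(x - position) / width)

# Illustrative data: MEP amplitudes (mV) at nine scalp positions (cm from the hot spot).
x = np.linspace(-4, 4, 9)
y = erfc_peak(x, 1.5, 0.5, 2.0) + 0.05 * np.random.default_rng(0).normal(size=x.size)

# method='lm' selects the Levenberg-Marquardt algorithm used in the study.
popt, pcov = curve_fit(erfc_peak, x, y, p0=[1.0, 0.0, 1.0], method='lm')
print("amplitude, position, width =", popt)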
Abstract:
Generalized linear mixed models with semiparametric random effects are useful in a wide variety of Bayesian applications. When the random effects arise from a mixture of Dirichlet process (MDP) model, normal base measures and Gibbs sampling procedures based on the Pólya urn scheme are often used to simulate posterior draws. These algorithms are applicable in the conjugate case when (for a normal base measure) the likelihood is normal. In the non-conjugate case, the algorithms proposed by MacEachern and Müller (1998) and Neal (2000) are often applied to generate posterior samples. Some common problems associated with simulation algorithms for non-conjugate MDP models include convergence and mixing difficulties. This paper proposes an algorithm based on the Pólya urn scheme that extends the Gibbs sampling algorithms to non-conjugate models with normal base measures and exponential family likelihoods. The algorithm proceeds by making Laplace approximations to the likelihood function, thereby reducing the procedure to that of conjugate normal MDP models. To ensure the validity of the stationary distribution in the non-conjugate case, the proposals are accepted or rejected by a Metropolis-Hastings step. In the special case where the data are normally distributed, the algorithm is identical to the Gibbs sampler.
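A minimal sketch of the Laplace step at the heart of the proposed algorithm: approximate an exponential-family log-likelihood by a Gaussian at its mode, reducing the update to the conjugate normal case. The Poisson likelihood and all names below are illustrative:

import numpy as np
from scipy.optimize import minimize_scalar

def laplace_approx(neg_loglik):
    """Return (mode, variance) of a Gaussian approximation to exp(-neg_loglik)."""
    res = minimize_scalar(neg_loglik)
    mode = res.x
    h = 1e-5  # central-difference estimate of the curvature at the mode
    curv = (neg_loglik(mode + h) - 2 * neg_loglik(mode) + neg_loglik(mode - h)) / h**2
    return mode, 1.0 / curv

# Illustrative non-conjugate case: y ~ Poisson(exp(theta)) with a normal base measure.
y = 4
neg_loglik = lambda theta: np.exp(theta) - y * theta
mode, var = laplace_approx(neg_loglik)
print(f"Gaussian approximation to the likelihood: N({mode:.3f}, {var:.3f})")
# In the full sampler this approximation drives the Polya urn proposals,
# and a Metropolis-Hastings step corrects for the approximation error.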
Abstract:
BACKGROUND: The arginine-vasopressin 1a receptor has been identified as a key determinant of social behaviour in Microtus voles, humans and other mammals. Nevertheless, the genetic bases of complex phenotypic traits like differences in social and mating behaviour among species and individuals remain largely unknown. Contrary to previous studies focusing on differences in the promoter region of the gene, we investigate here the level of functional variation in the coding region (exon 1) of this locus. RESULTS: We detected high sequence diversity between higher mammalian taxa as well as between species of the genus Microtus. This includes length variation and radical amino acid changes, as well as the presence of distinct protein variants within individuals. Additionally, negative selection prevails on most parts of the first exon of the arginine-vasopressin receptor 1a (avpr1a) gene, but it contains regions with higher rates of change that harbour positively selected sites. Synonymous and non-synonymous substitution rates in the avpr1a gene are not exceptional compared with other genes, but they exceed those found in related hormone receptors with similar functions. DISCUSSION: These results stress the importance of considering variation in the coding sequence of avpr1a with regard to associations with life history traits (e.g. social behaviour, mating system, habitat requirements) of voles, other mammals and humans in particular.
Abstract:
Focusing optical beams on a target through random propagation media is very important in many applications such as free-space optical communications and laser weapons. Random media effects such as beam spread and scintillation can severely degrade an optical system's performance. Compensation schemes are needed in these applications to overcome these random media effects. In this research, we investigated the optimal beams for two different optimization criteria: one is to maximize the concentrated received intensity and the other is to minimize the scintillation index at the target plane. In the study of the optimal beam to maximize the weighted integrated intensity, we derive a similarity relationship between the pupil-plane phase screen and the extended Huygens-Fresnel model, and demonstrate the limited utility of maximizing the average integrated intensity. In the study of the optimal beam to minimize the scintillation index, we derive the first- and second-order moments for the integrated intensity of multiple coherent modes. Hermite-Gaussian and Laguerre-Gaussian modes are used as the coherent modes to synthesize an optimal partially coherent beam. The optimal beams demonstrate a marked reduction of the scintillation index and prove to be insensitive to the aperture-averaging effect.
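The scintillation index being minimized is the normalized intensity variance. A toy Monte Carlo sketch (independent complex Gaussian gains stand in for the actual turbulent propagation model) showing how an incoherent combination of coherent modes reduces it:

import numpy as np

rng = np.random.default_rng(1)

def scintillation_index(intensity_samples):
    """sigma_I^2 = <I^2> / <I>^2 - 1 (normalized variance of intensity)."""
    m1 = intensity_samples.mean()
    m2 = (intensity_samples**2).mean()
    return m2 / m1**2 - 1.0

# Toy model: each coherent mode arrives with a random complex gain (turbulence);
# the modes combine incoherently, i.e. their intensities add.
n_trials, n_modes = 100_000, 4
gains = (rng.normal(size=(n_trials, n_modes))
         + 1j * rng.normal(size=(n_trials, n_modes))) / np.sqrt(2)
single_mode = np.abs(gains[:, 0])**2          # one fully speckled mode: sigma_I^2 -> 1
multi_mode = (np.abs(gains)**2).mean(axis=1)  # incoherent sum of 4 modes: sigma_I^2 -> 1/4

print(scintillation_index(single_mode))  # ~1.0
print(scintillation_index(multi_mode))   # ~0.25, the mode-averaging reduction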
Abstract:
This dissertation presents a detailed study of quantum correlations of light in macroscopic environments. We have explored quantum correlations of single photons, weak coherent states, and polarization-correlated/polarization-entangled photons in macroscopic environments. These included macroscopic mirrors, macroscopic photon numbers, spatially separated observers, noisy photon sources and propagation media with loss or disturbances. We proposed a measurement scheme for observing quantum correlations and entanglement in the spatial properties of two macroscopic mirrors using single-photon spatial compass states. We explored the phase-space distribution features of spatial compass states, such as the chessboard pattern, by using the Wigner function. The displacement and tilt correlations of the two mirrors were manifested through the propensities of the compass states. This technique can be used to extract Einstein-Podolsky-Rosen (EPR) correlations of the two mirrors. We then formulated the discrete-like property of the propensity Pb(m,n), which can be used to explore environmentally perturbed quantum jumps of the EPR correlations in phase space. With single-photon spatial compass states, the variances in position and momentum are much smaller than the standard quantum limit of a Gaussian TEM00 beam. We observed intrinsic quantum correlations of weak coherent states between two parties through balanced homodyne detection. Our scheme can be used as a supplement to the decoy-state BB84 protocol and the differential phase-shift QKD protocol. We prepared four types of bipartite correlations ±cos²(θ₁₂) shared between two parties. We also demonstrated bit correlations between two parties separated by 10 km of optical fiber. The bit information is protected by the large quantum phase fluctuation of weak coherent states, adding another physical layer of security to these protocols for quantum key distribution. Using 10 m of highly nonlinear fiber (HNLF) at 77 K, we observed a coincidence-to-accidental ratio of 130±5 for the correlated photon-pair and a two-photon interference visibility >98% for the entangled photon-pair. We also verified the non-local behavior of the polarization-entangled photon pair by violating the Clauser-Horne-Shimony-Holt (CHSH) Bell inequality by more than 12 standard deviations. With the HNLF at 300 K (77 K), a photon-pair production rate about a factor of 3 (2) higher than in a 300 m dispersion-shifted fiber is observed. We then studied quantum correlation and interference of photon-pairs, with one photon of the photon-pair experiencing multiple scattering in a random medium. We observed that depolarization noise photons in multiple scattering degrade the purity of the photon-pair, and that Raman noise photons in a photon-pair source contribute to the depolarization effect. We found that the quantum correlation of a polarization-entangled photon-pair is better preserved than that of a polarization-correlated photon-pair when one photon of the pair is scattered through a random medium. Our findings show that high-purity polarization-entangled photon-pairs are the better candidate for long-distance quantum key distribution.
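For reference, a small sketch of the CHSH quantity violated above, using the standard quantum prediction E(θ1, θ2) = cos 2(θ1 − θ2) for polarization-entangled pairs (the measurement settings are the textbook optimum, not necessarily those of the experiment):

import numpy as np

def E(theta1, theta2):
    """Quantum polarization correlation for an entangled pair (angles in radians)."""
    return np.cos(2 * (theta1 - theta2))

# Standard CHSH settings for polarization (degrees): a=0, a'=45, b=22.5, b'=67.5.
a, ap, b, bp = np.deg2rad([0.0, 45.0, 22.5, 67.5])
S = abs(E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp))
print(S)  # 2*sqrt(2) ~ 2.828 > 2, violating the classical CHSH bound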
Abstract:
Despite widespread use of species-area relationships (SARs), dispute remains over the most representative SAR model. Using data of small-scale SARs of Estonian dry grassland communities, we address three questions: (1) Which model describes these SARs best when known artifacts are excluded? (2) How do deviating sampling procedures (marginal instead of central position of the smaller plots in relation to the largest plot; single values instead of average values; randomly located subplots instead of nested subplots) influence the properties of the SARs? (3) Are those effects likely to bias the selection of the best model? Our general dataset consisted of 16 series of nested plots (1 cm²–100 m², any-part system), each of which comprised five series of subplots located in the four corners and the centre of the 100-m² plot. Data for the three pairs of compared sampling designs were generated from this dataset by subsampling. Five function types (power, quadratic power, logarithmic, Michaelis-Menten, Lomolino) were fitted with non-linear regression. In some of the communities, we found extremely high species densities (including bryophytes and lichens), namely up to eight species in 1 cm² and up to 140 species in 100 m², which appear to be the highest documented values on these scales. For SARs constructed from nested-plot average-value data, the regular power function generally was the best model, closely followed by the quadratic power function, while the logarithmic and Michaelis-Menten functions performed poorly throughout. However, the relative fit of the latter two models increased significantly relative to the respective best model when the single-value or random-sampling method was applied, although the power function normally remained far superior. These results confirm the hypothesis that both single-value and random-sampling approaches cause artifacts by increasing stochasticity in the data, which can lead to the selection of inappropriate models.
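A minimal sketch of the model-comparison step, fitting two of the five candidate functions by non-linear regression on illustrative nested-plot data (the study compared all five candidates with formal criteria):

import numpy as np
from scipy.optimize import curve_fit

def power(A, c, z):
    """Regular power function S = c * A^z."""
    return c * A**z

def michaelis_menten(A, Smax, K):
    """Michaelis-Menten saturation curve S = Smax * A / (K + A)."""
    return Smax * A / (K + A)

# Illustrative nested-plot data: areas in m^2 (1 cm^2 ... 100 m^2), mean species counts.
area = np.array([1e-4, 1e-3, 1e-2, 1e-1, 1.0, 10.0, 100.0])
species = np.array([6.0, 11.0, 20.0, 35.0, 60.0, 95.0, 140.0])

for model, p0 in [(power, [30.0, 0.25]), (michaelis_menten, [150.0, 1.0])]:
    popt, _ = curve_fit(model, area, species, p0=p0, maxfev=10_000)
    rss = np.sum((species - model(area, *popt))**2)
    print(model.__name__, popt, f"RSS={rss:.1f}")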
Abstract:
OBJECTIVE: We examined survival and prognostic factors of patients who developed HIV-associated non-Hodgkin lymphoma (NHL) in the era of combination antiretroviral therapy (cART). DESIGN AND SETTING: Multicohort collaboration of 33 European cohorts. METHODS: We included all cART-naive patients enrolled in cohorts participating in the Collaboration of Observational HIV Epidemiological Research Europe (COHERE) who were aged 16 years or older, started cART at some point after 1 January 1998 and developed NHL after 1 January 1998. Patients had to have a CD4 cell count after 1 January 1998 and one at diagnosis of the NHL. Survival and prognostic factors were estimated using Weibull models, with random effects accounting for heterogeneity between cohorts. RESULTS: Of 67 659 patients who were followed up during 304 940 person-years, 1176 patients were diagnosed with NHL. Eight hundred and forty-seven patients (72%) from 22 cohorts met inclusion criteria. Survival at 1 year was 66% [95% confidence interval (CI) 63-70%] for systemic NHL (n = 763) and 54% (95% CI: 43-65%) for primary brain lymphoma (n = 84). Risk factors for death included low nadir CD4 cell counts and a history of injection drug use. Patients developing NHL on cART had an increased risk of death compared with patients who were cART naive at diagnosis. CONCLUSION: In the era of cART two-thirds of patients diagnosed with HIV-related systemic NHL survive for longer than 1 year after diagnosis. Survival is poorer in patients diagnosed with primary brain lymphoma. More advanced immunodeficiency is the dominant prognostic factor for mortality in patients with HIV-related NHL.
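A minimal sketch of the Weibull survival model underlying these estimates, fitted by maximum likelihood on synthetic right-censored data; the study's models additionally included random effects for between-cohort heterogeneity:

import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, t, event):
    """Weibull log-likelihood with right censoring: S(t) = exp(-(t/lam)^k)."""
    log_k, log_lam = params
    k, lam = np.exp(log_k), np.exp(log_lam)
    z = (t / lam)**k
    # density contribution for observed deaths, survival contribution for censored
    ll = event * (np.log(k) - np.log(lam) + (k - 1) * np.log(t / lam)) - z
    return -ll.sum()

rng = np.random.default_rng(2)
t_true = 3.0 * rng.weibull(0.8, size=500)   # synthetic survival times (years)
censor = rng.uniform(0.5, 5.0, size=500)    # administrative censoring times
t = np.minimum(t_true, censor)
event = (t_true <= censor).astype(float)

res = minimize(neg_loglik, x0=[0.0, 0.0], args=(t, event))
k, lam = np.exp(res.x)
print(f"shape={k:.2f}, scale={lam:.2f}, S(1yr)={np.exp(-(1/lam)**k):.2f}")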
Abstract:
High-resolution and highly precise age models for recent lake sediments (last 100–150 years) are essential for quantitative paleoclimate research. These are particularly important for sedimentological and geochemical proxies, where transfer functions cannot be established and calibration must be based upon the relation of sedimentary records to instrumental data. High-precision dating for the calibration period is most critical, as it directly determines the quality of the calibration statistics. Here, as an example, we compare radionuclide age models obtained on two high-elevation glacial lakes in the Central Chilean Andes (Laguna Negra: 33°38′S/70°08′W, 2,680 m a.s.l. and Laguna El Ocho: 34°02′S/70°19′W, 3,250 m a.s.l.). We show the different numerical models that produce accurate age-depth chronologies based on ²¹⁰Pb profiles, and we explain how to obtain reduced age-error bars at the bottom part of the profiles, i.e., typically around the end of the 19th century. In order to constrain the age models, we propose a method with the following steps: (i) sampling at irregularly spaced intervals for ²²⁶Ra, ²¹⁰Pb and ¹³⁷Cs, depending on the stratigraphy and microfacies; (ii) a systematic comparison of numerical models for the calculation of ²¹⁰Pb-based age models: constant flux constant sedimentation (CFCS), constant initial concentration (CIC), constant rate of supply (CRS) and sediment isotope tomography (SIT); (iii) numerical constraining of the CRS and SIT models with the ¹³⁷Cs chronomarker of AD 1964; and (iv) step-wise cross-validation with independent diagnostic environmental stratigraphic markers of known age (e.g., volcanic ash layers, historical floods and earthquakes). In both examples, we also use airborne pollutants such as spheroidal carbonaceous particles (reflecting the history of fossil fuel emissions), excess atmospheric Cu deposition (reflecting the production history of a large local Cu mine), and turbidites related to historical earthquakes. Our results show that the SIT model constrained with the ¹³⁷Cs AD 1964 peak performs best over the entire chronological profile (last 100–150 years) and yields the smallest standard deviations for the sediment ages. Such precision is critical for the calibration statistics and, ultimately, for the quality of the quantitative paleoclimate reconstruction. The systematic comparison of CRS and SIT models also helps to validate the robustness of the chronologies in different sections of the profile. Although surprisingly poorly known and under-explored in paleolimnological research, the SIT model has great potential for paleoclimatological reconstructions based on lake sediments.
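A minimal sketch of the constant rate of supply (CRS) model named above, which dates each depth from the unsupported ²¹⁰Pb inventory remaining below it; the core data are illustrative:

import numpy as np

LAMBDA_PB210 = np.log(2) / 22.3  # 210Pb decay constant (1/yr), half-life 22.3 yr

def crs_ages(unsupported_pb210, dry_mass):
    """CRS model: t(z) = (1/lambda) * ln(A0 / A(z)), where A(z) is the
    cumulative unsupported 210Pb inventory below depth z and A0 the total."""
    inventory = unsupported_pb210 * dry_mass      # activity per layer (Bq/m^2)
    below = np.cumsum(inventory[::-1])[::-1]      # A(z): inventory below each depth
    total = below[0]                              # A0: total inventory
    return np.log(total / below) / LAMBDA_PB210   # age (years) at each depth

# Illustrative core: 10 slices with exponentially decaying unsupported 210Pb.
depth = np.arange(1, 11)                   # cm
activity = 120.0 * np.exp(-0.35 * depth)   # Bq/kg, unsupported (total minus 226Ra-supported)
mass = np.full(10, 2.0)                    # dry mass per slice (kg/m^2)
print(np.round(crs_ages(activity, mass), 1))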
Abstract:
Fossil pollen data from stratigraphic cores are irregularly spaced in time due to non-linear age-depth relations. Moreover, their marginal distributions may vary over time. We address these features in a nonparametric regression model with errors that are monotone transformations of a latent continuous-time Gaussian process Z(T). Although Z(T) is unobserved, due to monotonicity and under suitable regularity conditions, it can be recovered, facilitating further computations such as estimation of the long-memory parameter and the Hermite coefficients. The estimation of Z(T) itself involves estimation of the marginal distribution function of the regression errors. These issues are considered in proposing a plug-in algorithm for optimal bandwidth selection and construction of confidence bands for the trend function. Some high-resolution time series of pollen records from Lago di Origlio in Switzerland, which go back ca. 20,000 years, are used to illustrate the methods.
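A minimal sketch of a kernel trend estimate on irregularly spaced ages; the bandwidth shown is a generic rule of thumb, not the paper's long-memory-adjusted plug-in selector, and the data are synthetic:

import numpy as np

def nadaraya_watson(t_grid, t_obs, y_obs, bandwidth):
    """Gaussian-kernel regression estimate of the trend at irregular time points."""
    w = np.exp(-0.5 * ((t_grid[:, None] - t_obs[None, :]) / bandwidth)**2)
    return (w * y_obs).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 20_000, 300))                # irregular ages (cal yr BP)
y = np.sin(t / 3_000) + 0.3 * rng.normal(size=t.size)   # pollen proxy + noise

h = 1.06 * t.std() * t.size**(-0.2)   # illustrative rule-of-thumb bandwidth
grid = np.linspace(0, 20_000, 200)
trend = nadaraya_watson(grid, t, y, h)
print(h, trend[:5])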
Abstract:
We describe several simulation algorithms that yield random probability distributions with given values of risk measures. In the case of vanilla risk measures, the algorithms involve combining and transforming random cumulative distribution functions or random Lorenz curves obtained by simulating rather general random probability distributions on the unit interval. A new algorithm based on the simulation of a weighted barycentres array is suggested to generate random probability distributions with a given value of the spectral risk measure.
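A simplified sketch of the transformation idea for a spectral risk measure: simulate a random discrete distribution, then use cash additivity to shift it so the measure hits a prescribed value exactly (the paper's barycentre-based algorithm is more general):

import numpy as np

rng = np.random.default_rng(4)

def spectral_risk(quantiles, weights_fn):
    """Discrete spectral risk measure: quantiles weighted by phi(u)."""
    n = quantiles.size
    u = (np.arange(n) + 0.5) / n
    phi = weights_fn(u)
    return (np.sort(quantiles) * phi).sum() / phi.sum()

# Expected shortfall at level 95% as the spectral weight function.
es_phi = lambda u, alpha=0.95: (u >= alpha) / (1 - alpha)

def random_distribution_with_risk(target, n_atoms=1000):
    """Simulate a random discrete distribution, then shift it so the
    spectral risk measure equals the target (cash additivity)."""
    atoms = rng.normal(size=n_atoms) * rng.uniform(0.5, 2.0)  # random dispersion
    return atoms + (target - spectral_risk(atoms, es_phi))

x = random_distribution_with_risk(target=3.0)
print(spectral_risk(x, es_phi))  # ~3.0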
Abstract:
We focus on kernels incorporating different kinds of prior knowledge on functions to be approximated by Kriging. A recent result on random fields with paths invariant under a group action is generalised to combinations of composition operators, and a characterisation of kernels leading to random fields with additive paths is obtained as a corollary. A discussion follows on some implications for the design of experiments, and it is shown in the case of additive kernels that the so-called class of "axis designs" outperforms Latin hypercubes in terms of the IMSE criterion.
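A minimal sketch of the additive-paths characterisation: a centred Gaussian random field with kernel k1(x1, y1) + k2(x2, y2) has additive sample paths, which the vanishing interaction term of a sampled surface makes visible. Kernel choice and grid are illustrative:

import numpy as np

rng = np.random.default_rng(5)

def rbf(a, b, length=0.3):
    """Squared-exponential kernel on one input dimension."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length)**2)

# Additive kernel on [0,1]^2: k((x1,x2),(y1,y2)) = k1(x1,y1) + k2(x2,y2).
n = 15
g = np.linspace(0, 1, n)
X1, X2 = np.meshgrid(g, g, indexing="ij")
x1, x2 = X1.ravel(), X2.ravel()
K = rbf(x1, x1) + rbf(x2, x2)

# A draw from the centred Gaussian field with this kernel has additive paths:
# f(x1, x2) = f1(x1) + f2(x2) (up to the jitter added for numerical stability).
f = np.linalg.cholesky(K + 1e-8 * np.eye(n * n)) @ rng.normal(size=n * n)
F = f.reshape(n, n)

# Check additivity: the interaction term of an additive surface vanishes.
interaction = F - F.mean(0)[None, :] - F.mean(1)[:, None] + F.mean()
print(np.abs(interaction).max())  # ~0 (numerical noise)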
Abstract:
We prove large deviation results for sums of heavy-tailed random elements in rather general convex cones, i.e., semigroups equipped with a rescaling operation by positive real numbers. In contrast to previous results for the cone of convex sets, our technique does not use the embedding of cones in linear spaces. Examples include the cone of convex sets with the Minkowski addition, the positive half-line with the maximum operation, and the family of square integrable functions with arithmetic addition and argument rescaling.
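As a concrete instance, take the positive half-line with the maximum operation; the standard heavy-tailed computation (a textbook special case, not the paper's general result) reads:

\[
S_n = X_1 \vee \dots \vee X_n, \qquad
\Pr(S_n > x) = 1 - F(x)^n \sim n\,\bar F(x), \qquad x \to \infty,
\]

where \(\bar F(x) = x^{-\alpha} L(x)\) is regularly varying and the rescaling by \(t > 0\) acts as \(x \mapsto t x\). The tail is governed by the single largest element, mirroring the "one big jump" principle behind heavy-tailed large deviations.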