26 results for Palm Kernel Meal
Abstract:
Background Nowadays, combining different sources of information to improve the available biological knowledge is a challenge in bioinformatics. Kernel-based methods are among the most powerful approaches for integrating heterogeneous data types. Kernel-based data integration consists of two basic steps: first, the right kernel is chosen for each data set; second, the kernels from the different data sources are combined to give a complete representation of the available data for a given statistical task. Results We analyze the integration of data from several sources of information using kernel PCA, from the point of view of reducing dimensionality. Moreover, we improve the interpretability of kernel PCA by adding to the plot the representation of the input variables that belong to any of the datasets. In particular, for each input variable or linear combination of input variables, we can represent the direction of maximum growth locally, which allows us to identify those samples with higher/lower values of the variables analyzed. Conclusions The integration of different datasets and the simultaneous representation of samples and variables together give us a better understanding of the biological knowledge.
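A minimal sketch of the two-step recipe described above, using hypothetical data sources and scikit-learn's `KernelPCA` with a precomputed combined kernel. The unweighted kernel sum is one simple combination rule, assumed here for illustration; the paper may combine kernels differently.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
n = 50
# two hypothetical data sources measured on the same n samples
expr = rng.normal(size=(n, 20))   # e.g. expression-like features
clin = rng.normal(size=(n, 5))    # e.g. clinical-like features

# step 1: choose a kernel for each data set
K1 = rbf_kernel(expr, gamma=1.0 / expr.shape[1])
K2 = rbf_kernel(clin, gamma=1.0 / clin.shape[1])

# step 2: combine the kernels into one representation (unweighted sum)
K = K1 + K2

# kernel PCA on the combined, precomputed kernel
kpca = KernelPCA(n_components=2, kernel="precomputed")
scores = kpca.fit_transform(K)    # one 2-D point per sample
```

Plotting the two columns of `scores` gives the joint low-dimensional view of the samples; the representation of input-variable directions discussed in the abstract would be overlaid on that plot.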
Abstract:
Let $Q$ be a suitable real function on $C$. An $n$-Fekete set corresponding to $Q$ is a subset $\{Z_{n1},\dots,Z_{nn}\}$ of $C$ which maximizes the expression $\prod_{1\le i<j\le n}\lvert Z_{ni}-Z_{nj}\rvert^{2}\,e^{-n\left(Q(Z_{ni})+Q(Z_{nj})\right)}$.
Abstract:
We propose a new kernel estimator of the cumulative distribution function based on a transformation and on bias-reducing techniques. We derive the optimal bandwidth that minimises the asymptotic integrated mean squared error. The simulation results show that our proposed kernel estimator improves on alternative approaches when the variable has a heavy-tailed extreme value distribution and the sample size is small.
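The baseline (untransformed) kernel CDF estimator averages Gaussian kernel CDFs centred at the observations. The sketch below shows only that baseline, with an arbitrary bandwidth rather than the paper's optimal, bias-reduced choice:

```python
import numpy as np
from scipy.stats import norm

def kernel_cdf(x, data, h):
    """Smoothed empirical CDF: the average of Gaussian kernel CDFs
    centred at each observation, with bandwidth h."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return norm.cdf((x[:, None] - data[None, :]) / h).mean(axis=1)

rng = np.random.default_rng(1)
sample = rng.normal(size=200)
est = kernel_cdf(0.0, sample, h=0.3)[0]  # estimate of F(0); true value is 0.5
```

Unlike the step-function empirical CDF, this estimate is smooth in `x`, which is what makes bandwidth choice matter.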
Abstract:
BACKGROUND AND AIMS: Liver stiffness is increasingly used in the non-invasive evaluation of chronic liver diseases. Liver stiffness correlates with hepatic venous pressure gradient (HVPG) in patients with cirrhosis and holds prognostic value in this population. Hence, accuracy in its measurement is needed. Several factors independent of fibrosis influence liver stiffness, but there is insufficient information on whether meal ingestion modifies liver stiffness in cirrhosis. We investigated the changes in liver stiffness occurring after the ingestion of a liquid standard test meal in this population. METHODS: In 19 patients with cirrhosis and esophageal varices (9 alcoholic, 9 HCV-related, 1 NASH; Child score 6.9±1.8), liver stiffness (transient elastography), portal blood flow (PBF) and hepatic artery blood flow (HABF) (Doppler ultrasound) were measured before and 30 minutes after receiving a standard mixed liquid meal. In 10 of them, HVPG changes were also measured. RESULTS: Post-prandial hyperemia was accompanied by a marked increase in liver stiffness (+27±33%; p<0.0001). Changes in liver stiffness did not correlate with PBF changes, but directly correlated with HABF changes (r = 0.658; p = 0.002). After the meal, those patients showing a decrease in HABF (n = 13) had a less marked increase of liver stiffness as compared to patients in whom HABF increased (n = 6; +12±21% vs. +62±29%, p<0.0001). As expected, post-prandial hyperemia was associated with an increase in HVPG (n = 10; +26±13%, p = 0.003), but changes in liver stiffness did not correlate with HVPG changes. CONCLUSIONS: Liver stiffness increases markedly after a liquid test meal in patients with cirrhosis, suggesting that its measurement should be performed in standardized fasting conditions. The hepatic artery buffer response appears to be an important factor modulating postprandial changes of liver stiffness. The post-prandial increase in HVPG cannot be predicted by changes in liver stiffness.
Abstract:
This paper shows how a high-level matrix programming language may be used to perform Monte Carlo simulation, bootstrapping, estimation by maximum likelihood and GMM, and kernel regression in parallel on symmetric multiprocessor computers or clusters of workstations. Parallelization is implemented so that an investigator may use the programs without any knowledge of parallel programming. A bootable CD that allows rapid creation of a cluster for parallel computing is introduced. Examples show that parallelization can lead to important reductions in computational time. A detailed discussion of how the Monte Carlo problem was parallelized is included as an example for learning to write parallel programs for Octave.
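A hedged illustration of the idea, in Python rather than the paper's Octave: a Monte Carlo estimate of pi whose draws are split across worker processes, so the caller invokes a single function with no knowledge of the parallel machinery. All names here are illustrative, not from the paper.

```python
import multiprocessing as mp
import random

def mc_chunk(args):
    """One worker's share of the simulation: count random points
    that fall inside the unit quarter-circle."""
    seed, n = args
    rng = random.Random(seed)
    return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))

def parallel_pi(total_draws=400_000, workers=4):
    """Monte Carlo estimate of pi with draws split across processes;
    the caller never touches the parallelization directly."""
    n = total_draws // workers
    with mp.Pool(workers) as pool:
        hits = pool.map(mc_chunk, [(seed, n) for seed in range(workers)])
    return 4.0 * sum(hits) / (n * workers)

if __name__ == "__main__":
    pi_hat = parallel_pi()
```

Because each worker gets its own seed, the result is reproducible and the chunks are independent, which is the same embarrassingly-parallel structure the paper exploits.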
Abstract:
This comment corrects the errors in the estimation process that appear in Martins (2001). The first error is in the parametric probit estimation, as the previously presented results do not maximize the log-likelihood function. In the global maximum more variables become significant. As for the semiparametric estimation method, the kernel function used in Martins (2001) can take on both positive and negative values, which implies that the participation probability estimates may be outside the interval [0,1]. We have solved the problem by applying local smoothing in the kernel estimation, as suggested by Klein and Spady (1993).
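The point about kernels taking negative values can be checked directly. With an everywhere-positive kernel, the Nadaraya-Watson participation-probability estimate is a convex combination of the 0/1 outcomes and so stays in [0,1]; a bias-reducing fourth-order kernel (used here purely for illustration, not necessarily the kernel in Martins (2001)) can push the estimate below 0:

```python
import numpy as np

def phi(u):
    """Standard normal density."""
    return np.exp(-u**2 / 2) / np.sqrt(2 * np.pi)

def gauss_kernel(u):          # second-order: everywhere positive
    return phi(u)

def fourth_order_kernel(u):   # bias-reducing: negative for |u| > sqrt(3)
    return 0.5 * (3 - u**2) * phi(u)

def nw_prob(x, X, y, h, kernel):
    """Nadaraya-Watson estimate of P(y = 1 | x)."""
    w = kernel((x - X) / h)
    return float(w @ y / w.sum())

# two observations: a non-participant at 0 and a participant at 2
X = np.array([0.0, 2.0])
y = np.array([0.0, 1.0])
p_pos = nw_prob(0.0, X, y, 1.0, gauss_kernel)         # stays in [0, 1]
p_neg = nw_prob(0.0, X, y, 1.0, fourth_order_kernel)  # falls below 0
```

The negative weight that the fourth-order kernel assigns to the distant participant drags the estimated probability below zero, which is exactly the defect that local smoothing à la Klein and Spady avoids.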
Abstract:
We show that a particular free-by-cyclic group has CAT(0) dimension equal to 2, but CAT(-1) dimension equal to 3. We also classify the minimal proper 2-dimensional CAT(0) actions of this group; they correspond, up to scaling, to a 1-parameter family of locally CAT(0) piecewise Euclidean metrics on a fixed presentation complex for the group. This information is used to produce an infinite family of 2-dimensional hyperbolic groups, which do not act properly by isometries on any proper CAT(0) metric space of dimension 2. This family includes a free-by-cyclic group with free kernel of rank 6.
Abstract:
We construct generating trees with one, two, and three labels for some classes of permutations avoiding generalized patterns of length 3 and 4. These trees are built by adding at each level an entry to the right end of the permutation, which allows us to incorporate the adjacency condition about some entries in an occurrence of a generalized pattern. We use these trees to find functional equations for the generating functions enumerating these classes of permutations with respect to different parameters. In several cases we solve them using the kernel method and some ideas of Bousquet-Mélou [2]. We obtain refinements of known enumerative results and find new ones.
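As a quick sanity check of this kind of enumeration (for a classical, not generalized, pattern): brute-force counting of permutations avoiding a single length-3 pattern recovers the Catalan numbers, the baseline result that such generating trees refine.

```python
from itertools import combinations, permutations

def avoids(perm, pattern):
    """True if no subsequence of perm is order-isomorphic to pattern."""
    k = len(pattern)
    rank = {v: i for i, v in enumerate(sorted(pattern))}
    target = tuple(rank[v] for v in pattern)
    for idx in combinations(range(len(perm)), k):
        sub = [perm[i] for i in idx]
        r = {v: i for i, v in enumerate(sorted(sub))}
        if tuple(r[v] for v in sub) == target:
            return False
    return True

# counts of 123-avoiding permutations of lengths 1..6: the Catalan numbers
counts = [sum(avoids(p, (1, 2, 3)) for p in permutations(range(n)))
          for n in range(1, 7)]
```

Generalized patterns add adjacency constraints on some entries, which is precisely what building the tree by appending to the right end of the permutation makes tractable.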
Abstract:
Variational steepest descent approximation schemes for the modified Patlak-Keller-Segel equation with a logarithmic interaction kernel in any dimension are considered. We prove the convergence of the suitably time-interpolated implicit Euler scheme, defined in terms of the Euclidean Wasserstein distance, associated with this equation for sub-critical masses. As a consequence, we recover the recent result on the global-in-time existence of weak solutions to the modified Patlak-Keller-Segel equation with a logarithmic interaction kernel in any dimension in the sub-critical case. Moreover, we show how this method performs numerically in one dimension. In this particular case, the numerical scheme corresponds to a standard implicit Euler method for the pseudo-inverse of the cumulative distribution function. We demonstrate its ability to reproduce easily, without the need for mesh refinement, the blow-up of solutions for super-critical masses.
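In one dimension the scheme's state variable is the pseudo-inverse F^{-1}(s) = inf{x : F(x) >= s} of the cumulative distribution function. The sketch below shows only that pseudo-inverse evaluated on a grid (not the full implicit Euler scheme), using the standard normal CDF as an assumed example:

```python
import numpy as np
from scipy.special import ndtr  # standard normal CDF

def pseudo_inverse(xgrid, F, s):
    """Generalized inverse F^{-1}(s) = inf{x : F(x) >= s},
    evaluated from CDF values F tabulated on xgrid."""
    idx = np.searchsorted(F, s, side="left")
    return xgrid[np.minimum(idx, len(xgrid) - 1)]

x = np.linspace(-5.0, 5.0, 2001)
F = ndtr(x)                         # CDF of the standard normal
median = pseudo_inverse(x, F, 0.5)  # should recover 0.0
```

Evolving this monotone function of s, rather than the density itself, is what lets the scheme track concentrating (blow-up) solutions without mesh refinement.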
Abstract:
This study examines the evolution of labor productivity across Spanish regions during the period from 1977 to 2002. By applying the kernel technique, we estimate the effects of the Transition process on labor productivity and its main sources. We find that Spanish regions experienced a major convergence process in labor productivity and in human capital over the 1977-1993 period. We also pinpoint the existence of a transitional co-movement between labor productivity and human capital. Conversely, the dynamics of investment in physical capital seem unrelated to the transition dynamics of labor productivity. This lack of co-evolution can be regarded as one of the causes of the current slowdown in productivity. JEL classification: J24, N34, N940, O18, O52, R10
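The "kernel technique" in this literature typically means estimating the cross-section distribution of regional productivity nonparametrically and comparing it across periods. A minimal sketch with purely illustrative data (random placeholders, not the paper's), using a Gaussian kernel density estimate:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
# illustrative relative productivity levels for 17 regions in two periods
prod_1977 = rng.normal(1.0, 0.30, size=17)
prod_1993 = rng.normal(1.0, 0.12, size=17)

grid = np.linspace(0.2, 1.8, 200)
f77 = gaussian_kde(prod_1977)(grid)  # estimated cross-section density, 1977
f93 = gaussian_kde(prod_1993)(grid)  # estimated cross-section density, 1993
# a density that tightens around 1 in the later period signals convergence
```

Comparing the shape of the two estimated densities, rather than a single dispersion statistic, is what distinguishes this distribution-dynamics approach from classical beta- or sigma-convergence regressions.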
Abstract:
Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Since conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable models. Monte Carlo results show that the estimator performs well in comparison to other estimators that have been proposed for estimation of general DLV models.
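A minimal sketch of the key device, under an assumed toy model (not from the paper): a conditional moment at a trial parameter value computed by kernel smoothing (Nadaraya-Watson) over a long simulation, rather than by simple averaging.

```python
import numpy as np

def nw_conditional_mean(x0, x_sim, y_sim, h):
    """Kernel-smoothed (Nadaraya-Watson) estimate of E[y | x = x0],
    computed from a long simulation of the model."""
    w = np.exp(-0.5 * ((x_sim - x0) / h) ** 2)
    return float(w @ y_sim / w.sum())

rng = np.random.default_rng(3)
# toy simulable model (an assumption for illustration): y = theta * x^2 + noise
theta = 2.0
x_sim = rng.uniform(-2.0, 2.0, size=50_000)
y_sim = theta * x_sim**2 + rng.normal(0.0, 0.1, size=50_000)

m_hat = nw_conditional_mean(1.0, x_sim, y_sim, h=0.05)  # near theta * 1**2
```

Because the simulation is drawn unconditionally and the conditioning enters only through the kernel weights, the model never needs to be simulable given the conditioning information, which is the property that extends the estimator to general dynamic latent variable models.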