935 results for Random Walk Models


Relevance:

90.00%

Publisher:

Abstract:

A total of 46,089 individual monthly test-day (TD) milk yields (10 test-days) from 7,331 complete first lactations of Holstein cattle were analyzed. A standard multivariate analysis (MV), reduced rank analyses fitting the first 2, 3, and 4 genetic principal components (PC2, PC3, PC4), and analyses that fitted a factor analytic structure considering 2, 3, and 4 factors (FAS2, FAS3, FAS4) were carried out. The models included the random animal genetic effect and fixed effects of the contemporary groups (herd-year-month of test-day), age of cow (linear and quadratic effects), and days in milk (linear effect). The residual covariance matrix was assumed to have full rank. In addition, 2 random regression models were applied. Variance components were estimated by the restricted maximum likelihood method. The heritability estimates ranged from 0.11 to 0.24. The genetic correlation estimates between TDs obtained with the PC2 model were higher than those obtained with the MV model, and were close to unity for adjacent test-days at the end of lactation. The results indicate that, for the data considered in this study, only 2 principal components are required to summarize the bulk of the genetic variation among the 10 traits.

Relevance:

90.00%

Publisher:

Abstract:

Structural properties of model membranes, such as lipid vesicles, may be investigated through the addition of fluorescent probes. After incorporation, the fluorescent molecules are excited with linearly polarized light, and the fluorescence emission is depolarized by translational as well as rotational diffusion during the lifetime of the excited state. The emitted light is monitored with the technique of time-resolved fluorescence: the intensity of the emitted light informs on fluorescence decay times, and the decay of the polarized components of the emitted light yields rotational correlation times, which inform on the fluidity of the medium. The fluorescent molecule DPH, of uniaxial symmetry, is rather hydrophobic and has collinear absorption and emission transition moments. It has frequently been used as a probe for monitoring the fluidity of the lipid bilayer along the phase transition of the chains. The interpretation of experimental data requires models for the localization of the fluorescent molecules as well as for possible restrictions on their movement. In this study, we develop calculations for two models of uniaxial diffusion of fluorescent molecules such as DPH that have been suggested in several articles in the literature, together with a zeroth-order test model: a dipole rotating freely and randomly in a homogeneous solution, which serves as the basis for the study of diffusion in anisotropic media. In the second model, we consider random rotations of emitting dipoles distributed within cones whose axes are perpendicular to the spherical surface of the vesicle. In the third model, the dipole rotates in the plane of the bilayer, a movement that might occur between the two monolayers forming the bilayer.
For each of the models analysed, we use two methods to analyse the rotational diffusion: (I) solution of the corresponding rotational diffusion equation for a single molecule, subject to the boundary conditions imposed by the model, for the probability of finding the fluorescent molecule in a given configuration at time t. Considering the distribution of molecules in the proposed geometry, we obtain an analytical expression for the fluorescence anisotropy, except for the cone geometry, for which the solution is obtained numerically; (II) numerical simulations of a restricted rotational random walk in the two geometries corresponding to the two models. The latter method may be very useful for low-symmetry or composite geometries.
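
Method (II) for the in-plane model can be illustrated with a short Monte Carlo sketch. The snippet below is an illustrative assumption of ours, not the thesis code: it simulates free 1-D angular diffusion of a dipole confined to a plane and checks the orientational correlation <cos 2(theta(t) - theta(0))> against its analytical decay exp(-4*D*t), with D and dt in arbitrary units.

```python
import math
import random

def inplane_correlation(D, dt, n_steps, n_mol, seed=0):
    """Monte Carlo angular random walk in a plane.

    Each emitting dipole performs free 1-D diffusion of its in-plane
    angle theta with Gaussian steps of variance 2*D*dt.  Returns the
    orientational correlation <cos 2(theta(t) - theta(0))> at each step,
    which for free angular diffusion decays as exp(-4*D*t).
    """
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * D * dt)
    corr = [0.0] * (n_steps + 1)
    for _ in range(n_mol):
        theta0 = rng.uniform(0.0, 2.0 * math.pi)
        theta = theta0
        corr[0] += 1.0  # cos(0) = 1 at t = 0
        for k in range(1, n_steps + 1):
            theta += rng.gauss(0.0, sigma)
            corr[k] += math.cos(2.0 * (theta - theta0))
    return [c / n_mol for c in corr]

D, dt = 0.05, 0.1  # arbitrary illustrative units
corr = inplane_correlation(D, dt, n_steps=100, n_mol=20000)
# At t = 50*dt the analytical decay is exp(-4*D*t) = exp(-1)
analytic = math.exp(-4.0 * D * 50 * dt)
```

The restricted (cone) geometry would differ only in the step rule: moves taking the dipole outside the cone are rejected or reflected.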

Relevance:

90.00%

Publisher:

Abstract:

This thesis lies within the field of statistical analyses and stochastic methods applied to the analysis of DNA sequences. Specifically, our work focuses on the CG dinucleotide (CpG) within the human genome, which is clustered in specific regions called CpG islands. These are linked to DNA methylation, a process that plays a fundamental role in gene regulation. The first part of the study is devoted to a global characterization of the content and distribution of the 16 different dinucleotides within the human genome: in particular, we study the distribution of the distances between successive occurrences of the same dinucleotide along the sequence. The results are compared with several null models: random sequences generated with Markov chains of order zero (based on the relative frequencies of the nucleotides) and of order one (based on the transition probabilities between different nucleotides), and the geometric distribution for the distances. From this analysis the characteristic properties of the CpG dinucleotide emerge clearly, both in comparison with the other dinucleotides and with the random models. Following this first part, we chose to focus the subsequent analyses on regions of biological interest, studying the abundance and distribution of CpG within them (CpG islands, promoters, and Lamina Associated Domains). In the first two cases a strong enrichment in CpG content is observed, and the distribution of distances is shifted toward lower values, indicating that this dinucleotide is clustered. Within LADs there are on average fewer CpGs, and these show larger distances. Finally, we adopted a random-walk representation of DNA, built from the positions of the dinucleotides: the resulting walk shows drastically different characteristics inside and outside regions annotated as CpG islands.
We therefore believe that methods based on this approach could be exploited to improve the detection of these regions of interest in the human genome and in other organisms.
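
As a hedged illustration of the random-walk representation described above (the step convention here, +1 at each position starting a CpG and -1 otherwise, is our own assumption; the thesis may use a different rule), the following sketch shows how the walk's slope separates CpG-rich from CpG-poor synthetic sequences:

```python
import random

def cpg_walk(seq):
    """Cumulative random-walk representation of a DNA sequence based on
    CG dinucleotide positions: step +1 where a CpG starts, -1 elsewhere
    (one possible convention among several)."""
    walk, y = [0], 0
    for i in range(len(seq) - 1):
        y += 1 if seq[i:i + 2] == "CG" else -1
        walk.append(y)
    return walk

def random_seq(n, p_cg, seed):
    """Synthetic sequence: with probability p_cg plant a 'CG' pair,
    otherwise append a uniformly random nucleotide."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        if rng.random() < p_cg:
            out += ["C", "G"]
        else:
            out.append(rng.choice("ACGT"))
    return "".join(out[:n])

island = cpg_walk(random_seq(5000, 0.20, seed=1))      # CpG-rich region
background = cpg_walk(random_seq(5000, 0.01, seed=2))  # CpG-poor region
# The island walk decreases far more slowly than the background walk,
# so the local slope distinguishes the two regimes.
```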

Relevance:

90.00%

Publisher:

Abstract:

To enhance understanding of the metabolic indicators of type 2 diabetes mellitus (T2DM) disease pathogenesis and progression, the urinary metabolomes of well-characterized rhesus macaques (normal or spontaneously and naturally diabetic) were examined. High-resolution ultra-performance liquid chromatography coupled with the accurate mass determination of time-of-flight mass spectrometry was used to analyze spot urine samples from normal (n = 10) and T2DM (n = 11) male monkeys. The machine-learning algorithm random forests was used to classify urine samples as coming from either normal or T2DM monkeys. The metabolites important for developing the classifier were further examined for their biological significance. Random forests models had a misclassification error of less than 5%. Metabolites were identified based on accurate masses (<10 ppm) and confirmed by tandem mass spectrometry of authentic compounds. Urinary compounds significantly increased (p < 0.05) in the T2DM group when compared with the normal group included glycine betaine (9-fold), citric acid (2.8-fold), kynurenic acid (1.8-fold), glucose (68-fold), and pipecolic acid (6.5-fold). When compared with the conventional definition of T2DM, the metabolites were also useful in defining the T2DM condition, and the urinary elevations in glycine betaine and pipecolic acid (as well as proline) indicated defective reabsorption in the kidney proximal tubules by SLC6A20, a Na(+)-dependent transporter. The mRNA levels of SLC6A20 were significantly reduced in the kidneys of monkeys with T2DM. These observations were validated in the db/db mouse model of T2DM. This study provides convincing evidence of the power of metabolomics for identifying functional changes at many levels in the omics pipeline.
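
The random forests idea behind the classifier can be sketched in miniature. The code below is a toy, pure-Python stand-in (bootstrap samples, random feature choice, majority vote over one-split stumps), not a full random forest and certainly not the study's pipeline; the synthetic "metabolite profiles" and feature shifts are invented for illustration.

```python
import random

def make_data(n_per_class, n_features=5, shift=4.0, seed=0):
    """Toy profiles: class 1 ('T2DM') has the first two features
    shifted upward.  Purely synthetic, unrelated to the study's data."""
    rng = random.Random(seed)
    X, y = [], []
    for label in (0, 1):
        for _ in range(n_per_class):
            row = [rng.gauss(0.0, 1.0) for _ in range(n_features)]
            if label == 1:
                row[0] += shift
                row[1] += shift
            X.append(row)
            y.append(label)
    return X, y

def train_forest(X, y, n_trees=40, seed=0):
    """Each 'tree' is a one-split stump fit on a bootstrap sample using
    a randomly chosen feature -- a minimal echo of random forests."""
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    stumps = []
    for _ in range(n_trees):
        idx = [rng.randrange(n) for _ in range(n)]   # bootstrap sample
        f = rng.randrange(d)                         # random feature
        thr = sorted(X[i][f] for i in idx)[n // 2]   # median split
        above = [y[i] for i in idx if X[i][f] > thr]
        label_above = 1 if 2 * sum(above) >= max(len(above), 1) else 0
        stumps.append((f, thr, label_above))
    return stumps

def predict(stumps, x):
    votes = sum(lab if x[f] > thr else 1 - lab for f, thr, lab in stumps)
    return 1 if 2 * votes >= len(stumps) else 0

X, y = make_data(100)
forest = train_forest(X, y)
acc = sum(predict(forest, xi) == yi for xi, yi in zip(X, y)) / len(y)
```

Real random forests grow full decision trees and subsample features at every split; the stump ensemble above keeps only the bagging-plus-random-feature skeleton.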

Relevance:

90.00%

Publisher:

Abstract:

The use of group-randomized trials is particularly widespread in the evaluation of health care, educational, and screening strategies. Group-randomized trials represent a subset of a larger class of designs, often labeled nested, hierarchical, or multilevel, and are characterized by the randomization of intact social units or groups, rather than individuals. The application of random effects models to group-randomized trials requires the specification of fixed and random components of the model. The underlying assumption is usually that these random components are normally distributed. This research is intended to determine whether the Type I error rate and power are affected when the assumption of normality for the random component representing the group effect is violated. In this study, simulated data are used to examine the Type I error rate, power, bias, and mean squared error of the estimates of the fixed effect and the observed intraclass correlation coefficient (ICC) when the random component representing the group effect possesses distributions with non-normal characteristics, such as heavy tails or severe skewness. The simulated data are generated with various characteristics (e.g. number of schools per condition, number of students per school, and several within-school ICCs) observed in most small, school-based, group-randomized trials. The analysis is carried out using SAS PROC MIXED, Version 6.12, with random effects specified in a RANDOM statement and restricted maximum likelihood (REML) estimation. The results from the non-normally distributed data are compared to the results obtained from the analysis of data with similar design characteristics but normally distributed random effects. The results suggest that the violation of the normality assumption for the group component by a skewed or heavy-tailed distribution does not appear to influence the estimation of the fixed effect, Type I error, or power.
Negative biases were detected when estimating the sample ICC, and these dramatically increased in magnitude as the true ICC increased. The biases were not as pronounced when the true ICC was within the range observed in most group-randomized trials (i.e. 0.00 to 0.05). The normally distributed group effect also resulted in biased ICC estimates when the true ICC was greater than 0.05; however, this may be a result of higher correlation within the data.
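
A minimal sketch of this kind of simulation, assuming a balanced one-way layout and the ANOVA (method-of-moments) ICC estimator rather than the study's PROC MIXED/REML analysis:

```python
import random

def simulate_icc(n_groups, n_per, icc, skewed, seed=0):
    """Generate nested data y_ij = g_i + e_ij and return the one-way
    ANOVA (method-of-moments) estimate of the intraclass correlation.

    Group effects g_i come either from a normal distribution or from a
    centered exponential (skewed) one, scaled so that the true
    ICC = var_g / (var_g + var_e) matches `icc` (var_e fixed at 1).
    """
    rng = random.Random(seed)
    var_g = icc / (1.0 - icc)
    sd_g = var_g ** 0.5
    data = []
    for _ in range(n_groups):
        if skewed:
            g = sd_g * (rng.expovariate(1.0) - 1.0)  # mean 0, sd sd_g
        else:
            g = rng.gauss(0.0, sd_g)
        data.append([g + rng.gauss(0.0, 1.0) for _ in range(n_per)])
    k, n = n_groups, n_per
    grand = sum(sum(row) for row in data) / (k * n)
    means = [sum(row) / n for row in data]
    msb = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(data, means) for x in row) / (k * (n - 1))
    s2g = max((msb - msw) / n, 0.0)   # between-group variance estimate
    return s2g / (s2g + msw)

est_normal = simulate_icc(200, 30, icc=0.05, skewed=False, seed=1)
est_skewed = simulate_icc(200, 30, icc=0.05, skewed=True, seed=2)
```

Repeating such runs many times and comparing the distribution of estimates under the two group-effect distributions is the essence of the bias study described above.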

Relevance:

90.00%

Publisher:

Abstract:

Since no single experimental or modeling technique provides data that allow a description of transport processes in clays and clay minerals at all relevant scales, several complementary approaches have to be combined to understand and explain the interplay between transport relevant phenomena. In this paper molecular dynamics simulations (MD) were used to investigate the mobility of water in the interlayer of montmorillonite (Mt), and to estimate the influence of mineral surfaces and interlayer ions on the water diffusion. Random Walk (RW) simulations based on a simplified representation of pore space in Mt were used to estimate and understand the effect of the arrangement of Mt particles on the meso- to macroscopic diffusivity of water. These theoretical calculations were complemented with quasielastic neutron scattering (QENS) measurements of aqueous diffusion in Mt with two pseudo-layers of water performed at four significantly different energy resolutions (i.e. observation times). The size of the interlayer and the size of Mt particles are two characteristic dimensions which determine the time dependent behavior of water diffusion in Mt. MD simulations show that at very short time scales water dynamics has the characteristic features of an oscillatory motion in the cage formed by neighbors in the first coordination shell. At longer time scales, the interaction of water with the surface determines the water dynamics, and the effect of confinement on the overall water mobility within the interlayer becomes evident. At time scales corresponding to an average water displacement equivalent to the average size of Mt particles, the effects of tortuosity are observed in the meso- to macroscopic pore scale simulations. Consistent with the picture obtained in the simulations, the QENS data can be described using a (local) 3D diffusion at short observation times, whereas at sufficiently long observation times a 2D diffusive motion is clearly observed. 
The effects of tortuosity measured in macroscopic tracer diffusion experiments are in qualitative agreement with RW simulations. By using experimental data to calibrate molecular and mesoscopic theoretical models, a consistent description of water mobility in clay minerals from the molecular to the macroscopic scale can be achieved. In turn, simulations help in choosing optimal conditions for the experimental measurements and the data interpretation.
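
The mesoscopic RW idea can be sketched with a toy lattice model (our simplification, not the paper's pore-space representation): random walkers move among randomly blocked sites, and the drop in mean squared displacement relative to the free walk serves as a crude measure of tortuosity.

```python
import random

def lattice_msd(n_walkers, n_steps, obstacle_frac, size=50, seed=0):
    """Random walk on a periodic 2-D lattice with randomly placed
    impermeable sites (a crude stand-in for clay particles).  Returns
    the mean squared displacement after n_steps; moves into blocked
    sites are rejected."""
    rng = random.Random(seed)
    blocked = {(x, y) for x in range(size) for y in range(size)
               if rng.random() < obstacle_frac}
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    msd, placed = 0.0, 0
    while placed < n_walkers:
        start = (rng.randrange(size), rng.randrange(size))
        if start in blocked:
            continue
        placed += 1
        x, y = start
        for _ in range(n_steps):
            dx, dy = rng.choice(moves)
            nx, ny = x + dx, y + dy
            if (nx % size, ny % size) not in blocked:
                x, y = nx, ny
        msd += (x - start[0]) ** 2 + (y - start[1]) ** 2
    return msd / n_walkers

free = lattice_msd(2000, 400, obstacle_frac=0.0, seed=1)
obstructed = lattice_msd(2000, 400, obstacle_frac=0.3, seed=2)
tortuosity_factor = free / obstructed  # > 1 when diffusion is hindered
```

For an unobstructed lattice walk the MSD equals the number of steps; the ratio of free to obstructed MSD then plays the role of the tortuosity factor extracted from tracer experiments.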

Relevance:

90.00%

Publisher:

Abstract:

BACKGROUND Non-steroidal anti-inflammatory drugs (NSAIDs) are the backbone of osteoarthritis pain management. We aimed to assess the effectiveness of different preparations and doses of NSAIDs on osteoarthritis pain in a network meta-analysis. METHODS For this network meta-analysis, we considered randomised trials comparing any of the following interventions: NSAIDs, paracetamol, or placebo, for the treatment of osteoarthritis pain. We searched the Cochrane Central Register of Controlled Trials (CENTRAL) and the reference lists of relevant articles for trials published between Jan 1, 1980, and Feb 24, 2015, with at least 100 patients per group. The prespecified primary and secondary outcomes were pain and physical function, and were extracted in duplicate for up to seven timepoints after the start of treatment. We used an extension of multivariable Bayesian random effects models for mixed multiple treatment comparisons with a random effect at the level of trials. For the primary analysis, a random walk of first order was used to account for multiple follow-up outcome data within a trial. Preparations that used different total daily dose were considered separately in the analysis. To assess a potential dose-response relation, we used preparation-specific covariates assuming linearity on log relative dose. FINDINGS We identified 8973 manuscripts from our search, of which 74 randomised trials with a total of 58 556 patients were included in this analysis. 23 nodes concerning seven different NSAIDs or paracetamol with specific daily dose of administration or placebo were considered. All preparations, irrespective of dose, improved point estimates of pain symptoms when compared with placebo. 
For six interventions (diclofenac 150 mg/day, etoricoxib 30 mg/day, 60 mg/day, and 90 mg/day, and rofecoxib 25 mg/day and 50 mg/day), the probability that the difference to placebo is at or below a prespecified minimum clinically important effect for pain reduction (effect size [ES] -0·37) was at least 95%. Among maximally approved daily doses, diclofenac 150 mg/day (ES -0·57, 95% credibility interval [CrI] -0·69 to -0·46) and etoricoxib 60 mg/day (ES -0·58, -0·73 to -0·43) had the highest probability to be the best intervention, both with 100% probability to reach the minimum clinically important difference. Treatment effects increased as drug dose increased, but corresponding tests for a linear dose effect were significant only for celecoxib (p=0·030), diclofenac (p=0·031), and naproxen (p=0·026). We found no evidence that treatment effects varied over the duration of treatment. Model fit was good, and between-trial heterogeneity and inconsistency were low in all analyses. All trials were deemed to have a low risk of bias for blinding of patients. Effect estimates did not change in sensitivity analyses with two additional statistical models and accounting for methodological quality criteria in meta-regression analysis. INTERPRETATION On the basis of the available data, we see no role for single-agent paracetamol for the treatment of patients with osteoarthritis irrespective of dose. We provide sound evidence that diclofenac 150 mg/day is the most effective NSAID available at present, in terms of improving both pain and function. Nevertheless, in view of the safety profile of these drugs, physicians need to consider our results together with all known safety information when selecting the preparation and dose for individual patients. FUNDING Swiss National Science Foundation (grant number 405340-104762) and Arco Foundation, Switzerland.

Relevance:

90.00%

Publisher:

Abstract:

This paper presents an algorithm for generating scale-free networks with adjustable clustering coefficient. The algorithm is based on a random walk procedure combined with a triangle generation scheme which takes into account genetic factors; this way, preferential attachment and clustering control are implemented using only local information. Simulations are presented which support the validity of the scheme, characterizing its tuning capabilities.
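
A hedged sketch of this kind of scheme (not the authors' exact algorithm): each new node attaches to the endpoint of a short random walk, which implements preferential attachment using only local information, and with probability p_triangle it also attaches to a neighbour of that endpoint, closing a triangle and thereby tuning the clustering coefficient.

```python
import random

def rw_scale_free(n, walk_len=3, p_triangle=0.5, seed=0):
    """Grow a graph: each new node connects to the endpoint of a short
    random walk started at a uniformly chosen existing node (the walk
    lands on high-degree nodes more often, an implicit, local form of
    preferential attachment) and, with probability p_triangle, also to
    a random neighbour of that endpoint, closing a triangle."""
    rng = random.Random(seed)
    adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}  # seed triangle
    for new in range(3, n):
        v = rng.randrange(new)
        for _ in range(walk_len):
            v = rng.choice(sorted(adj[v]))
        targets = {v}
        if rng.random() < p_triangle:
            targets.add(rng.choice(sorted(adj[v])))
        adj[new] = set()
        for t in targets:
            adj[new].add(t)
            adj[t].add(new)
    return adj

def avg_clustering(adj):
    """Average local clustering coefficient."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

g_hi = rw_scale_free(400, p_triangle=0.9, seed=1)
g_lo = rw_scale_free(400, p_triangle=0.0, seed=1)
# p_triangle tunes the clustering; hubs emerge in both cases.
```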

Relevance:

90.00%

Publisher:

Abstract:

We propose distributed algorithms for sampling networks based on a new class of random walks that we call Centrifugal Random Walks (CRW). A CRW is a random walk that starts at a source and always moves away from it. We propose CRW algorithms for connected networks with arbitrary probability distributions, and for grids and networks with regular concentric connectivity with distance based distributions. All CRW sampling algorithms select a node with the exact probability distribution, do not need warm-up, and end in a number of hops bounded by the network diameter.

Relevance:

90.00%

Publisher:

Abstract:

Sampling a network with a given probability distribution has been identified as a useful operation. In this paper we propose distributed algorithms for sampling networks, so that nodes are selected by a special node, called the source, with a given probability distribution. All these algorithms are based on a new class of random walks that we call Random Centrifugal Walks (RCW). An RCW is a random walk that starts at the source and always moves away from it. Firstly, an algorithm to sample any connected network using RCW is proposed. The algorithm assumes that each node has a weight, so that the sampling process must select a node with a probability proportional to its weight. This algorithm requires a preprocessing phase before the sampling of nodes. In particular, a minimum diameter spanning tree (MDST) is created in the network, and then node weights are efficiently aggregated using the tree. The good news is that the preprocessing is done only once, regardless of the number of sources and the number of samples taken from the network. After that, every sample is done with an RCW whose length is bounded by the network diameter. Secondly, RCW algorithms that do not require preprocessing are proposed for grids and networks with regular concentric connectivity, for the case when the probability of selecting a node is a function of its distance to the source. The key features of the RCW algorithms (unlike previous Markovian approaches) are that (1) they do not need to warm up (stabilize), (2) the sampling always finishes in a number of hops bounded by the network diameter, and (3) they select nodes with the exact probability distribution.
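
The RCW stopping rule can be illustrated on the simplest possible "network", a half-line of nodes at increasing distance from the source (our reduction, for illustration only): at distance d the walk stops with probability equal to the target mass at d divided by the mass not yet consumed. This reproduces the target distribution exactly and bounds the walk length by the size of the support.

```python
import random

def rcw_sample(p, rng):
    """Random Centrifugal Walk on a half-line: always move away from
    the source; at distance d, stop with probability
    p[d] / (remaining mass).  Returns the selected distance."""
    remaining = 1.0
    for d, pd in enumerate(p):
        if rng.random() < pd / remaining:
            return d
        remaining -= pd
    return len(p) - 1  # numerical safety net; unreachable for proper p

rng = random.Random(42)
p = [0.1, 0.4, 0.3, 0.2]          # target distribution over distances
n = 50000
counts = [0] * len(p)
for _ in range(n):
    counts[rcw_sample(p, rng)] += 1
freqs = [c / n for c in counts]   # empirical vs target distribution
```

Because the stopping probabilities are exact conditionals, no warm-up is needed: the very first walk already selects with the target distribution.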

Relevance:

90.00%

Publisher:

Abstract:

This thesis describes and applies, in a novel way, the multivariate exponential smoothing technique to the short-term (day-ahead) forecasting of hourly electricity prices, a problem studied intensively in the recent statistical and economic literature. Certain interesting properties of multivariate exponential smoothing are demonstrated that reduce the number of parameters needed to characterize the time series and, at the same time, allow a dynamic factor analysis of the hourly electricity price series. In particular, this high-dimensional multivariate process (of dimension 24, one series per hour of the day) is estimated by decomposing it into a reduced number of independent univariate exponential smoothing processes, each characterized by a single smoothing parameter that varies between zero (a white-noise process) and one (a random walk). To this end, the state-space formulation is used to estimate the model, since it connects this sequence of more efficient univariate models with the multivariate model. In a novel way, the relations between the two models are obtained through a simple algebraic treatment, without requiring the Kalman filter. In this way, the ultimate drivers of electricity price dynamics can be analysed and exposed. The practical side of this methodology is demonstrated by applying it to several electricity spot markets, namely Omel, Powernext and Nord Pool. In these markets the evolution of hourly prices is characterized, and forecasts are produced and compared with those of other forecasting techniques.
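
The univariate building block of the decomposition, simple exponential smoothing with a single parameter between zero (white noise) and one (random walk), can be sketched as follows (illustrative code of ours, not the thesis implementation):

```python
def ses_forecast(y, alpha, level0=None):
    """Simple (single-source-of-error) exponential smoothing.

    The one-step-ahead forecast of y[t] is the current level; the level
    is then updated with a fraction alpha of the forecast error.
    alpha = 0 gives a constant-level (white-noise) model, alpha = 1 the
    naive random-walk forecast.
    """
    level = y[0] if level0 is None else level0
    forecasts = []
    for obs in y:
        forecasts.append(level)          # forecast made before seeing obs
        level += alpha * (obs - level)   # error-correction update
    return forecasts

y = [10.0, 12.0, 11.0, 13.0, 12.5]
rw = ses_forecast(y, alpha=1.0)    # equals the previous observation
flat = ses_forecast(y, alpha=0.0)  # equals the initial level throughout
```

In the thesis's multivariate setting, each independent univariate component carries one such smoothing parameter, which is what makes the dimension reduction interpretable.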

Relevance:

90.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance:

90.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance:

90.00%

Publisher:

Abstract:

A quantum random walk on the integers exhibits pseudo memory effects, in that its probability distribution after N steps is determined by reshuffling the first N distributions that arise in a classical random walk with the same initial distribution. In a classical walk, entropy increase can be regarded as a consequence of the majorization ordering of successive distributions. The Lorenz curves of successive distributions for a symmetric quantum walk reveal no majorization ordering in general. Nevertheless, entropy can increase, and computer experiments show that it does so on average. Varying the stages at which the quantum coin system is traced out leads to new quantum walks, including a symmetric walk for which majorization ordering is valid but the spreading rate exceeds that of the usual symmetric quantum walk.
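
A compact simulation of the coined quantum walk on the integers illustrates the faster-than-diffusive spreading mentioned above. We assume here the Hadamard coin with the symmetric initial coin state (|up> + i|down>)/sqrt(2); the paper's reshuffling construction and coin-tracing variants are not reproduced.

```python
import math

def hadamard_walk(n_steps):
    """Coined quantum walk on the integers with the Hadamard coin,
    starting at the origin in the symmetric coin state
    (|up> + i|down>)/sqrt(2).  Returns the position distribution."""
    h = 1.0 / math.sqrt(2.0)
    # amp[x] = (amplitude with coin up, amplitude with coin down)
    amp = {0: (h, h * 1j)}
    for _ in range(n_steps):
        new = {}
        for x, (up, down) in amp.items():
            u, d = h * (up + down), h * (up - down)   # Hadamard coin
            nu = new.get(x - 1, (0.0, 0.0))           # coin-up moves left
            new[x - 1] = (nu[0] + u, nu[1])
            nd = new.get(x + 1, (0.0, 0.0))           # coin-down moves right
            new[x + 1] = (nd[0], nd[1] + d)
        amp = new
    return {x: abs(u) ** 2 + abs(d) ** 2 for x, (u, d) in amp.items()}

dist = hadamard_walk(50)
total = sum(dist.values())                    # should be 1 (unitarity)
entropy = -sum(p * math.log2(p) for p in dist.values() if p > 1e-15)
std = math.sqrt(sum(p * x * x for x, p in dist.items()))
# std grows linearly in the number of steps (ballistic spreading),
# whereas a classical walk's spread grows only as sqrt(n_steps).
```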

Relevance:

90.00%

Publisher:

Abstract:

This paper provides the most fully comprehensive evidence to date on whether or not monetary aggregates are valuable for forecasting US inflation in the early to mid 2000s. We explore a wide range of different definitions of money, including different methods of aggregation and different collections of included monetary assets. In our forecasting experiment we use two non-linear techniques, namely, recurrent neural networks and kernel recursive least squares regression - techniques that are new to macroeconomics. Recurrent neural networks operate with potentially unbounded input memory, while the kernel regression technique is a finite memory predictor. The two methodologies compete to find the best fitting US inflation forecasting models and are then compared to forecasts from a naive random walk model. The best models were non-linear autoregressive models based on kernel methods. Our findings do not provide much support for the usefulness of monetary aggregates in forecasting inflation.
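
The naive random-walk benchmark used in such forecasting comparisons can be sketched as follows (the series below is synthetic; only the benchmark logic reflects the abstract):

```python
import math
import random

def naive_rw_forecast(y):
    """One-step-ahead naive random-walk forecast: the prediction for
    y[t] is simply y[t-1], the standard benchmark in forecast
    comparisons like the one described above."""
    return y[:-1]

def rmse(pred, actual):
    return math.sqrt(sum((p - a) ** 2
                         for p, a in zip(pred, actual)) / len(actual))

# Toy persistent series: y_t = 0.9 * y_{t-1} + shock (synthetic data,
# used only to illustrate the benchmark comparison).
rng = random.Random(0)
y = [2.0]
for _ in range(200):
    y.append(0.9 * y[-1] + rng.gauss(0.0, 0.2))

rw_rmse = rmse(naive_rw_forecast(y), y[1:])
mean_fc = sum(y[:-1]) / (len(y) - 1)
mean_rmse = rmse([mean_fc] * (len(y) - 1), y[1:])
# For a persistent series the random-walk benchmark beats the
# unconditional mean, which is why it is a demanding baseline.
```

Candidate models (here, the paper's neural and kernel regressions) are judged by whether they beat `rw_rmse` out of sample.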