939 results for random walk hypothesis
Abstract:
Query-focused summarization is the task of producing a compressed text of an original set of documents based on a query. Documents can be viewed as a graph with sentences as nodes and edges added based on sentence similarity. Graph-based ranking algorithms that use a 'biased random surfer' model, such as topic-sensitive LexRank, have been successfully applied to query-focused summarization. In these algorithms, the random walk is biased towards sentences which contain query-relevant words. Specifically, it is assumed that the random surfer knows the query-relevance score of the sentence to which he jumps; however, the neighbourhood information of that sentence is completely ignored. In this paper, we propose a look-ahead version of topic-sensitive LexRank. We assume that the random surfer not only knows the query relevance of the sentence to which he jumps but can also look N steps ahead from that sentence to find the query-relevance scores of future sentences. Using this look-ahead information, we identify sentences which are indirectly related to the query by counting the number of hops needed to reach a sentence that contains query-relevant words. We then bias the random walk towards these indirectly query-relevant sentences along with the sentences which contain query-relevant words. Experimental results show a 20.2% increase in ROUGE-2 score compared to topic-sensitive LexRank on the DUC 2007 data set. Further, our system outperforms the best systems in DUC 2006, and results are comparable to state-of-the-art systems.
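As a rough sketch of the biased-surfer idea (not the paper's implementation), a query-biased ranking can be written as a PageRank-style power iteration whose teleport distribution is the vector of query-relevance scores. The function name, damping value, and toy matrices below are illustrative assumptions:

```python
import numpy as np

def biased_lexrank(sim, bias, d=0.85, tol=1e-8):
    """Power iteration for a query-biased random walk over sentences.

    sim  : (n, n) sentence-similarity matrix (nonnegative)
    bias : (n,) query-relevance scores (need not be normalized)
    d    : probability of following a similarity edge; with
           probability 1 - d the surfer teleports to a query-relevant
           sentence, which biases the walk towards the query.
    """
    n = sim.shape[0]
    P = sim / sim.sum(axis=1, keepdims=True)  # row-stochastic transition matrix
    b = bias / bias.sum()                     # teleport (bias) distribution
    r = np.full(n, 1.0 / n)                   # start from the uniform distribution
    while True:
        r_new = (1 - d) * b + d * (P.T @ r)   # one step of the biased walk
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new
```

With uniform similarities and all query relevance on the first sentence, the first sentence receives the highest rank, as expected.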
Abstract:
Nuclear pore complexes (NPCs) are highly selective filters that sit on the membrane of the nucleus and monitor transport between the cytoplasm and the nucleoplasm. For the central plug of the NPC, two models have been suggested in the literature: the first holds that the plug is a reversible hydrogel, while the other holds that it is a polymer brush. Here we propose a model for the transport of a protein through the plug which is general enough to cover both. The protein stretches the plug and creates a local deformation, which, together with the protein, we refer to as the bubble. We start with the free energy for creation of the bubble and consider its motion within the plug. The relevant coordinate is the center of the bubble, which executes a random walk. We find that the faster the relaxation of the gel, the greater the diffusion of the bubble. (C) 2014 Elsevier B.V. All rights reserved.
Abstract:
The average time tau_r for one end of a long, self-avoiding polymer to interact for the first time with a flat penetrable surface to which it is attached at the other end is shown here to scale essentially as the square of the chain's contour length N. This result is obtained within the framework of the Wilemski-Fixman approximation to diffusion-limited reactions, in which the reaction time is expressed as a time correlation function of a "sink" term. In the present work, this sink-sink correlation function is calculated using perturbation expansions in the excluded volume and the polymer-surface interactions, with renormalization group methods being used to resum the expansion into a power-law form. The quadratic dependence of tau_r on N mirrors the behavior of the average time tau_c of a free random walk to cyclize, but contrasts with the cyclization time of a free self-avoiding walk (SAW), for which tau_r ~ N^2.2. A simulation study by Cheng and Makarov [J. Phys. Chem. B 114, 3321 (2010)] of the chain-end reaction time of an SAW on a flat impenetrable surface leads to the same N^2.2 behavior, which is surprising given the reduced conformational space a tethered polymer has to explore in order to react. (C) 2014 AIP Publishing LLC.
Abstract:
Rugged energy landscapes find wide applications in diverse fields ranging from astrophysics to protein folding. We study the dependence of the diffusion coefficient D of a Brownian particle on the distribution width epsilon of randomness in a Gaussian random landscape by simulations and theoretical analysis. We first show that the elegant expression of Zwanzig [Proc. Natl. Acad. Sci. U.S.A. 85, 2029 (1988)] for D(epsilon) can be reproduced exactly by using the Rosenfeld diffusion-entropy scaling relation. Our simulations show that Zwanzig's expression overestimates D in an uncorrelated Gaussian random lattice, differing by almost an order of magnitude at moderately high ruggedness. The disparity originates from the presence of "three-site traps" (TSTs) on the landscape, which are formed by deep minima flanked by high barriers on either side. Using the mean first passage time formalism, we derive a general expression for the effective diffusion coefficient in the presence of TSTs that quantitatively reproduces the simulation results and reduces to Zwanzig's form only in the limit of infinite spatial correlation. We construct a continuous Gaussian field with inherent correlation to establish the effect of spatial correlation on the random walk. The presence of TSTs at large ruggedness (epsilon >> k_B T) gives rise to an apparent breakdown of ergodicity of the type often encountered in glassy liquids. (C) 2014 AIP Publishing LLC.
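For reference, Zwanzig's expression for diffusion on a Gaussian random landscape, which this abstract reports is overestimated once three-site traps appear, can be evaluated directly. This is a sketch; the function name, default free diffusion coefficient, and unit conventions are assumptions:

```python
import math

def zwanzig_D(eps, D0=1.0, kBT=1.0):
    """Zwanzig's effective diffusion coefficient for a Brownian
    particle on a Gaussian random landscape of roughness eps:
    D = D0 * exp(-(eps / kBT)**2).  The exponential suppression
    grows rapidly with ruggedness eps."""
    return D0 * math.exp(-(eps / kBT) ** 2)
```

At eps = 0 the landscape is flat and D reduces to D0; for eps >> kBT the effective diffusion is exponentially suppressed, which is the regime where the abstract reports trapping effects beyond this formula.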
Abstract:
There is a need to use probability distributions with power-law decaying tails to describe the large variations exhibited by some physical phenomena. The Weierstrass Random Walk (WRW) shows promise for modeling such phenomena. The theory of anomalous diffusion is now well established and has found a number of applications in physics, chemistry and biology. However, its applications are limited in structural mechanics in general, and structural engineering in particular. The aim of this paper is to present some mathematical preliminaries related to the WRW that would help in possible applications. In the limiting case, it represents a diffusion process whose evolution is governed by a fractional partial differential equation. Three applications of superdiffusion processes in mechanics, illustrating their effectiveness in handling large variations, are presented.
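A minimal sketch of drawing WRW steps, assuming the standard two-parameter form in which a jump of length b**j occurs with probability proportional to a**(-j), so that long jumps have power-law-decaying probability. Function names, parameter defaults, and the seed are illustrative:

```python
import random

def wrw_step(a=2.0, b=3.0, rng=random):
    """One Weierstrass random walk step: magnitude b**j, where the
    exponent j is geometric, P(j) = (1 - 1/a) * a**(-j)."""
    j = 0
    while rng.random() < 1.0 / a:   # advance the exponent with prob. 1/a
        j += 1
    sign = 1 if rng.random() < 0.5 else -1
    return sign * b ** j

def wrw_path(n, a=2.0, b=3.0, seed=0):
    """Cumulative positions of an n-step WRW starting at the origin."""
    rng = random.Random(seed)
    x, path = 0.0, [0.0]
    for _ in range(n):
        x += wrw_step(a, b, rng)
        path.append(x)
    return path
```

Occasional very long jumps dominate the spread of the walk, which is the superdiffusive behavior the abstract refers to.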
Abstract:
We propose a distributed sequential algorithm for quick detection of spectral holes in a Cognitive Radio setup. Two or more local nodes make decisions and inform the fusion centre (FC) over a reporting Multiple Access Channel (MAC), which then makes the final decision. The local nodes use energy detection and the FC uses mean detection in the presence of fading, heavy-tailed electromagnetic interference (EMI) and outliers. The statistics of the primary signal, channel gain and EMI are not known. Different nonparametric sequential algorithms are compared in order to choose appropriate algorithms to be used at the local nodes and the FC. A modification of a recently developed random walk test is selected for energy detection at the local nodes as well as for mean detection at the fusion centre. We show via simulations and analysis that the nonparametric distributed algorithm developed performs well in the presence of fading, EMI and outliers. The algorithm is iterative in nature, making its computation and storage requirements minimal.
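The paper's modified random walk test is not specified in this abstract. As a generic sketch of the underlying idea only, a sequential detection test can be phrased as a cumulative sum, which performs a random walk under the observation drift, checked against two stopping boundaries. All names, thresholds, and the drift parameter below are assumptions:

```python
def sequential_walk_test(samples, drift, upper, lower):
    """Generic sequential test sketch: the running sum of
    (sample - drift) is a random walk; return +1 (signal present)
    when it crosses `upper`, -1 (signal absent) when it crosses
    `lower`, or 0 if the samples run out undecided."""
    s = 0.0
    for x in samples:
        s += x - drift
        if s >= upper:
            return 1
        if s <= lower:
            return -1
    return 0
```

The appeal of this structure in a distributed setting is that each node keeps only the scalar sum, which matches the minimal computation and storage requirements noted above.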
Abstract:
The time correlations of pressure modes in stationary isotropic turbulence are investigated under the Kraichnan and Tennekes "random sweeping" hypothesis. A simple model is obtained which predicts a universal form for the time correlations. It implies that the decorrelation process of pressure fluctuations in time is mainly dominated by the sweeping velocity, and the pressure correlations have the same decorrelation time scales as the velocity correlations. These results are verified using direct numerical simulations of isotropic turbulence at two moderate Reynolds numbers; the mode correlations collapse to the universal form when the time separations are scaled by wavenumber times the sweeping velocity, and the ratios of the correlation coefficients of pressure modes to those of velocity modes are approximately unity for the entire range of time separation. (c) 2008 American Institute of Physics.
Abstract:
Published as an article in: Studies in Nonlinear Dynamics & Econometrics, 2004, vol. 8, issue 1, pages 5.
Abstract:
In the hybrid approach of large-eddy simulation (LES) and Lighthill’s acoustic analogy for turbulence-generated sound, the turbulence source fields are obtained using an LES and the turbulence-generated sound at far fields is calculated from Lighthill’s acoustic analogy. As only the velocity fields at resolved scales are available from the LES, the Lighthill stress tensor, serving as a source term in Lighthill’s acoustic equation, has to be evaluated from the resolved velocity fields. As a result, the contribution from the unresolved velocity fields is missing in the conventional LES. The sound of missing scales is shown to be important and hence needs to be modeled. The present study proposes a kinematic subgrid-scale (SGS) model which recasts the unresolved velocity fields into Lighthill’s stress tensors. A kinematic simulation is used to construct the unresolved velocity fields with the imposed temporal statistics, which is consistent with the random sweeping hypothesis. The kinematic SGS model is used to calculate sound power spectra from isotropic turbulence and yields an improved result: the missing portion of the sound power spectra is approximately recovered in the LES.
Abstract:
Over the last century, the silicon revolution has enabled us to build faster, smaller and more sophisticated computers. Today, these computers control phones, cars, satellites, assembly lines, and other electromechanical devices. Just as electrical wiring controls electromechanical devices, living organisms employ "chemical wiring" to make decisions about their environment and control physical processes. Currently, the big difference between these two substrates is that while we have the abstractions, design principles, verification and fabrication techniques in place for programming with silicon, we have no comparable understanding or expertise for programming chemistry.
In this thesis we take a small step towards the goal of learning how to systematically engineer prescribed non-equilibrium dynamical behaviors in chemical systems. We use the formalism of chemical reaction networks (CRNs), combined with mass-action kinetics, as our programming language for specifying dynamical behaviors. Leveraging the tools of nucleic acid nanotechnology (introduced in Chapter 1), we employ synthetic DNA molecules as our molecular architecture and toehold-mediated DNA strand displacement as our reaction primitive.
Abstraction, modular design and systematic fabrication can work only with well-understood and quantitatively characterized tools. Therefore, we embark on a detailed study of the "device physics" of DNA strand displacement (Chapter 2). We present a unified view of strand displacement biophysics and kinetics by studying the process at multiple levels of detail, using an intuitive model of a random walk on a 1-dimensional energy landscape, a secondary structure kinetics model with single base-pair steps, and a coarse-grained molecular model that incorporates three-dimensional geometric and steric effects. Further, we experimentally investigate the thermodynamics of three-way branch migration. Our findings are consistent with previously measured or inferred rates for hybridization, fraying, and branch migration, and provide a biophysical explanation of strand displacement kinetics. Our work paves the way for accurate modeling of strand displacement cascades, which would facilitate the simulation and construction of more complex molecular systems.
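The 1-dimensional energy-landscape picture can be illustrated, purely as a toy and not as the thesis's secondary-structure kinetics model, by a Metropolis random walk over a list of state energies (e.g. branch-migration intermediates). All names and energies are illustrative:

```python
import math
import random

def metropolis_walk(E, steps, start=0, kT=1.0, seed=1):
    """Toy 1-D random walk on an energy landscape E (a list of state
    energies in units of kT): propose a move to a neighbouring state
    and accept it with the Metropolis probability min(1, exp(-dE/kT)).
    Returns the final state index."""
    rng = random.Random(seed)
    i = start
    for _ in range(steps):
        j = i + rng.choice((-1, 1))          # propose a neighbour
        if 0 <= j < len(E):
            dE = E[j] - E[i]
            if dE <= 0 or rng.random() < math.exp(-dE / kT):
                i = j                        # accept downhill, or uphill with Boltzmann prob.
    return i
```

On a flat landscape the walker diffuses freely, while a single high barrier effectively blocks it, which is the intuition behind sawtooth-shaped branch-migration landscapes.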
In Chapters 3 and 4, we identify and overcome the crucial experimental challenges involved in using our general DNA-based technology for engineering dynamical behaviors in the test tube. In this process, we identify important design rules that inform our choice of molecular motifs and our algorithms for designing and verifying DNA sequences for our molecular implementation. We also develop flexible molecular strategies for "tuning" our reaction rates and stoichiometries in order to compensate for unavoidable non-idealities in the molecular implementation, such as imperfectly synthesized molecules and spurious "leak" pathways that compete with desired pathways.
We successfully implement three distinct autocatalytic reactions, which we then combine into a de novo chemical oscillator. Unlike biological networks, which use sophisticated evolved molecules (like proteins) to realize such behavior, our test tube realization is the first to demonstrate that Watson-Crick base pairing interactions alone suffice for oscillatory dynamics. Since our design pipeline is general and applicable to any CRN, our experimental demonstration of a de novo chemical oscillator could enable the systematic construction of CRNs with other dynamic behaviors.
Abstract:
Abundance indices derived from fishery-independent surveys typically exhibit much higher interannual variability than is consistent with the within-survey variance or the life history of a species. This extra variability is essentially observation noise (i.e. measurement error); it probably reflects environmentally driven factors that affect catchability over time. Unfortunately, high observation noise reduces the ability to detect important changes in the underlying population abundance. In our study, a noise-reduction technique for uncorrelated observation noise that is based on autoregressive integrated moving average (ARIMA) time series modeling is investigated. The approach is applied to 18 time series of finfish abundance, which were derived from trawl survey data from the U.S. northeast continental shelf. Although the a priori assumption of a random-walk-plus-uncorrelated-noise model generally yielded a smoothed result that is pleasing to the eye, we recommend that the most appropriate ARIMA model be identified for the observed time series if the smoothed time series will be used for further analysis of the population dynamics of a species.
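A random-walk-plus-uncorrelated-noise model is the local level model, for which the standard Kalman filter yields the noise-reduced level estimate. The sketch below assumes known process and observation noise variances q and r rather than the ARIMA identification step the study recommends; the function name is an assumption:

```python
def local_level_filter(y, q, r):
    """Kalman filter for the random-walk-plus-noise (local level)
    model: level_t = level_{t-1} + w_t (var q), y_t = level_t + v_t
    (var r).  Returns the filtered level estimates."""
    level, P = y[0], r          # initialize at the first observation
    out = [level]
    for obs in y[1:]:
        P = P + q               # predict: the random-walk step adds variance q
        K = P / (P + r)         # Kalman gain: weight on the new observation
        level = level + K * (obs - level)
        P = (1 - K) * P
        out.append(level)
    return out
```

Each filtered value is a compromise between the previous level and the new observation, which is what produces the smoothed, "pleasing to the eye" series the abstract describes.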
Abstract:
In 1828, a phenomenon was observed under the microscope in which tiny pollen grains immersed in a liquid at rest moved about randomly, tracing a disordered path; the question was how to understand this movement. About 80 years later, Einstein (1905) developed a mathematical formulation to explain this phenomenon, known as Brownian motion, a theory that has since been developed in many areas of knowledge, including, recently, computational modeling. The aim here is to lay out the basic assumptions underlying the simple random walk, considering experiments with and without boundary value problems, for a better understanding of algorithms applied to computational problems. The tools needed to apply simulation models of the simple random walk in the first three dimensions of space are presented. Interest is directed both at the simple random walk itself and at possible applications to the gambler's ruin problem and the spread of viruses in computer networks. Algorithms for the one-dimensional simple random walk, with and without boundary value problems, were developed on the R platform and similarly implemented for two- and three-dimensional spaces, enabling future applications to the problem of virus spread in computer networks and serving as motivation for the study of the heat equation, although a stronger grounding in concepts from physics and probability is needed to pursue that application further.
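The gambler's ruin application mentioned above reduces to a one-dimensional simple random walk with absorbing boundaries. The work describes implementations on the R platform; the Python sketch below is illustrative only, with all names and parameters assumed:

```python
import random

def gamblers_ruin(capital, goal, p=0.5, seed=0):
    """Simple 1-D random walk with absorbing boundaries at 0 and
    `goal` (the gambler's ruin problem): at each step the capital
    moves +1 with probability p, otherwise -1.  Returns the final
    absorbed state, either 0 (ruin) or `goal` (win)."""
    rng = random.Random(seed)
    x = capital
    while 0 < x < goal:
        x += 1 if rng.random() < p else -1
    return x
```

For a fair game (p = 0.5), the classical result is that the probability of reaching `goal` before ruin equals capital / goal, which a quick Monte Carlo run reproduces.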
Abstract:
Even though synchronization in autonomous systems has been observed for over three centuries, reports of systematic experimental studies on synchronized oscillators are limited. Here, we report on observations of internal synchronization in coupled silicon micromechanical oscillators associated with a reduction in the relative phase random walk that is modulated by the magnitude of the reactive coupling force between the oscillators. Additionally, for the first time, a significant improvement in the frequency stability of synchronized micromechanical oscillators is reported. The concept presented here is scalable and could be suitably engineered to establish the basis for a new class of highly precise miniaturized clocks and frequency references. © 2013 American Physical Society.
Abstract:
Traditional approaches to upper body pose estimation using monocular vision rely on complex body models and a large variety of geometric constraints. We argue that this is not ideal and somewhat inelegant as it results in large processing burdens, and instead attempt to incorporate these constraints through priors obtained directly from training data. A prior distribution covering the probability of a human pose occurring is used to incorporate likely human poses. This distribution is obtained offline, by fitting a Gaussian mixture model to a large dataset of recorded human body poses, tracked using a Kinect sensor. We combine this prior information with a random walk transition model to obtain an upper body model, suitable for use within a recursive Bayesian filtering framework. Our model can be viewed as a mixture of discrete Ornstein-Uhlenbeck processes, in that states behave as random walks, but drift towards a set of typically observed poses. This model is combined with measurements of the human head and hand positions, using recursive Bayesian estimation to incorporate temporal information. Measurements are obtained using face detection and a simple skin colour hand detector, trained using the detected face. The suggested model is designed with analytical tractability in mind and we show that the pose tracking can be Rao-Blackwellised using the mixture Kalman filter, allowing for computational efficiency while still incorporating bio-mechanical properties of the upper body. In addition, the use of the proposed upper body model allows reliable three-dimensional pose estimates to be obtained indirectly for a number of joints that are often difficult to detect using traditional object recognition strategies. Comparisons with Kinect sensor results and the state of the art in 2D pose estimation highlight the efficacy of the proposed approach.
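The "random walk that drifts towards a set of typically observed poses" transition can be sketched as one step of a discrete Ornstein-Uhlenbeck process. The values of mu, alpha, and sigma below are illustrative, not parameters from the paper:

```python
import numpy as np

def ou_step(x, mu, alpha=0.1, sigma=0.05, rng=None):
    """One discrete Ornstein-Uhlenbeck transition: the state x
    behaves as a random walk (noise term) but drifts towards a
    typical pose mu with rate alpha.  alpha=0 gives a pure random
    walk; alpha=1 snaps straight to mu."""
    rng = rng or np.random.default_rng(0)
    return x + alpha * (mu - x) + sigma * rng.standard_normal(x.shape)
```

Because the transition is linear-Gaussian, it composes cleanly with a Kalman filter, which is the analytical tractability the abstract exploits via Rao-Blackwellisation.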
Abstract:
This paper presents a direct digital frequency synthesizer (DDFS) with a 16-bit accumulator, a fourth-order phase-domain single-stage Delta-Sigma interpolator, and a 300-MS/s 12-bit current-steering DAC based on the Q^2 Random Walk switching scheme. The Delta-Sigma interpolator is used to reduce the phase truncation error and the ROM size. The implemented fourth-order single-stage Delta-Sigma noise shaper reduces the effective phase bits by four and the ROM size by 16 times. The DDFS prototype is fabricated in a 0.35-um CMOS technology with an active area of 1.11 mm^2 including the 12-bit DAC. The measured DDFS spurious-free dynamic range (SFDR) is greater than 78 dB using a reduced ROM with 8-bit phase, 12-bit amplitude resolution and a size of 0.09 mm^2. The total power consumption of the DDFS is 200 mW with a 3.3-V power supply.