935 results for Random Walk Models
Abstract:
To improve percolation modelling of soils, the geometrical properties of the pore space must be understood; these include porosity, particle and pore size distributions, and the connectivity of the pores. A study was conducted on a soil at different bulk densities based on 3D grey images acquired by X-ray computed tomography. The objective was to analyze the effect of pore-network geometry on percolation and to discuss the influence of the grey threshold applied to the images. A model based on random walk algorithms was applied to the images, combining five bulk densities with up to six threshold values per density. This allowed a dynamical perspective of soil structure in relation to water transport through the inclusion of percolation speed in the analyses. To evaluate connectivity separately and isolate the effect of the grey threshold, a critical porosity of 35% was selected for every density. This value was the smallest at which total-percolation walks appeared in all images of the same porosity and may represent a situation of percolation comparable among bulk densities. This criterion avoided an arbitrary choice of grey thresholds. In addition, a random-matrix simulation at 35% porosity was used, together with the real images, to test whether pore connectivity is a consequence of a non-random soil structure.
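A minimal sketch of the kind of random-walk percolation test described above, assuming a 3D grey-level array binarized at a chosen threshold (pore voxels below the threshold); the walker rules, step counts and the percolation criterion are illustrative, not the authors' actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

def binarize(grey, threshold):
    """Pore space = voxels whose grey value is below the threshold (assumption)."""
    return grey < threshold

def percolating_fraction(pores, n_walkers=200, max_steps=2000):
    """Launch walkers on the top face; report the fraction that reach the
    bottom face through pore voxels (a crude percolation indicator)."""
    nz, ny, nx = pores.shape
    steps = np.array([[1,0,0],[-1,0,0],[0,1,0],[0,-1,0],[0,0,1],[0,0,-1]])
    starts = np.argwhere(pores[0])          # pore voxels on the entry face
    if len(starts) == 0:
        return 0.0
    hits = 0
    for _ in range(n_walkers):
        z, y, x = 0, *starts[rng.integers(len(starts))]
        for _ in range(max_steps):
            dz, dy, dx = steps[rng.integers(6)]
            z2, y2, x2 = z + dz, y + dy, x + dx
            if 0 <= z2 < nz and 0 <= y2 < ny and 0 <= x2 < nx and pores[z2, y2, x2]:
                z, y, x = z2, y2, x2
            if z == nz - 1:                 # reached the exit face
                hits += 1
                break
    return hits / n_walkers

# toy grey image standing in for the X-ray CT data
grey = rng.random((40, 40, 40))
for thr in (0.30, 0.35, 0.40):              # thresholds give different porosities
    pores = binarize(grey, thr)
    print(f"threshold {thr:.2f}  porosity {pores.mean():.2f}  "
          f"percolating fraction {percolating_fraction(pores):.3f}")
```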
Abstract:
A connectivity function defined by the 3D Euler number is a topological indicator and can be related to hydraulic properties (Vogel and Roth, 2001). This study aims to develop connectivity Euler indexes as indicators of the ability of soils to percolate fluids. The starting point was a 3D grey image, acquired by X-ray computed tomography, of a soil at a bulk density of 1.2 g cm-3. This image was used in the simulation of 40000 particles following a directed random walk algorithm with 7 binarization thresholds. These data consisted of 7 files containing the simulated end points of the 40000 random walks, obtained in Ruiz-Ramos et al. (2010). MATLAB software was used to compute the frequency matrix of the number of particles arriving at every end point of the random walks and to produce their 3D representation.
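The frequency matrix described here can be reproduced with a short script; a sketch assuming the end points are stored as integer (row, column) coordinates on the exit plane (file format and array sizes are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# stand-in for one of the seven files of simulated end points:
# 40000 (row, col) coordinates on a 200 x 200 exit plane
end_points = rng.integers(0, 200, size=(40000, 2))

def endpoint_frequency(end_points, shape=(200, 200)):
    """Count how many walkers arrive at each end point of the exit plane."""
    freq = np.zeros(shape, dtype=int)
    np.add.at(freq, (end_points[:, 0], end_points[:, 1]), 1)
    return freq

freq = endpoint_frequency(end_points)
print("total walkers:", freq.sum())                                  # 40000
print("most visited end point:", np.unravel_index(freq.argmax(), freq.shape))
```

A 3D representation is then simply a surface plot of `freq` over the exit plane (for instance with matplotlib's `plot_surface`).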
Abstract:
Introduction Diffusion-weighted imaging (DWI) techniques are able to measure, in vivo and non-invasively, the diffusivity of water molecules inside the human brain. DWI has been applied to cerebral ischemia, brain maturation, epilepsy, multiple sclerosis, etc. [1], and these images are now widely available. DWI allows the identification of brain tissues, so their accurate segmentation is a common initial step for the applications mentioned above. Materials and Methods We present a validation study on automated segmentation of DWI based on Gaussian mixture and hidden Markov random field models. This model is commonly solved with the iterated conditional modes algorithm, but some studies suggest [2] that graph-cut (GC) algorithms improve the results when the initialization is not close to the final solution. We implemented a segmentation tool integrating ITK with a GC algorithm [3], and validation software using fuzzy overlap measures [4]. Results The segmentation accuracy of each tool is tested against a gold-standard segmentation obtained from a T1 MPRAGE magnetic resonance image of the same subject, registered to the DWI space. The proposed software shows meaningful improvements when using the GC energy minimization approach on DTI and DSI (diffusion spectrum imaging) data. Conclusions Segmentation of brain tissues in DWI is a fundamental step in many applications. Accuracy and robustness improvements are achieved with the proposed software, with high impact on the applications' final results.
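One commonly used fuzzy overlap measure, a generalized Dice coefficient over membership maps, can serve as a sketch of the kind of validation described; this illustrates the idea only and is not necessarily the exact set of measures of reference [4]:

```python
import numpy as np

def fuzzy_dice(a, b, eps=1e-12):
    """Generalized Dice overlap between two fuzzy membership maps in [0, 1]:
    2 * sum(min(a, b)) / (sum(a) + sum(b))."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return 2.0 * np.minimum(a, b).sum() / (a.sum() + b.sum() + eps)

# toy example: per-voxel membership to one tissue class for the automated
# segmentation (seg) and the registered gold standard (gold)
rng = np.random.default_rng(2)
gold = rng.random((64, 64, 32))
seg = np.clip(gold + 0.1 * rng.standard_normal(gold.shape), 0.0, 1.0)
print(f"fuzzy Dice overlap: {fuzzy_dice(seg, gold):.3f}")
```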
Abstract:
We present direct-drive target design studies for the Laser MégaJoule using two distinct initial aspect ratios (A = 34 and A = 5). Laser pulse shapes are optimized by a random walk method, and drive power variations are used to cover a wide range of implosion velocities between 260 km/s and 365 km/s. For selected implosion velocities and for each initial aspect ratio, scaled-target families are built in order to find the self-ignition threshold. High-gain shock ignition is also investigated in the context of the Laser MégaJoule for marginally igniting targets below their own self-ignition threshold.
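A "random walk" optimization of a pulse shape can be pictured as a stochastic hill-climbing loop over the pulse parameters; the objective function below is a placeholder (in practice the figure of merit would come from implosion simulations), so this is only a schematic sketch of the idea:

```python
import numpy as np

rng = np.random.default_rng(3)

def objective(pulse):
    """Placeholder figure of merit; in reality this would be evaluated by a
    radiation-hydrodynamics simulation of the implosion."""
    target = np.linspace(0.1, 1.0, pulse.size) ** 3   # arbitrary reference shape
    return -np.sum((pulse - target) ** 2)

def random_walk_optimize(n_points=20, n_iter=5000, step=0.02):
    """Randomly perturb the pulse shape and keep perturbations that improve
    the objective (a random-walk search in parameter space)."""
    pulse = np.full(n_points, 0.5)
    best = objective(pulse)
    for _ in range(n_iter):
        trial = np.clip(pulse + step * rng.standard_normal(n_points), 0.0, 1.0)
        score = objective(trial)
        if score > best:
            pulse, best = trial, score
    return pulse, best

pulse, best = random_walk_optimize()
print("best objective reached:", round(best, 4))
```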
Abstract:
*** The WCTR is a congress of recognized international prestige in the field of transport research which, until 2010, published its books of abstracts with an ISBN; we therefore consider that it should still be taken into account for quality indicators. *** Investment projects in the field of transportation infrastructures involve a high degree of uncertainty and require a substantial amount of resources. In highway concessions in particular, calculating the Net Present Value (NPV) of the project by discounting cash flows may lead to erroneous results when the project incorporates some degree of flexibility. In these cases, the theory of real options is an alternative tool for the valuation of concessions. When the variable that generates uncertainty (in our case, traffic) follows a random walk (or Geometric Brownian Motion), we can calculate the value of the options embedded in the contract directly from the process followed by that variable, which notably simplifies the calculation. In order to test the hypothesis that traffic evolves as a Geometric Brownian Motion, we have used the available traffic series for Spanish highways and applied the Augmented Dickey-Fuller test, the most widely used test for this kind of study. The main result of the analysis is that we cannot reject the hypothesis that traffic follows a Geometric Brownian Motion in the majority of both toll highways and free highways in Spain.
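A sketch of the unit-root check described above, using the Augmented Dickey-Fuller test from statsmodels on a (log) traffic series; the data here are synthetic GBM-like values, since the Spanish highway series are not reproduced in the abstract:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(4)

# synthetic annual traffic series following a geometric Brownian motion
mu, sigma, n = 0.03, 0.10, 40
log_traffic = np.log(10000) + np.cumsum(
    mu - 0.5 * sigma**2 + sigma * rng.standard_normal(n))

# ADF test on log-traffic: failing to reject the unit root is consistent with
# traffic following a random walk with drift (i.e. a GBM in levels)
stat, pvalue, usedlag, nobs, crit, icbest = adfuller(log_traffic, regression="ct")
print(f"ADF statistic = {stat:.3f}, p-value = {pvalue:.3f}")
if pvalue > 0.05:
    print("cannot reject the unit root: compatible with a GBM for traffic")
```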
Abstract:
Monte Carlo (MC) methods are widely used in signal processing, machine learning and stochastic optimization. A well-known class of MC methods is that of Markov Chain Monte Carlo (MCMC) algorithms. In this work, we introduce a novel parallel interacting MCMC scheme, where the parallel chains share information using another MCMC technique working on the entire population of current states. These parallel "vertical" chains are driven by random-walk proposals, whereas the "horizontal" MCMC uses an independent proposal, which can easily be adapted by making use of all the generated samples. Numerical results show the advantages of the proposed sampling scheme in terms of mean absolute error, as well as robustness with respect to initial values and parameter choice.
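A much-simplified sketch of the scheme's structure, assuming a generic toy target density: several "vertical" random-walk Metropolis chains run in parallel, and a periodic "horizontal" step applies an independence proposal adapted from all samples generated so far. The adaptation rule, target and tuning values are illustrative, not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(5)

def log_target(x):
    """Toy target: mixture of two 1-D Gaussians (up to a constant)."""
    return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

n_chains, n_iter, rw_step, horiz_every = 8, 5000, 1.0, 50
states = rng.standard_normal(n_chains)
samples = []

for it in range(n_iter):
    # "vertical" chains: one random-walk Metropolis update per chain
    props = states + rw_step * rng.standard_normal(n_chains)
    accept = np.log(rng.random(n_chains)) < log_target(props) - log_target(states)
    states = np.where(accept, props, states)
    samples.append(states.copy())

    # "horizontal" step: Gaussian independence proposal fitted to all samples
    if (it + 1) % horiz_every == 0:
        pool = np.concatenate(samples)
        mu, sd = pool.mean(), pool.std() + 1e-6
        props = mu + sd * rng.standard_normal(n_chains)
        log_q_states = -0.5 * ((states - mu) / sd) ** 2
        log_q_props = -0.5 * ((props - mu) / sd) ** 2
        log_r = log_target(props) - log_target(states) + log_q_states - log_q_props
        accept = np.log(rng.random(n_chains)) < log_r
        states = np.where(accept, props, states)

pool = np.concatenate(samples)
print(f"posterior mean estimate: {pool.mean():.3f} (target mean is 0)")
```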
Abstract:
Directed hypergraphs are an intuitive modelling formalism that has been used in problems related to propositional logic, relational databases, computational linguistics and machine learning. Directed hypergraphs are also presented as an alternative to directed (bipartite) graphs to facilitate the study of the interactions between components of complex systems that cannot naturally be modelled as binary relations. In this context, they are known as hyper-networks. A directed hypergraph is a generalization of a directed graph suitable for representing many-to-many relationships. While an edge in a directed graph defines a relation between two nodes of the graph, a hyperedge in a directed hypergraph defines a relation between two sets of nodes. Strong connectivity is an equivalence relation that induces a partition of the set of nodes of a directed hypergraph into strongly-connected components. These components can be collapsed into single nodes. As a result, the size of the original hypergraph can be significantly reduced if the strongly-connected components have many nodes. This approach can contribute to a better understanding of how the nodes of a hypergraph are connected, in particular when the hypergraphs are large. In the case of directed graphs, there are efficient algorithms that can be used to compute the strongly-connected components of large graphs. For instance, it has been shown that the macroscopic structure of the World Wide Web can be represented as a "bow-tie" diagram where more than 70% of the nodes are distributed into three large sets and one of these sets is a large strongly-connected component.
This particular structure has also been observed in complex networks in other fields, such as biology. Similar studies cannot be conducted for directed hypergraphs because no algorithm exists for computing the strongly-connected components of such hypergraphs. In this thesis, we investigate ways to compute the strongly-connected components of directed hypergraphs. We present two new algorithms and we show their correctness and computational complexity. One of these algorithms is inspired by Tarjan's algorithm for directed graphs. The second algorithm follows a simple approach to compute the strongly-connected components. This approach is based on the fact that two nodes that are strongly connected can reach exactly the same nodes; in other words, the connected component of each node is the same. Both algorithms are empirically evaluated to compare their performance. To this end, we have produced a selection of random directed hypergraphs inspired by existing and well-known random graph models like Erdős-Rényi, Newman-Watts-Strogatz and Barabási-Albert. Several optimizations of both algorithms have also been implemented and analysed; in particular, collapsing the strongly-connected components of the directed graph obtained by removing certain complex hyperedges from the original hypergraph notably improves the running times on several of the hypergraphs used in the evaluation. Besides the application examples that we mentioned earlier, directed hypergraphs have also been employed in the field of knowledge representation. In particular, they have been used to compute the modules of an ontology. An ontology is defined as a collection of axioms that provides a formal specification of a set of terms and their relationships, and a module is a subset of an ontology that completely captures the meaning of certain terms as defined in the ontology. In particular, we focus on the modules computed using the notion of syntactic locality. As ontologies can be very large, the computation of modules facilitates the reuse and maintenance of these ontologies. Analysing all modules of an ontology, however, is in general not feasible, as the number of modules grows exponentially in the number of terms and axioms of the ontology. Nevertheless, the modules can succinctly be represented using the Atomic Decomposition of an ontology. Using this representation, an ontology can be partitioned into atoms, which are maximal sets of axioms that co-occur in every module. The Atomic Decomposition is then defined as a directed graph such that each node corresponds to an atom and each edge represents a dependency relation between two atoms. In this thesis, we introduce the notion of an axiom dependency hypergraph, which is a generalization of the atomic decomposition of an ontology. A module in the ontology corresponds to a connected component in the hypergraph, and the atoms of the ontology to the strongly-connected components. We apply our algorithms for directed hypergraphs to axiom dependency hypergraphs and, in this manner, compute the atoms of an ontology. To demonstrate the viability of this approach, we have implemented the algorithms in the application HyS, which computes the modules of ontologies and calculates their atomic decomposition. In the thesis, we provide an experimental evaluation of HyS with a selection of large and prominent biomedical ontologies, most of which are available in the NCBO BioPortal. HyS outperforms state-of-the-art implementations in the tasks of extracting modules and computing the atomic decomposition of these ontologies.
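A naive sketch of the reachability-based idea (not the thesis' optimized algorithms): hyperedges are pairs (tail set, head set), a hyperedge "fires" once all its tail nodes are reachable (B-connectivity semantics, assumed here), and two nodes fall into the same strongly-connected component when each is reachable from the other. The representation and the toy hypergraph are purely illustrative:

```python
from itertools import count

def reachable(source, hyperedges):
    """Forward B-reachability: start from {source}; a hyperedge (tails, heads)
    fires once every tail node is reachable, making all head nodes reachable."""
    reached = {source}
    changed = True
    while changed:
        changed = False
        for tails, heads in hyperedges:
            if set(tails) <= reached and not set(heads) <= reached:
                reached |= set(heads)
                changed = True
    return reached

def strongly_connected_components(nodes, hyperedges):
    """Group nodes by mutual reachability (quadratic, illustration only)."""
    reach = {v: reachable(v, hyperedges) for v in nodes}
    comp, labels = {}, count()
    for v in nodes:
        if v in comp:
            continue
        comp[v] = next(labels)
        for u in nodes:
            if u not in comp and u in reach[v] and v in reach[u]:
                comp[u] = comp[v]
    return comp

# toy directed hypergraph: each hyperedge is ({tail nodes}, {head nodes})
nodes = {1, 2, 3, 4, 5}
hyperedges = [({1}, {2}), ({2, 3}, {4}), ({4}, {1, 3}), ({1}, {3}), ({4}, {5})]
print(strongly_connected_components(nodes, hyperedges))   # 1 and 4 share a component
```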
Abstract:
We explore charge migration in DNA, advancing two distinct mechanisms of charge separation in a donor (d)–bridge ({Bj})–acceptor (a) system, where {Bj} = B1, B2, …, BN are the N specific adjacent bases of B-DNA: (i) two-center unistep superexchange-induced charge transfer, d*{Bj}a → d∓{Bj}a±, and (ii) multistep charge transport, involving charge injection from d* (or d+) to {Bj}, charge hopping within {Bj}, and charge trapping by a. For off-resonance coupling, mechanism (i) prevails, with the charge separation rate and yield exhibiting an exponential dependence ∝ exp(−βR) on the d–a distance R. Resonance coupling results in mechanism (ii), with the charge separation lifetime τ ∝ N^η and yield Y ≃ (1 + δ̄·N^η)^(−1) exhibiting a weak (algebraic) dependence on N and on distance. The power parameter η is determined by the charge-hopping random walk. Energetic control of the charge migration mechanism is exerted by the energetics of the ion-pair state d∓B1±B2…BNa relative to the electronically excited donor doorway state d*B1B2…BNa. The realization of charge separation via superexchange or hopping is determined by the base sequence within the bridge. Our energetic–dynamic relations, in conjunction with the energetic data for d*/d− and for B/B+, determine the realization of the two distinct mechanisms in different hole donor systems, establishing the conditions for “chemistry at a distance” after charge transport in DNA. The energetic control of the charge migration mechanisms attained by the sequence specificity of the bridge is universal for large molecular-scale systems, for proteins, and for DNA.
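The contrasting distance dependences can be made concrete by evaluating the two expressions quoted above, the superexchange rate ∝ exp(−βR) and the hopping yield Y ≃ (1 + δ̄·N^η)^(−1). The parameter values below (β, base spacing, δ̄, η) are illustrative only, not values from the paper:

```python
import numpy as np

# illustrative parameters (not taken from the paper)
beta = 0.7        # 1/Angstrom, superexchange falloff parameter
spacing = 3.4     # Angstrom per base pair in B-DNA
delta_bar = 0.1   # ratio controlling the hopping yield
eta = 1.7         # random-walk power parameter

for n_bases in (1, 2, 4, 8, 16):
    distance = n_bases * spacing
    k_superexchange = np.exp(-beta * distance)       # relative rate, exp(-beta*R)
    y_hopping = 1.0 / (1.0 + delta_bar * n_bases**eta)
    print(f"N = {n_bases:2d}: superexchange ~ {k_superexchange:.2e}, "
          f"hopping yield ~ {y_hopping:.2f}")
```

The output illustrates the qualitative point of the abstract: the superexchange contribution collapses exponentially with bridge length, while the hopping yield falls off only algebraically.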
Abstract:
DNA and other biopolymers differ from classical polymers because of their torsional stiffness. This property changes the statistical character of their conformations under tension from a classical random walk to a problem we call the “torsional directed walk.” Motivated by a recent experiment on single lambda-DNA molecules [Strick, T. R., Allemand, J.-F., Bensimon, D., Bensimon, A. & Croquette, V. (1996) Science 271, 1835–1837], we formulate the torsional directed walk problem and solve it analytically in the appropriate force regime. Our technique affords a direct physical determination of the microscopic twist stiffness C and twist-stretch coupling D relevant for DNA functionality. The theory quantitatively fits existing experimental data for relative extension as a function of overtwist over a wide range of applied force; fitting to the experimental data yields the numerical values C = 120 nm and D = 50 nm. Future experiments will refine these values. We also predict that the phenomenon of reduction of effective twist stiffness by bend fluctuations should be testable in future single-molecule experiments, and we give its analytic form.
Abstract:
How do secretory proteins and other cargo targeted to post-Golgi locations traverse the Golgi stack? We report immunoelectron microscopy experiments establishing that a Golgi-restricted SNARE, GOS 28, is present in the same population of COPI vesicles as anterograde cargo marked by vesicular stomatitis virus glycoprotein, but is excluded from the COPI vesicles containing retrograde-targeted cargo (marked by KDEL receptor). We also report that GOS 28 and its partnering t-SNARE heavy chain, syntaxin 5, reside together in every cisterna of the stack. Taken together, these data raise the possibility that the anterograde cargo-laden COPI vesicles, retained locally by means of tethers, are inherently capable of fusing with neighboring cisternae on either side. If so, quanta of exported proteins would transit the stack in GOS 28–COPI vesicles via a bidirectional random walk, entering at the cis face and leaving at the trans face and percolating up and down the stack in between. Percolating vesicles carrying both post-Golgi cargo and Golgi residents up and down the stack would reconcile disparate observations on Golgi transport in cells and in cell-free systems.
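The proposed transit mode, entry at the cis face, unbiased hopping between adjacent cisternae and exit at the trans face, can be illustrated with a toy random-walk simulation; the number of cisternae and the hop statistics are arbitrary, purely to show how a bidirectional walk still yields net cis-to-trans transit:

```python
import numpy as np

rng = np.random.default_rng(7)

def transit_steps(n_cisternae=6):
    """A vesicle enters at the cis cisterna (index 0), hops to either
    neighbouring cisterna with equal probability (reflecting at the cis face),
    and exits once it moves past the trans cisterna (index n_cisternae - 1)."""
    position, steps = 0, 0
    while position < n_cisternae:
        position = max(0, position + rng.choice((-1, 1)))
        steps += 1
    return steps

samples = [transit_steps() for _ in range(10000)]
print(f"mean number of inter-cisternal hops before exit: {np.mean(samples):.1f}")
```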
Abstract:
Most large dynamical systems are thought to have ergodic dynamics, whereas small systems may not have free interchange of energy between degrees of freedom. This assumption is made in many areas of chemistry and physics, ranging from nuclei to reacting molecules and on to quantum dots. We examine the transition to facile vibrational energy flow in a large set of organic molecules as molecular size is increased. Both analytical and computational results based on local random matrix models describe the transition to unrestricted vibrational energy flow in these molecules. In particular, the models connect the number of states participating in intramolecular energy flow to simple molecular properties such as the molecular size and the distribution of vibrational frequencies. The transition itself is governed by a local anharmonic coupling strength and a local state density. The theoretical results for the transition characteristics compare well with those implied by experimental measurements using IR fluorescence spectroscopy of dilution factors reported by Stewart and McDonald [Stewart, G. M. & McDonald, J. D. (1983) J. Chem. Phys. 78, 3907–3915].
Abstract:
The question of whether proteins originate from random sequences of amino acids is addressed. A statistical analysis is performed in terms of blocked and random walk values formed by binary hydrophobic assignments of the amino acids along the protein chains. Theoretical expectations of these variables from random distributions of hydrophobicities are compared with those obtained from functional proteins. The results, which are based upon proteins in the SWISS-PROT data base, convincingly show that the amino acid sequences in proteins differ from what is expected from random sequences in a statistically significant way. By performing Fourier transforms on the random walks, one obtains additional evidence for nonrandomness of the distributions. We have also analyzed results from a synthetic model containing only two amino acid types, hydrophobic and hydrophilic. With reasonable criteria on good folding properties in terms of thermodynamical and kinetic behavior, sequences that fold well are isolated. Performing the same statistical analysis on the sequences that fold well indicates deviations from randomness similar to those for the functional proteins. The deviations from randomness can be interpreted as originating from anticorrelations in terms of an Ising spin model for the hydrophobicities. Our results, which differ from some previous investigations using other methods, may have an impact on how permissive the protein folding process is with respect to sequence specificity: only sequences with nonrandom hydrophobicity distributions fold well. Other distributions give rise to energy landscapes with poor folding properties and hence did not survive evolution.
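The random-walk construction can be sketched as follows, assuming a simple binary hydrophobicity assignment (the residue set labelled "hydrophobic" here is one common choice, not necessarily the paper's) and comparing the observed walk with shuffled sequences of the same composition:

```python
import numpy as np

rng = np.random.default_rng(8)
HYDROPHOBIC = set("AVLIMFWC")   # one common binary assignment (an assumption)

def hydrophobicity_walk(sequence):
    """Map residues to +1 (hydrophobic) / -1 (hydrophilic) and cumulate."""
    steps = np.array([1 if aa in HYDROPHOBIC else -1 for aa in sequence])
    return np.cumsum(steps)

def walk_statistics(sequence, n_shuffles=1000):
    """Compare the maximum excursion of the real walk with walks obtained
    from shuffled versions of the same sequence (same composition)."""
    observed = np.abs(hydrophobicity_walk(sequence)).max()
    letters = list(sequence)
    shuffled = []
    for _ in range(n_shuffles):
        rng.shuffle(letters)
        shuffled.append(np.abs(hydrophobicity_walk(letters)).max())
    return observed, float(np.mean(shuffled))

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQVKVKALPDAQ"  # toy sequence
obs, expected = walk_statistics(seq)
print(f"max excursion: observed {obs}, mean over shuffles {expected:.1f}")

# Fourier transform of the walk, as used to probe (anti)correlations
spectrum = np.abs(np.fft.rfft(hydrophobicity_walk(seq))) ** 2
print("power in the lowest non-zero Fourier mode:", round(float(spectrum[1]), 1))
```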
Abstract:
Is the pathway of protein folding determined by the relative stability of folding intermediates, or by the relative height of the activation barriers leading to these intermediates? This is a fundamental question for resolving the Levinthal paradox, which stated that protein folding by a random search mechanism would require a time too long to be plausible. To answer this question, we have studied the guanidinium chloride (GdmCl)-induced folding/unfolding of staphylococcal nuclease (SNase; formerly EC 3.1.4.7, now called microbial nuclease or endonuclease, EC 3.1.31.1) by stopped-flow circular dichroism (CD) and differential scanning microcalorimetry (DSC). The data show that while the equilibrium transition is a quasi-two-state process, kinetics in the 2-ms to 500-s time range are triphasic. The data support the sequential mechanism for SNase folding, U3 <--> U2 <--> U1 <--> N0, where U1, U2, and U3 are substates of the unfolded protein and N0 is the native state. Analysis of the relative populations of the U1, U2, and U3 species in 2.0 M GdmCl gives delta-G values of +0.1 kcal/mol for the U3 --> U2 reaction and -0.49 kcal/mol for the U2 --> U1 reaction. The delta-G value for the U1 --> N0 reaction is calculated to be -4.5 kcal/mol from DSC data. The activation energy, enthalpy, and entropy for each kinetic step are also determined. These results allow us to draw the following four conclusions. (i) Although the U1, U2, and U3 states are nearly isoenergetic, no random walk occurs among them during folding. The pathway of folding is unique and sequential. In other words, the relative stability of the folding intermediates does not dictate the folding pathway. Instead, folding is a descent toward the global free-energy minimum of the native state via the path of least activation in the vast energy landscape. Barrier avoidance leads the way, and barrier height limits the rate. Thus, the Levinthal paradox is not applicable to the protein-folding problem. (ii) The main folding reaction (U1 --> N0), in which the peptide chain acquires most of its free energy (via van der Waals contacts, hydrogen bonding, and electrostatic interactions), is a highly concerted process. These energy-acquiring events take place in a single kinetic phase. (iii) U1 appears to be a compact unfolded species; the rate of conversion of U2 to U1 depends on the viscosity of the solution. (iv) All four relaxation times reported here depend on GdmCl concentration, so it is likely that none involves the cis/trans isomerization of prolines. Finally, a mechanism is presented in which formation of sheet-like chain conformations and a hydrophobic condensation event precede the main-chain folding reaction.
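The relative populations of the unfolded substates follow from the quoted free-energy differences via K = exp(−ΔG/RT); a short worked computation, assuming T = 298 K (the temperature is not stated in the abstract) and the standard sign convention ΔG = −RT ln K:

```python
import numpy as np

R = 1.987e-3   # kcal mol^-1 K^-1
T = 298.0      # K, assumed; not given in the abstract

dG = {"U3->U2": 0.1, "U2->U1": -0.49, "U1->N0": -4.5}   # kcal/mol, from the text

# equilibrium constants K = exp(-dG / RT)
K = {step: np.exp(-dg / (R * T)) for step, dg in dG.items()}
print({step: round(k, 2) for step, k in K.items()})

# relative populations of U3, U2 and U1 (in 2.0 M GdmCl), taking U3 = 1
pU3 = 1.0
pU2 = pU3 * K["U3->U2"]
pU1 = pU2 * K["U2->U1"]
total = pU3 + pU2 + pU1
print({"U3": round(pU3/total, 2), "U2": round(pU2/total, 2), "U1": round(pU1/total, 2)})
```

With these numbers the three unfolded substates are indeed nearly isoenergetic (populations of the same order), which is the point of conclusion (i).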
Abstract:
Background: The relationship between deprivation and mortality in urban settings is well established. This relationship has been found for several causes of death in Spanish cities in independent analyses (the MEDEA project). However, no joint analysis which pools the strength of this relationship across several cities has ever been undertaken. Such an analysis would determine, if appropriate, a joint relationship by linking the associations found. Methods: A pooled cross-sectional analysis of the data from the MEDEA project has been carried out for each of the causes of death studied. Specifically, a meta-analysis has been carried out to pool the relative risks in eleven Spanish cities. Different deprivation-mortality relationships across the cities are considered in the analysis (fixed and random effects models). The size of the cities is also considered as a possible factor explaining differences between cities. Results: Twenty studies have been carried out for different combinations of sex and causes of death. For nine of them (men: prostate cancer, diabetes, mental illnesses, Alzheimer’s disease, cerebrovascular disease; women: diabetes, mental illnesses, respiratory diseases, cirrhosis) no differences were found between cities in the effect of deprivation on mortality; in four cases (men: respiratory diseases, all causes of mortality; women: breast cancer, Alzheimer’s disease) differences not associated with the size of the city have been determined; in two cases (men: cirrhosis; women: lung cancer) differences strictly linked to the size of the city have been determined, and in five cases (men: lung cancer, ischaemic heart disease; women: ischaemic heart disease, cerebrovascular diseases, all causes of mortality) both kinds of differences have been found. Except for lung cancer in women, every significant relationship between deprivation and mortality goes in the same direction: deprivation increases mortality. Variability in the relative risks across cities was found for general mortality for both sexes. Conclusions: This study provides a general overview of the relationship between deprivation and mortality for a sample of large Spanish cities combined. This joint study allows the exploration of and, if appropriate, the quantification of the variability in that relationship for the set of cities considered.
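A sketch of the kind of pooling involved: inverse-variance fixed-effect combination of city-level log relative risks, plus the DerSimonian-Laird estimate of between-city heterogeneity for a random-effects model. The numbers are fabricated placeholders, and this is the generic textbook procedure, not the MEDEA analysis itself:

```python
import numpy as np

# hypothetical city-level relative risks (deprivation vs. mortality) and their
# standard errors on the log scale -- placeholders, not MEDEA results
rr = np.array([1.20, 1.35, 1.10, 1.28, 1.15])
se = np.array([0.08, 0.10, 0.06, 0.12, 0.09])

y = np.log(rr)            # log relative risks
w = 1.0 / se**2           # inverse-variance weights

# fixed-effect pooled estimate
fixed = np.sum(w * y) / np.sum(w)

# DerSimonian-Laird between-city variance tau^2 and random-effects estimate
Q = np.sum(w * (y - fixed) ** 2)
tau2 = max(0.0, (Q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1.0 / (se**2 + tau2)
random_effects = np.sum(w_re * y) / np.sum(w_re)

print(f"fixed-effect pooled RR:   {np.exp(fixed):.3f}")
print(f"random-effects pooled RR: {np.exp(random_effects):.3f} (tau^2 = {tau2:.4f})")
```

A tau^2 close to zero corresponds to the "no differences between cities" cases reported above, while a clearly positive tau^2 corresponds to the causes of death for which between-city variability in the deprivation-mortality relationship was found.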
Abstract:
We propose and discuss a new centrality index for urban street patterns represented as networks in geographical space. This centrality measure, which we call ranking-betweenness centrality, combines the idea behind the random-walk betweenness centrality measure with the idea of ranking the nodes of a network produced by an adapted PageRank algorithm. We first use a PageRank algorithm in which some information about the network under analysis is transformed into numerical values; these values, summarizing the information, are associated with each node by means of a data matrix. After running the adapted PageRank algorithm, a ranking of the nodes is obtained according to their importance in the network. This classification is the starting point for applying an algorithm based on random-walk betweenness centrality. A detailed example of a real urban street network is discussed in order to show how the proposed ranking-betweenness centrality is evaluated, including comparisons with other classical centrality measures.
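A rough sketch of how the two ingredients could be combined with networkx: a personalized PageRank run with node-level data as the personalization vector, followed by current-flow (random-walk) betweenness, with the PageRank ranking used to weight the final index. The combination rule shown is an illustrative guess, not the paper's exact ranking-betweenness algorithm, and the graph and node data are toy placeholders:

```python
import networkx as nx

# toy street network: nodes are intersections, edges are street segments
G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2), (2, 3), (3, 0), (1, 3), (3, 4), (4, 5)])

# node-level data (e.g. population, activity) summarized as a single value per
# node -- placeholder numbers standing in for the data matrix of the paper
data = {0: 1.0, 1: 3.0, 2: 1.5, 3: 4.0, 4: 0.5, 5: 1.0}

# adapted PageRank: use the node data as the personalization vector
pagerank = nx.pagerank(G, alpha=0.85, personalization=data)

# random-walk (current-flow) betweenness centrality
rw_betweenness = nx.current_flow_betweenness_centrality(G)

# illustrative combination: weight betweenness by the PageRank-based ranking
ranking_betweenness = {v: pagerank[v] * rw_betweenness[v] for v in G}
for v, score in sorted(ranking_betweenness.items(), key=lambda kv: -kv[1]):
    print(v, round(score, 4))
```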