908 results for Random Graph


Relevance:

20.00%

Publisher:

Abstract:

A 2D computer simulation method for random packings is applied to sets of particles generated by a self-similar, uniparametric model of particle size distributions (PSDs) in granular media. The parameter p that controls the model is the proportion of particle mass in the left half of the normalized size interval [0,1]. First, the influence of p on total porosity is analyzed and interpreted. It is shown that this parameter, together with the fractal exponent of the associated power-law scaling, is an efficient packing parameter, although the fractal exponent does not act in the way predicted by earlier published work on an analogous problem in artificial granular materials. Total porosity reaches its minimum at p = 0.6. Limited information on the pore size distribution is obtained from the packing simulations and by means of morphological analysis methods. The results show that the range of pore sizes increases as p decreases, and that the shape of the volume pore size distribution also changes. Further research with larger particle numbers and higher image resolution is required to obtain finer results on the hierarchical structure of the pore space.
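The mass-based self-similar construction described above can be sketched as a multiplicative cascade. The code below is a minimal illustration, not the authors' model code: `sample_particle_size` is a hypothetical helper that draws one size in [0,1] by repeatedly descending into the left half of the current interval with probability p, mimicking the p : (1 − p) mass split at every scale.

```python
import random

def sample_particle_size(p, levels=10, rng=random):
    """Sample one particle size from a self-similar PSD: at every level
    the mass of an interval is split p : (1 - p) between its left and
    right halves, so mass-weighted sampling descends the cascade
    choosing 'left' with probability p."""
    lo, hi = 0.0, 1.0
    for _ in range(levels):
        mid = 0.5 * (lo + hi)
        if rng.random() < p:
            hi = mid          # left half, chosen with probability p
        else:
            lo = mid          # right half, probability 1 - p
    return 0.5 * (lo + hi)    # midpoint of the final subinterval

rng = random.Random(0)
sizes = [sample_particle_size(0.6, rng=rng) for _ in range(10000)]
frac_left = sum(s < 0.5 for s in sizes) / len(sizes)  # close to p
```

By construction, the fraction of sampled sizes falling in [0, 0.5) approaches p, which is exactly the model parameter the abstract describes.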

Relevance:

20.00%

Publisher:

Abstract:

With the rise of Cloud Computing, demand for data-processing applications has grown, and improving the efficiency of data centers has therefore become important. The goal of this work is to obtain tools for analyzing the viability and profitability of designing data centers specialized for data processing, with suitably adapted architecture, cooling systems, and so on. Some data-processing applications benefit from software architectures, while for others a hardware architecture may be more efficient. Because software systems such as XPregel already achieve very good results in graph processing, this project implements a hardware architecture in VHDL, realizing Google's PageRank algorithm in a scalable form. This algorithm was chosen because it may be more efficient in a hardware architecture, owing to specific characteristics described later. PageRank ranks web pages by their relevance on the web using graph theory: each web page is a vertex of a graph, and the links between pages are the edges of that graph. The project first reviews the state of the art. The implementation in XPregel, a graph-processing system, is assumed to be among the most efficient, so that implementation is studied. However, because XPregel processes general graph algorithms, it does not take certain characteristics of PageRank into account, and its implementation is therefore not optimal: in PageRank, storing every message sent by a single vertex wastes memory, since all the messages a vertex sends are identical to one another and equal to its PageRank.
The VHDL design takes this characteristic of the algorithm into account, avoiding storing identical messages more than once. PageRank was implemented in VHDL because current operating-system architectures do not scale adequately, and the aim is to evaluate whether another architecture yields better results. The design is built from scratch, using the automatically generated ROM from the Xilinx IP core (VHDL development software). Four types of modules are planned so that processing can proceed in parallel. The structure of XPregel is simplified in order to exploit the PageRank property mentioned above, of which XPregel does not take full advantage. The code is then written with a scalable structure, since the computation involves millions of web pages. Next, the code is synthesized and tested on an FPGA. The final step is an evaluation of the implementation and of possible improvements in power consumption.
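The PageRank property the design exploits — every message a vertex sends is identical and equal to its current score — can be illustrated with a minimal software sketch (Python here, not the project's VHDL; the graph and function names are illustrative):

```python
def pagerank(out_links, d=0.85, iters=50):
    """Iterative PageRank. out_links maps each vertex to the list of
    vertices it links to. Because every message a vertex sends equals
    its own current score, one value per vertex is stored instead of
    one message per outgoing edge -- the property the text exploits."""
    n = len(out_links)
    rank = {v: 1.0 / n for v in out_links}
    for _ in range(iters):
        new = {v: (1.0 - d) / n for v in out_links}
        for v, targets in out_links.items():
            if targets:
                share = rank[v] / len(targets)  # identical for all targets
                for t in targets:
                    new[t] += d * share
            else:  # dangling vertex: spread its rank uniformly
                for t in new:
                    new[t] += d * rank[v] / n
        rank = new
    return rank

graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
pr = pagerank(graph)
```

In this toy graph, "C" receives links from both "A" and "B" and so ends up with a higher score than "B"; the total rank mass stays normalized to 1 across iterations.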

Relevance:

20.00%

Publisher:

Abstract:

The ability to accurately observe the Earth's carbon cycles from space gives scientists an important tool to analyze climate change. Current space-borne Integrated-Path Differential Absorption (IPDA) lidar concepts have the potential to meet this need. They are mainly based on the pulsed time-of-flight principle, in which two high-energy pulses of different wavelengths interrogate the atmosphere for its transmission properties and are backscattered by the ground. In this paper, feasibility study results for a Pseudo-Random Single Photon Counting (PRSPC) IPDA lidar are reported. The proposed approach replaces the high-energy pulsed source (e.g. a solid-state laser) with a semiconductor laser in CW operation at a similar average power of a few watts, benefiting from better efficiency and reliability. The autocorrelation property of a Pseudo-Random Binary Sequence (PRBS) and temporal shifting of the codes can be utilized to transmit both wavelengths simultaneously, avoiding the beam misalignment problem experienced by pulsed techniques. The envelope signal-to-noise ratio has been analyzed, and various system parameters have been selected. By restricting the telescope's field of view, the dominant noise source of ambient light can be suppressed; combined with a low-noise single photon counting detector, a retrieval precision of 1.5 ppm over 50 km of along-track averaging could be attained. We also describe preliminary experimental results involving a negative-feedback Indium Gallium Arsenide (InGaAs) single photon avalanche photodiode and a low-power Distributed Feedback laser diode modulated with a PRBS-driven acousto-optic modulator. The results demonstrate that higher detector saturation count rates will be needed for future space-borne missions, but measurement linearity and precision should meet the stringent requirements set out by future Earth-observing missions.
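The PRBS autocorrelation property the approach relies on can be demonstrated with a short sketch (an illustration only, not the paper's processing chain): a maximal-length PRBS-7 sequence has a circular autocorrelation with a single sharp peak and flat sidelobes, which is what allows delayed, overlapping copies of the code to be separated by correlation.

```python
def prbs7():
    """Generate one period (127 chips) of a PRBS-7 maximal-length
    sequence using the LFSR polynomial x^7 + x^6 + 1."""
    state = 0x7F
    out = []
    for _ in range(127):
        bit = ((state >> 6) ^ (state >> 5)) & 1   # taps at stages 7 and 6
        state = ((state << 1) | bit) & 0x7F
        out.append(1 if state & 1 else -1)        # bipolar chips +/-1
    return out

def circ_corr(a, b):
    """Circular cross-correlation of two equal-length sequences."""
    n = len(a)
    return [sum(a[i] * b[(i + k) % n] for i in range(n)) for k in range(n)]

code = prbs7()
ac = circ_corr(code, code)
# m-sequence property: peak of 127 at zero lag, -1 at every other lag
```

The near-delta autocorrelation is why a time-shifted copy of the same code can carry the second wavelength: correlating against each shift isolates each channel.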

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we report on the progress of the BRITESPACE Consortium towards space-borne lidar measurements of atmospheric carbon dioxide concentration based on an all-semiconductor laser source at 1.57 µm. The complete design of the proposed RM-CW IPDA lidar is presented and described in detail, together with full descriptions of the laser module and the FSU. Two bent MOPAs, emitting at the sounding frequencies of the on- and off-line IPDA channels, are proposed as the transmitter optical sources with the required high brightness. Experimental results on the bent MOPAs show high spectral purity and promising prospects for meeting the high output power requirements. Finally, the RM-CW approach has been modelled, and an estimate of the expected SNR for the entire system is presented. Preliminary results indicate that a CO2 retrieval precision of 1.5 ppm could be achieved with an average output power of 2 W per channel.

Relevance:

20.00%

Publisher:

Abstract:

Directed hypergraphs have been used in problems related to propositional logic, relational databases, computational linguistics, and machine learning. They have also been used as an alternative to directed (bipartite) graphs to facilitate the study of interactions between components of complex systems that cannot easily be modelled using binary relations alone; in this context, the representation is known as a hyper-network. A directed hypergraph is a generalization of a directed graph that is especially well suited to representing many-to-many relationships. Whereas an edge in a directed graph defines a relation between two of its nodes, a hyperedge in a directed hypergraph defines a relation between two sets of its nodes. Strong connectivity is an equivalence relation that partitions the node set of a directed hypergraph, and each equivalence class is known as a strongly-connected component. Studying the strongly-connected components of a directed hypergraph can lead to a better understanding of the structure of such hypergraphs when their size is considerable. For directed graphs, there are very efficient algorithms for computing the strongly-connected components of large graphs. Thanks to these algorithms, it has been discovered that the structure of the WWW has a "bow-tie" shape, in which more than 70% of the nodes are distributed across three large sets, one of which is a strongly-connected component. This kind of structure has also been observed in complex networks in other areas, such as biology. Studies of a similar nature have not been possible for directed hypergraphs because no algorithms existed for computing the strongly-connected components of such hypergraphs.
In this doctoral thesis, we have investigated how to compute the strongly-connected components of a directed hypergraph. Specifically, we have developed two algorithms for this problem and established their correctness and computational complexity. Both algorithms have been evaluated empirically to compare their running times. For the evaluation, we produced a selection of randomly generated directed hypergraphs inspired by well-known random graph models such as Erdős–Rényi, Newman–Watts–Strogatz, and Barabási–Albert. Several optimizations of both algorithms have been implemented and analyzed in the thesis. In particular, collapsing the strongly-connected components of the directed graph obtained by removing certain complex hyperedges from the original directed hypergraph notably improves the running times of the algorithms on several of the hypergraphs used in the evaluation. Besides the application examples mentioned above, directed hypergraphs have also been employed in the area of knowledge representation. In particular, this kind of hypergraph has been used to compute modules of ontologies. An ontology can be defined as a set of axioms that formally specify a set of symbols and their relationships, while a module can be understood as a subset of the ontology's axioms that captures all the knowledge the ontology holds about a specific set of symbols and their relationships. In the thesis we focus only on modules computed using the syntactic locality technique. Because ontologies can be very large, computing modules can facilitate their reuse and maintenance.
However, analyzing all possible modules of an ontology is, in general, very costly, because the number of modules grows exponentially with the number of symbols and axioms of the ontology. Fortunately, the axioms of an ontology can be divided into partitions known as atoms. Each atom represents a maximal set of axioms that always appear together in a module. The atomic decomposition of an ontology is defined as a directed graph in which each node corresponds to an atom and each edge defines a dependency between a pair of atoms. In this thesis we introduce the concept of an "axiom dependency hypergraph", which generalizes the atomic decomposition of an ontology. A module of an ontology corresponds to a connected component in this kind of hypergraph, and an atom of an ontology to a strongly-connected component. We have adapted the implementation of our algorithms so that they also work with axiom dependency hypergraphs, making it possible to compute the atoms of an ontology. To demonstrate the viability of this idea, we have incorporated our algorithms into an application we developed for module extraction and atomic decomposition of ontologies, called HyS, and we have studied its running times using a selection of well-known ontologies from the biomedical domain, most of them available on the NCBO BioPortal. The evaluation results show that HyS's running times are much better than those of the fastest known applications. ABSTRACT Directed hypergraphs are an intuitive modelling formalism that has been used in problems related to propositional logic, relational databases, computational linguistics and machine learning.
Directed hypergraphs are also presented as an alternative to directed (bipartite) graphs to facilitate the study of the interactions between components of complex systems that cannot naturally be modelled as binary relations. In this context, they are known as hyper-networks. A directed hypergraph is a generalization of a directed graph suitable for representing many-to-many relationships. While an edge in a directed graph defines a relation between two nodes of the graph, a hyperedge in a directed hypergraph defines a relation between two sets of nodes. Strong connectivity is an equivalence relation that induces a partition of the set of nodes of a directed hypergraph into strongly-connected components. These components can be collapsed into single nodes. As a result, the size of the original hypergraph can be reduced significantly if the strongly-connected components have many nodes. This approach might contribute to a better understanding of how the nodes of a hypergraph are connected, in particular when the hypergraphs are large. In the case of directed graphs, there are efficient algorithms that can be used to compute the strongly-connected components of large graphs. For instance, it has been shown that the macroscopic structure of the World Wide Web can be represented as a "bow-tie" diagram in which more than 70% of the nodes are distributed into three large sets, one of which is a large strongly-connected component. This particular structure has also been observed in complex networks in other fields such as biology. Similar studies cannot be conducted on a directed hypergraph because no algorithm exists for computing the strongly-connected components of the hypergraph. In this thesis, we investigate ways to compute the strongly-connected components of directed hypergraphs. We present two new algorithms and we show their correctness and computational complexity. One of these algorithms is inspired by Tarjan's algorithm for directed graphs.
The second algorithm follows a simpler approach, based on the fact that two strongly-connected nodes of a graph reach exactly the same set of nodes; in other words, their connected components coincide. Both algorithms are empirically evaluated to compare their performance. To this end, we have produced a selection of random directed hypergraphs inspired by existing and well-known random graph models such as Erdős–Rényi and Newman–Watts–Strogatz. Besides the application examples that we mentioned earlier, directed hypergraphs have also been employed in the field of knowledge representation. In particular, they have been used to compute the modules of an ontology. An ontology is defined as a collection of axioms that provides a formal specification of a set of terms and their relationships, and a module is a subset of an ontology that completely captures the meaning of certain terms as defined in the ontology. In particular, we focus on the modules computed using the notion of syntactic locality. As ontologies can be very large, the computation of modules facilitates the reuse and maintenance of these ontologies. Analysing all modules of an ontology, however, is in general not feasible, as the number of modules grows exponentially in the number of terms and axioms of the ontology. Nevertheless, the modules can be represented succinctly using the Atomic Decomposition of an ontology. Using this representation, an ontology can be partitioned into atoms, which are maximal sets of axioms that co-occur in every module. The Atomic Decomposition is then defined as a directed graph such that each node corresponds to an atom and each edge represents a dependency relation between two atoms. In this thesis, we introduce the notion of an axiom dependency hypergraph, which is a generalization of the atomic decomposition of an ontology.
A module in the ontology corresponds to a connected component in the hypergraph, and the atoms of the ontology correspond to the strongly-connected components. We apply our algorithms for directed hypergraphs to axiom dependency hypergraphs and in this manner compute the atoms of an ontology. To demonstrate the viability of this approach, we have implemented the algorithms in the application HyS, which computes the modules of ontologies and calculates their atomic decomposition. In the thesis, we provide an experimental evaluation of HyS with a selection of large and prominent biomedical ontologies, most of which are available in the NCBO BioPortal. HyS outperforms state-of-the-art implementations in the tasks of extracting modules and computing the atomic decomposition of these ontologies.
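For the directed-graph case that the thesis generalizes, Tarjan's strongly-connected-components procedure can be sketched as follows. This is a standard iterative formulation in Python, shown only as background; it is not the thesis's hypergraph algorithm, and the example graph is illustrative.

```python
def tarjan_scc(graph):
    """Tarjan's strongly-connected components for a directed graph
    given as {node: [successors]}. Returns a list of SCCs (as sets).
    Iterative formulation to avoid Python's recursion limit."""
    index, low, on_stack = {}, {}, set()
    stack, sccs, counter = [], [], [0]

    def strongconnect(v):
        index[v] = low[v] = counter[0]; counter[0] += 1
        stack.append(v); on_stack.add(v)
        work = [(v, iter(graph[v]))]
        while work:
            node, it = work[-1]
            advanced = False
            for w in it:
                if w not in index:            # tree edge: descend
                    index[w] = low[w] = counter[0]; counter[0] += 1
                    stack.append(w); on_stack.add(w)
                    work.append((w, iter(graph[w])))
                    advanced = True
                    break
                elif w in on_stack:           # back edge inside current SCC
                    low[node] = min(low[node], index[w])
            if advanced:
                continue
            work.pop()                        # node fully explored
            if work:
                parent = work[-1][0]
                low[parent] = min(low[parent], low[node])
            if low[node] == index[node]:      # node is the root of an SCC
                scc = set()
                while True:
                    w = stack.pop(); on_stack.discard(w)
                    scc.add(w)
                    if w == node:
                        break
                sccs.append(scc)

    for v in graph:
        if v not in index:
            strongconnect(v)
    return sccs

g = {1: [2], 2: [3], 3: [1, 4], 4: [5], 5: [4]}
comps = tarjan_scc(g)   # two components: {1, 2, 3} and {4, 5}
```

Collapsing each returned component into a single node yields the condensation graph, the same size-reduction idea the abstract applies to large hypergraphs.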

Relevance:

20.00%

Publisher:

Abstract:

A novel pedestrian motion prediction technique is presented in this paper. Its main achievement is that it requires no previous observations, no knowledge of pedestrian trajectories, and no set of possible destinations, which makes it useful for autonomous surveillance applications. Prediction requires only the initial position of the pedestrian and a 2D representation of the scenario as an occupancy grid. First, the Fast Marching Method (FMM) is used to calculate the pedestrian's arrival time at each position in the map; then, the likelihood that the pedestrian reaches those positions is estimated. The technique has been tested with synthetic and real scenarios. In all cases, accurate probability maps, as well as their representative graphs, were obtained at low computational cost.
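As a rough illustration of the arrival-time step, the sketch below runs a Dijkstra-style wavefront on a 4-connected occupancy grid. This is only a coarse stand-in for the Fast Marching Method the paper actually uses (FMM solves the continuous eikonal equation; this discrete version restricts motion to axis-aligned moves), and the grid is invented for the example.

```python
import heapq

def arrival_times(grid, start):
    """Dijkstra-style wavefront expansion on an occupancy grid, a
    coarse approximation of the Fast Marching Method. grid[r][c] == 1
    marks an obstacle; returns the earliest arrival time at each free
    cell assuming unit speed and 4-connected moves."""
    rows, cols = len(grid), len(grid[0])
    INF = float("inf")
    t = [[INF] * cols for _ in range(rows)]
    t[start[0]][start[1]] = 0.0
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if d > t[r][c]:                  # stale queue entry
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1.0
                if nd < t[nr][nc]:
                    t[nr][nc] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return t

occ = [[0, 0, 0],
       [1, 1, 0],   # a wall forces the wavefront around the right side
       [0, 0, 0]]
times = arrival_times(occ, (0, 0))
```

A reachability likelihood can then be derived from these arrival times, which is the second step the abstract describes.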

Relevance:

20.00%

Publisher:

Abstract:

Recent experimental data on the conductivity σ+(T), T → 0, on the metallic side of the metal–insulator transition in ideally random (neutron transmutation-doped) 70Ge:Ga have shown that σ+(0) ∝ (N − Nc)μ with μ = ½, confirming earlier ultra-low-temperature results for Si:P. This value is inconsistent with theoretical predictions based on diffusive classical scaling models, but it can be understood by a quantum-directed percolative filamentary amplitude model in which electronic basis states exist which have a well-defined momentum parallel but not normal to the applied electric field. The model, which is based on a new kind of broken symmetry, also explains the anomalous sign reversal of the derivative of the temperature dependence in the critical regime.

Relevance:

20.00%

Publisher:

Abstract:

The threshold behavior of the transport properties of a random metal in the critical region near a metal–insulator transition is strongly affected by the measuring electromagnetic fields. In spite of the randomness, the electrical conductivity exhibits striking phase-coherent effects due to broken symmetry, which greatly sharpen the transition compared with the predictions of effective medium theories, as previously explained for electrical conductivities. Here broken symmetry explains the sign reversal of the T → 0 magnetoconductance of the metal–insulator transition in Si(B,P), also previously not understood by effective medium theories. Finally, the symmetry-breaking features of quantum percolation theory explain the unexpectedly very small electrical conductivity temperature exponent α = 0.22(2) recently observed in Ni(S,Se)2 alloys at the antiferromagnetic metal–insulator transition below T = 0.8 K.

Relevance:

20.00%

Publisher:

Abstract:

Divalent cations are thought to be essential for motile function of leukocytes in general, and for the function of critical adhesion molecules in particular. In the current study, under direct microscopic observation with concomitant time-lapse video recording, we examined the effects of 10 mM EDTA on locomotion of human blood polymorphonuclear leukocytes (PMN). In very thin slide preparations, EDTA did not impair either random locomotion or chemotaxis; motile behavior appeared to benefit from the close approximation of slide and coverslip ("chimneying"). In preparations twice as thick, PMN in EDTA first exhibited active deformability with little or no displacement, then rounded up and became motionless. However, on creation of a chemotactic gradient, the same cells were able to orient and make their way to the target, often, however, momentarily losing their purchase on the substrate. In either of these preparations without EDTA, specific antibodies to β2 integrins did not prevent random locomotion or chemotaxis, even when we added antibodies to β1 and αvβ3 integrins and to integrin-associated protein, and none of these antibodies added anything to the effects of EDTA. In the more turbulent environment of preparations containing still more medium, the effects of anti-β2 integrins became evident: PMN could still locomote but adhered to the substrate largely by their uropods and by uropod-associated filaments. We relate these findings to the reported integrin independence of PMN in certain experimental and disease states. Moreover, we suggest that PMN locomotion in close quarters is not only integrin-independent, but independent of external divalent cations as well.

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a natural coordinate system for phylogenetic trees using a correspondence with the set of perfect matchings in the complete graph. This correspondence produces a distance between phylogenetic trees, and a way of enumerating all trees in a minimal step order. It is useful in randomized algorithms because it enables moves on the space of trees that make random optimization strategies “mix” quickly. It also promises a generalization to intermediary trees when data are not decisive as to their choice of tree, and a new way of constructing Bayesian priors on tree space.
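The matching–tree correspondence can be made concrete by counting: the complete graph on 2k vertices has (2k − 1)!! perfect matchings, the same number as rooted binary trees with k + 1 labelled leaves. The enumeration below is an illustrative sketch, not the paper's construction of the coordinate system itself.

```python
from math import prod

def perfect_matchings(points):
    """Enumerate all perfect matchings of the complete graph on the
    given even-sized list of points: pair the first point with each
    remaining point, then recurse on what is left."""
    if not points:
        yield []
        return
    a, rest = points[0], points[1:]
    for i, b in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for m in perfect_matchings(remaining):
            yield [(a, b)] + m

def double_factorial_odd(k):
    """(2k - 1)!! = 1 * 3 * 5 * ... * (2k - 1)."""
    return prod(range(1, 2 * k, 2))

ms = list(perfect_matchings(list(range(6))))   # K_6: 2k points with k = 3
count = len(ms)                                # 15 = (2*3 - 1)!!
```

The distance and enumeration properties described in the abstract come from moving between matchings by swapping pairs, which this explicit listing makes easy to experiment with.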

Relevance:

20.00%

Publisher:

Abstract:

We describe a method for identifying genes encoding proteins with stereospecific intracellular localizations in the fission yeast Schizosaccharomyces pombe. Yeast are transformed with a gene library in which S. pombe genomic sequences are fused to the gene encoding the Aequorea victoria green fluorescent protein (GFP), and intracellular localizations are subsequently identified by rapid fluorescence screening in vivo. In a model application of these methods to the fission yeast nucleus, we have identified several novel genes whose products are found in specific nuclear regions, including chromatin, the nucleolus, and the mitotic spindle, and sequence similarities between some of these genes and previously identified genes encoding nuclear proteins have validated the approach. These methods will be useful in identifying additional components of the S. pombe nucleus, and further extensions of this approach should also be applicable to a more comprehensive identification of the elements of intracellular architecture in fission yeast.

Relevance:

20.00%

Publisher:

Abstract:

Random walks have been used to describe a wide variety of systems ranging from cell colonies to polymers. Sixty-five years ago, Kuhn [Kuhn, W. (1934) Kolloid-Z. 68, 2–11] made the prediction, backed later by computer simulations, that the overall shape of a random-walk polymer is aspherical, yet no experimental work has directly tested Kuhn's general idea and subsequent computer simulations. By using fluorescence microscopy, we monitored the conformation of individual, long, random-walk polymers (fluorescently labeled DNA molecules) at equilibrium. We found that a polymer most frequently adopts highly extended, nonfractal structures with a strongly anisotropic shape. The ensemble-average ratio of the lengths of the long and short axes of the best-fit ellipse of the polymer was much larger than unity.
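Kuhn's asphericity prediction is easy to reproduce numerically. The Monte Carlo sketch below (illustrative only, not the paper's experimental analysis) measures the best-fit-ellipse axis ratio of 2D lattice random walks via the gyration tensor and finds an ensemble-average ratio well above unity.

```python
import random

def walk_aspect_ratio(steps, rng):
    """Generate one 2D lattice random walk and return the ratio of the
    principal axes of its gyration tensor, i.e. the axis ratio of the
    best-fit ellipse of the walk's shape."""
    x = y = 0
    xs, ys = [0], [0]
    moves = ((1, 0), (-1, 0), (0, 1), (0, -1))
    for _ in range(steps):
        dx, dy = rng.choice(moves)
        x += dx; y += dy
        xs.append(x); ys.append(y)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((a - mx) ** 2 for a in xs) / n
    syy = sum((b - my) ** 2 for b in ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
    # eigenvalues of the 2x2 gyration tensor [[sxx, sxy], [sxy, syy]]
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    disc = (tr * tr / 4 - det) ** 0.5
    l1, l2 = tr / 2 + disc, tr / 2 - disc
    return (l1 / l2) ** 0.5 if l2 > 0 else float("inf")

rng = random.Random(1)
ratios = [walk_aspect_ratio(500, rng) for _ in range(200)]
mean_ratio = sum(r for r in ratios if r != float("inf")) / len(ratios)
# ensemble-average axis ratio is much larger than 1: walks are aspherical
```

Despite the isotropy of the step distribution, individual conformations are strongly elongated, which is exactly the effect the fluorescence measurements on single DNA molecules confirmed.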

Relevance:

20.00%

Publisher:

Abstract:

Objectives: To explore trial participants’ understandings of randomisation.

Relevance:

20.00%

Publisher:

Abstract:

Sequences that control translation of mRNA may play critical roles in regulating protein levels. One such element is the internal ribosome entry site (IRES). We previously showed that a 9-nt segment in the 5′ leader sequence of the mRNA encoding Gtx homeodomain protein could function as an IRES. To identify other short sequences with similar properties, we designed a selection procedure that uses a retroviral vector to express dicistronic mRNAs encoding enhanced green and cyan fluorescent proteins as the first and second cistrons, respectively. Expression of the second cistron was dependent upon the intercistronic sequences and was indicative of IRES activity. B104 cells were infected with two retroviral libraries that contained random sequences of 9 or 18 nt in the intercistronic region. Cells expressing both cistrons were sorted, and sequences recovered from selected cells were reassayed for IRES activity in a dual luciferase dicistronic mRNA. Two novel IRESes were identified by this procedure, and both contained segments with complementarity to 18S rRNA. When multiple copies of either segment were linked together, IRES activities were dramatically enhanced. Moreover, these synthetic IRESes were differentially active in various cell types. These properties are similar to those of the previously identified 9-nt IRES module from Gtx mRNA. These results provide further evidence that short nucleotide sequences can function as IRESes and support the idea that some cellular IRESes may be composed of shorter functional modules. The ability to identify IRES modules with specific expression properties may be useful in the design of vectors for biotechnology and gene therapy.