974 results for Convexity in Graphs


Relevance:

30.00%

Publisher:

Abstract:

Directed hypergraphs have been employed in problems related to propositional logic, relational databases, computational linguistics and machine learning. They have also been used as an alternative to directed (bipartite) graphs to facilitate the study of interactions between components of complex systems that cannot easily be modelled using binary relations alone; in this context, this kind of representation is known as a hyper-network. A directed hypergraph is a generalization of a directed graph that is particularly well suited to representing many-to-many relationships. While an edge in a directed graph defines a relation between two of its nodes, a hyperedge in a directed hypergraph defines a relation between two sets of its nodes. Strong connectivity is an equivalence relation that partitions the node set of a directed hypergraph, and each equivalence class is known as a strongly-connected component. Studying the strongly-connected components of a directed hypergraph can lead to a better understanding of the structure of such hypergraphs when they are large. For directed graphs, there are very efficient algorithms for computing the strongly-connected components of large graphs. Thanks to these algorithms, it has been shown that the structure of the WWW has the shape of a "bow-tie", in which more than 70% of the nodes are distributed across three large sets, one of which is a strongly-connected component. This kind of structure has also been observed in complex networks in other areas, such as biology. Studies of a similar nature could not be carried out on directed hypergraphs because no algorithms existed for computing their strongly-connected components.
In this doctoral thesis, we have investigated how to compute the strongly-connected components of a directed hypergraph. Specifically, we have developed two algorithms for this problem, proved their correctness and determined their computational complexity. Both algorithms have been evaluated empirically to compare their running times. For the evaluation, we produced a selection of randomly generated directed hypergraphs inspired by well-known random graph models such as Erdős–Rényi, Newman–Watts–Strogatz and Barabási–Albert. Several optimizations of both algorithms have been implemented and analysed in the thesis. In particular, collapsing the strongly-connected components of the directed graph obtained by removing certain complex hyperedges from the original directed hypergraph notably improves the running times of the algorithms on several of the hypergraphs used in the evaluation. Besides the application examples mentioned above, directed hypergraphs have also been employed in the area of knowledge representation; in particular, they have been used to compute modules of ontologies. An ontology can be defined as a set of axioms that formally specifies a set of symbols and their relationships, while a module can be understood as a subset of the ontology's axioms that captures all the knowledge the ontology holds about a specific set of symbols and their relationships. In the thesis we focus only on modules computed using the syntactic locality technique. Since ontologies can be very large, computing modules can facilitate their reuse and maintenance.
However, analysing all possible modules of an ontology is, in general, very costly, because the number of modules grows exponentially with the number of symbols and axioms of the ontology. Fortunately, the axioms of an ontology can be partitioned into so-called atoms. Each atom represents a maximal set of axioms that always appear together in a module. The atomic decomposition of an ontology is defined as a directed graph in which each node corresponds to an atom and each edge defines a dependency between a pair of atoms. In this thesis we introduce the notion of an axiom dependency hypergraph, which generalizes the atomic decomposition of an ontology: a module of the ontology corresponds to a connected component in this kind of hypergraph, and an atom to a strongly-connected component. We have adapted the implementation of our algorithms so that they also work on axiom dependency hypergraphs, thereby computing the atoms of an ontology. To demonstrate the viability of this idea, we have incorporated our algorithms into an application, called HyS, that we developed for module extraction and the atomic decomposition of ontologies, and we have studied its running times on a selection of well-known biomedical ontologies, most of them available in the NCBO BioPortal. The evaluation results show that HyS runs considerably faster than the fastest known implementations.
ABSTRACT
Directed hypergraphs are an intuitive modelling formalism that has been used in problems related to propositional logic, relational databases, computational linguistics and machine learning.
Directed hypergraphs are also presented as an alternative to directed (bipartite) graphs to facilitate the study of the interactions between components of complex systems that cannot naturally be modelled as binary relations. In this context, they are known as hyper-networks. A directed hypergraph is a generalization of a directed graph suitable for representing many-to-many relationships. While an edge in a directed graph defines a relation between two nodes of the graph, a hyperedge in a directed hypergraph defines a relation between two sets of nodes. Strong connectivity is an equivalence relation that induces a partition of the set of nodes of a directed hypergraph into strongly-connected components. These components can be collapsed into single nodes. As a result, the size of the original hypergraph can be significantly reduced if the strongly-connected components have many nodes. This approach can contribute to a better understanding of how the nodes of a hypergraph are connected, in particular when the hypergraphs are large. In the case of directed graphs, there are efficient algorithms that can be used to compute the strongly-connected components of large graphs. For instance, it has been shown that the macroscopic structure of the World Wide Web can be represented as a "bow-tie" diagram where more than 70% of the nodes are distributed into three large sets, one of which is a large strongly-connected component. This particular structure has also been observed in complex networks in other fields such as, e.g., biology. Similar studies cannot be conducted on directed hypergraphs because no algorithm exists for computing the strongly-connected components of such hypergraphs. In this thesis, we investigate ways to compute the strongly-connected components of directed hypergraphs. We present two new algorithms and we show their correctness and computational complexity. One of these algorithms is inspired by Tarjan's algorithm for directed graphs.
The second algorithm follows a simpler approach to compute the strongly-connected components. It is based on the fact that two strongly-connected nodes of a graph reach exactly the same set of nodes; in other words, the connected component of each node is the same. Both algorithms are empirically evaluated to compare their performance. To this end, we have produced a selection of random directed hypergraphs inspired by existing and well-known random graph models like Erdős–Rényi and Newman–Watts–Strogatz. Besides the application examples that we mentioned earlier, directed hypergraphs have also been employed in the field of knowledge representation. In particular, they have been used to compute the modules of an ontology. An ontology is defined as a collection of axioms that provides a formal specification of a set of terms and their relationships, and a module is a subset of an ontology that completely captures the meaning of certain terms as defined in the ontology. In particular, we focus on the modules computed using the notion of syntactic locality. As ontologies can be very large, the computation of modules facilitates the reuse and maintenance of these ontologies. Analysing all modules of an ontology, however, is in general not feasible, as the number of modules grows exponentially in the number of terms and axioms of the ontology. Nevertheless, the modules can be succinctly represented using the atomic decomposition of an ontology. Using this representation, an ontology can be partitioned into atoms, which are maximal sets of axioms that co-occur in every module. The atomic decomposition is then defined as a directed graph such that each node corresponds to an atom and each edge represents a dependency relation between two atoms. In this thesis, we introduce the notion of an axiom dependency hypergraph, which is a generalization of the atomic decomposition of an ontology.
A module in the ontology corresponds to a connected component in the hypergraph, and the atoms of the ontology to its strongly-connected components. We apply our algorithms for directed hypergraphs to axiom dependency hypergraphs and, in this manner, compute the atoms of an ontology. To demonstrate the viability of this approach, we have implemented the algorithms in the application HyS, which computes the modules of ontologies and calculates their atomic decomposition. In the thesis, we provide an experimental evaluation of HyS with a selection of large and prominent biomedical ontologies, most of which are available in the NCBO BioPortal. HyS outperforms state-of-the-art implementations in the tasks of extracting modules and computing the atomic decomposition of these ontologies.
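The reachability-based idea behind the second algorithm can be sketched in a few lines. The sketch below assumes B-hypergraph semantics (a hyperedge fires only once its whole tail set has been reached) and uses a toy hypergraph; it illustrates the principle only, not the thesis's optimized implementation.

```python
def b_reachable(hyperedges, source):
    """Forward B-reachability from a single node: a hyperedge
    (tail, head) 'fires' only once every node of its tail set
    has been reached."""
    reached = {source}
    changed = True
    while changed:
        changed = False
        for tail, head in hyperedges:
            if tail <= reached and not head <= reached:
                reached |= head
                changed = True
    return frozenset(reached)

def strongly_connected_components(nodes, hyperedges):
    """Nodes u and v are strongly connected iff their B-reachable
    sets coincide: each reachable set contains its own source, so
    equality forces mutual reachability."""
    comps = {}
    for v in nodes:
        comps.setdefault(b_reachable(hyperedges, v), set()).add(v)
    return list(comps.values())

# Toy directed hypergraph: hyperedges are (tail set, head set) pairs.
H = [({1}, {2}), ({2}, {1}), ({1, 2}, {3}), ({3}, {4}), ({4}, {3})]
print(strongly_connected_components({1, 2, 3, 4}, H))
```

Grouping by the whole reachable set avoids an explicit pairwise mutual-reachability test; the price is one full reachability computation per node, which is exactly the cost the thesis's optimizations (e.g. collapsing graph-SCCs first) aim to reduce.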

Relevance:

30.00%

Publisher:

Abstract:

The near-wall region of wall-bounded turbulent flows is already well understood, owing to its low local Reynolds number and narrow scale separation. The region far from the wall (the outer layer) is also less problematic, since its statistics scale well with outer units. The intermediate region (the logarithmic layer), however, has been receiving increasing attention because of its self-similar properties. Moreover, according to Flores et al. (2007) and Flores & Jiménez (2010), the logarithmic layer is roughly independent of the other layers, which implies that it could be inspected in isolation from them, significantly reducing the computational cost of simulating wall-bounded turbulence. Attempts in this direction were later made by Mizuno & Jiménez (2013), who simulated the logarithmic layer without the near-wall region and obtained statistics in reasonably good agreement with those of full simulations. Furthermore, the logarithmic layer might be mimicked by other, simpler shear-driven turbulence. For example, Pumir (1996) found that statistically-stationary homogeneous shear turbulence (SS-HST) also bursts, in a manner very similar to the self-sustaining process of wall-bounded turbulence. Based on these considerations, this thesis tries to reveal to what extent the logarithmic layer of channels is similar to the simplest shear-driven turbulence, SS-HST, by comparing both the kinematics and the dynamics of the coherent structures in the two flows. The channel results are those of Lozano-Durán et al. (2012) and Lozano-Durán & Jiménez (2014b). The roadmap of this task is divided into three stages. First, SS-HST is investigated by means of a new direct numerical simulation code, spectral in the two horizontal directions and using compact finite differences in the direction of the shear.
No remeshing is used to impose the shear-periodic boundary condition. The influence of the geometry of the computational box is explored. Since HST has no characteristic outer length scale and tends to fill the computational domain, long-term simulations of HST are 'minimal' in the sense that they contain only a few large-scale structures. It is found that the main limit is the spanwise box width, Lz, which sets the length and velocity scales of the turbulence, and that the other two box dimensions must be sufficiently large (Lx > 2Lz, Ly > Lz) to prevent the other directions from being constrained as well. It is also found that very long boxes, Lx > 2Ly, couple with the passing period of the shear-periodic boundary condition and develop strong unphysical linearized bursts. Within these limits, the flow shows interesting similarities and differences with other shear flows, and in particular with the logarithmic layer of wall-bounded turbulence. These are explored in some detail. They include a self-sustaining process of large-scale streaks and quasi-periodic bursting. The bursting time scale is approximately universal, ~20S⁻¹ (S is the mean shear rate), and the availability of two different bursting systems allows the growth of the bursts to be related, with some confidence, to the shearing of initially isotropic turbulence. It is concluded that SS-HST, conducted within the appropriate computational parameters, is a very promising system for studying shear turbulence in general.
Second, the same coherent structures as in the channels studied by Lozano-Durán et al. (2012), namely three-dimensional vortex clusters (strong dissipation) and Qs (strong tangential Reynolds stress, -uv), are studied by direct numerical simulation of SS-HST with acceptable box aspect ratios and Reynolds numbers up to Reλ ≈ 250 (based on the Taylor microscale). The influence of the intermittency on time-independent thresholding is discussed. These structures have streamwise elongations similar to those of the detached families in channels, until they become comparable in size to the box. Their fractal dimensions and their inner and outer lengths as functions of volume agree well with their channel counterparts. The study of their spatial organization finds that Qs of the same type are aligned roughly along the direction of the velocity vector of the quadrant to which they belong, while Qs of different types are constrained by the requirement that there be no velocity clash, which causes Q2s (ejections, u < 0, v > 0) and Q4s (sweeps, u > 0, v < 0) to pair in the spanwise direction. This is verified by inspecting velocity structures of other quadrants, such as u-w and v-w, in SS-HST, and the detached families in the channel. The streamwise alignment of wall-attached Qs of the same type in channels is due to the modulation of the wall. The mean flow field conditioned to Q2-Q4 pairs shows that vortex clusters lie between the two, but prefer the two shear layers lodged at the top and bottom of Q2s and Q4s respectively, so that the spanwise vorticity within the clusters does not cancel.
The wall amplifies the difference between the sizes of the low- and high-speed streaks associated with attached Q2-Q4 pairs as the pairs approach the wall, which is verified by the correlation of the streamwise velocity conditioned to attached Q2s and Q4s of different heights. Vortex clusters in SS-HST associated with Q2s or Q4s are also flanked in the spanwise direction by a pair of counter-rotating streamwise vortices, as in the channel. The long conical 'wake' originating from tall wall-attached vortex clusters, found by del Álamo et al. (2006) and Flores et al. (2007) and absent in SS-HST, only holds for tall attached clusters associated with Q2s, not for those associated with Q4s, whose averaged flow field is actually very similar to that in SS-HST. Third, the temporal evolutions of Qs and vortex clusters are studied using the method introduced by Lozano-Durán & Jiménez (2014b). The structures are sorted into branches, which are further organized into graphs. The spatial and temporal resolutions are chosen so as to capture the most probable pointwise Kolmogorov length and time at the most extreme moments. Owing to the minimal-box effect, there is only one main graph, consisting of almost all the branches, whose instantaneous volume and number of structures follow the intermittent kinetic energy and enstrophy. The lifetime of branches, which is most meaningful for primary branches, loses its meaning in SS-HST, because the contributions of the primary branches to the total Reynolds stress or enstrophy are almost negligible. This is also true in the outer layer of channels. Instead, the lifetime of graphs in channels is compared with the bursting time in SS-HST.
Vortex clusters are associated with almost the same quadrant, in terms of their mean velocities, throughout their lifetime, especially those related to ejections and sweeps. As in channels, ejections in SS-HST move upwards with an average vertical velocity uτ (the friction velocity), while the opposite is true for sweeps. Vortex clusters, on the other hand, are almost still in the vertical direction. In the streamwise direction, structures are advected by the local mean velocity and are therefore deformed by the mean velocity difference. Sweeps and ejections move faster and slower than the mean velocity respectively, both by 1.5uτ. Vortex clusters move at the mean velocity. It is verified that the incoherence of structures near the wall is due to the wall itself rather than to their small size. The results strongly suggest that coherent structures in channels are not particularly associated with the wall, or even with a given shear profile.
ABSTRACT
Since wall-bounded turbulence was first recognized more than a century ago, its near-wall region (buffer layer) has been studied extensively and has become relatively well understood, owing to the low local Reynolds number and narrow scale separation. The region just above the buffer layer, i.e. the logarithmic layer, is receiving increasingly more attention nowadays due to its self-similar properties. Flores et al. (2007b) and Flores & Jiménez (2010) show that the statistics of the logarithmic layer are largely independent of the other layers, implying that it might be possible to study it separately, which would significantly reduce the computational cost of simulating the logarithmic layer. Attempts in this direction were later made by Mizuno & Jiménez (2013), who simulated the logarithmic layer without the buffer layer and obtained statistics that agree reasonably well with those of full simulations.
Besides, the logarithmic layer might be mimicked by other, simpler shear-driven turbulence. For example, Pumir (1996) found that statistically-stationary homogeneous shear turbulence (SS-HST) also bursts, in a manner strikingly similar to the self-sustaining process in wall-bounded turbulence. Based on these considerations, this thesis tries to reveal to what extent the logarithmic layer of channels is similar to the simplest shear-driven turbulence, SS-HST, by comparing both the kinematics and the dynamics of coherent structures in the two flows. Results about the channel are shown by Lozano-Durán et al. (2012) and Lozano-Durán & Jiménez (2014b). The roadmap of this task is divided into three stages. First, SS-HST is investigated by means of a new direct numerical simulation code, spectral in the two horizontal directions and with compact finite differences in the direction of the shear. No remeshing is used to impose the shear-periodic boundary condition. The influence of the geometry of the computational box is explored. Since HST has no characteristic outer length scale and tends to fill the computational domain, long-term simulations of HST are 'minimal' in the sense of containing on average only a few large-scale structures. It is found that the main limit is the spanwise box width, Lz, which sets the length and velocity scales of the turbulence, and that the two other box dimensions should be sufficiently large (Lx > 2Lz, Ly > Lz) to prevent the other directions from being constrained as well. It is also found that very long boxes, Lx > 2Ly, couple with the passing period of the shear-periodic boundary condition, and develop strong unphysical linearized bursts. Within those limits, the flow shows interesting similarities and differences with other shear flows, and in particular with the logarithmic layer of wall-bounded turbulence. They are explored in some detail. They include a self-sustaining process for large-scale streaks and quasi-periodic bursting.
The bursting time scale is approximately universal, ~20S⁻¹ (S is the mean shear rate), and the availability of two different bursting systems allows the growth of the bursts to be related with some confidence to the shearing of initially isotropic turbulence. It is concluded that SS-HST, conducted within the proper computational parameters, is a very promising system to study shear turbulence in general. Second, the same coherent structures as in the channels studied by Lozano-Durán et al. (2012), namely three-dimensional vortex clusters (strong dissipation) and Qs (strong tangential Reynolds stress, -uv), are studied by direct numerical simulation of SS-HST with acceptable box aspect ratios and Reynolds numbers up to Reλ ≈ 250 (based on the Taylor microscale). The influence of the intermittency on time-independent thresholding is discussed. These structures have similar elongations in the streamwise direction to detached families in channels until they are of comparable size to the box. Their fractal dimensions and their inner and outer lengths as a function of volume agree well with their counterparts in channels. The study of their spatial organization finds that Qs of the same type are aligned roughly in the direction of the velocity vector of the quadrant they belong to, while Qs of different types are restricted by the fact that there should be no velocity clash, which makes Q2s (ejections, u < 0, v > 0) and Q4s (sweeps, u > 0, v < 0) pair in the spanwise direction. This is verified by inspecting velocity structures of other quadrants, such as u-w and v-w, in SS-HST, and also the detached families in the channel. The streamwise alignment of attached Qs of the same type in channels is due to the modulation of the wall.
The average flow field conditioned to Q2-Q4 pairs shows that vortex clusters are in the middle of the pair, but prefer the two shear layers lodging at the top and bottom of Q2s and Q4s respectively, which means that the spanwise vorticity inside the vortex clusters does not cancel. The wall amplifies the difference between the sizes of low- and high-speed streaks associated with attached Q2-Q4 pairs as the pairs reach closer to the wall, which is verified by the correlation of streamwise velocity conditioned to attached Q2s and Q4s of different heights. Vortex clusters in SS-HST associated with Q2s or Q4s are also flanked in the spanwise direction by a pair of counter-rotating streamwise vortices, as in the channel. The long conical 'wake' originating from tall attached vortex clusters, found by del Álamo et al. (2006) and Flores et al. (2007b) and absent in SS-HST, only holds for tall attached vortices associated with Q2s, but not for those associated with Q4s, whose averaged flow field is actually quite similar to that in SS-HST. Third, the temporal evolutions of Qs and vortex clusters are studied by using the method invented by Lozano-Durán & Jiménez (2014b). Structures are sorted into branches, which are further organized into graphs. Both spatial and temporal resolutions are chosen to be able to capture the most probable pointwise Kolmogorov length and time at the most extreme moment. Due to the minimal-box effect, there is only one main graph, consisting of almost all the branches, with its instantaneous volume and number of structures following the intermittent kinetic energy and enstrophy. The lifetime of branches, which makes more sense for primary branches, loses its meaning in SS-HST because the contributions of primary branches to the total Reynolds stress or enstrophy are almost negligible. This is also true in the outer layer of channels. Instead, the lifetime of graphs in channels is compared with the bursting time in SS-HST.
Vortex clusters are associated with almost the same quadrant in terms of their mean velocities during their lifetime, especially those related to ejections and sweeps. As in channels, ejections in SS-HST move upwards with an average vertical velocity uτ (the friction velocity), while the opposite is true for sweeps. Vortex clusters, on the other hand, are almost still in the vertical direction. In the streamwise direction, they are advected by the local mean velocity and thus deformed by the mean velocity difference. Sweeps and ejections move faster and slower than the mean velocity respectively, both by 1.5uτ. Vortex clusters move with the same speed as the mean velocity. It is verified that the incoherence of structures near the wall is due to the wall rather than to their small size. The results suggest that coherent structures in channels are not particularly associated with the wall, or even with a given shear profile.
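The quadrant classification used throughout (Q2 ejections with u < 0, v > 0; Q4 sweeps with u > 0, v < 0) amounts to a sign test on the velocity fluctuations. A minimal sketch on synthetic values (the arrays are illustrative, not simulation output):

```python
import numpy as np

def quadrants(u, v):
    """Quadrant of each (u, v) fluctuation pair: Q1 outward
    interaction, Q2 ejection (u<0, v>0), Q3 inward interaction,
    Q4 sweep (u>0, v<0). Points on an axis are labelled 0."""
    q = np.zeros(np.shape(u), dtype=int)
    q[(u > 0) & (v > 0)] = 1
    q[(u < 0) & (v > 0)] = 2   # ejection
    q[(u < 0) & (v < 0)] = 3
    q[(u > 0) & (v < 0)] = 4   # sweep
    return q

u = np.array([0.5, -0.3, -0.2, 0.4])
v = np.array([0.1, 0.2, -0.4, -0.6])
print(quadrants(u, v))  # [1 2 3 4]
```

In the thesis's setting, Q structures are additionally restricted to points where the tangential stress -uv is intense enough to exceed a fixed threshold; only the sign classification is shown here.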

Relevance:

30.00%

Publisher:

Abstract:

We have studied the adsorption of two structurally similar forms of hemoglobin (met-Hb and HbCO) onto a hydrophobic self-assembled methyl-terminated thiol monolayer on a gold surface, using the Quartz Crystal Microbalance (QCM) technique. This technique allows time-resolved, simultaneous measurements of changes in frequency (f) (cf. mass) and energy dissipation (D) (cf. rigidity/viscoelastic properties) of the QCM, which makes it possible to follow the viscoelastic properties of the protein layers during the adsorption process. Below the isoelectric points of both met-Hb and HbCO, the ΔD vs. Δf graphs displayed two phases with significantly different slopes, indicating two states of the adsorbed proteins with different viscoelastic properties. The slope of the first phase was smaller than that of the second phase, which indicates that the first phase was associated with binding of a more rigidly attached, presumably denatured protein layer, whereas the second phase was associated with formation of a second layer of more loosely bound proteins. This second layer desorbed, e.g., upon reduction of the Fe3+ of adsorbed met-Hb and subsequent binding of carbon monoxide (CO) to form HbCO. Thus, the results suggest that the adsorbed proteins in the second layer were in a native-like state. This information could only be obtained from simultaneous, time-resolved measurements of changes in both D and f, demonstrating that the QCM technique provides unique information about the mechanisms of protein adsorption onto solid surfaces.
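The two-phase behaviour can be quantified by fitting two straight lines to the ΔD-vs-Δf trace and comparing their slopes. The sketch below uses synthetic data with a built-in kink; the brute-force breakpoint search and all numbers are illustrative, not the paper's analysis:

```python
import numpy as np

def two_phase_slopes(df, dD):
    """Fit two straight lines to a ΔD-vs-Δf trace, choosing the
    breakpoint that minimizes the total squared residual; returns
    the slopes of the two phases."""
    best = None
    for k in range(2, len(df) - 2):
        r = 0.0
        slopes = []
        for x, y in ((df[:k], dD[:k]), (df[k:], dD[k:])):
            p, res, *_ = np.polyfit(x, y, 1, full=True)
            slopes.append(p[0])
            r += res[0] if len(res) else 0.0
        if best is None or r < best[0]:
            best = (r, slopes)
    return best[1]

# Synthetic trace: shallow first phase (rigid layer), steeper second
# phase (loosely bound layer); frequency decreases as mass adds.
df = np.linspace(0, -60, 30)
dD = np.where(df > -30, -0.02 * df, 0.6 - 0.08 * (df + 30))
s1, s2 = two_phase_slopes(df, dD)
print(abs(s1) < abs(s2))  # True: more dissipation per unit mass in phase 2
```

A smaller |ΔD/Δf| slope marks the rigid first layer and a larger one the viscoelastic second layer, mirroring the interpretation given in the abstract.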

Relevance:

30.00%

Publisher:

Abstract:

Postprint

Relevance:

30.00%

Publisher:

Abstract:

Graphs of second harmonic generation coefficients and electro-optic coefficients (measured by ellipsometry, attenuated total reflection, and two-slit interference modulation) as a function of chromophore number density (chromophore loading) are experimentally observed to exhibit maxima for polymers containing chromophores characterized by large dipole moments and polarizabilities. Modified London theory is used to demonstrate that this behavior can be attributed to the competition between chromophore-applied-field and chromophore-chromophore electrostatic interactions. The comparison of theoretical and experimental data explains why the promise of exceptional macroscopic second-order optical nonlinearity predicted for organic materials has not been realized, and suggests routes for circumventing current limitations to large optical nonlinearity. The results also suggest extensions of measurement and theoretical methods to achieve an improved understanding of intermolecular interactions in condensed-phase materials, including materials prepared by sequential synthesis and block copolymer methods.
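The qualitative shape of such loading curves is easy to reproduce with a toy model: the macroscopic response grows linearly with number density N, while intermolecular electrostatic interactions attenuate the poling-induced order as N grows. The Gaussian attenuation factor and the scale N0 below are ad-hoc assumptions chosen for illustration; they are not the modified London theory of the paper.

```python
import numpy as np

def electro_optic_response(N, N0=1.0):
    """Toy loading curve: linear growth in N times an ad-hoc
    attenuation exp(-(N/N0)**2) standing in for the loss of acentric
    order caused by chromophore-chromophore interactions."""
    return N * np.exp(-((N / N0) ** 2))

N = np.linspace(0, 3, 301)   # density in units of the hypothetical N0
r = electro_optic_response(N)
peak = N[np.argmax(r)]
print(peak)  # maximum near N0/sqrt(2)
```

Whatever the precise attenuation law, any factor that decays faster than 1/N produces the experimentally observed maximum rather than monotonic growth.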

Relevance:

30.00%

Publisher:

Abstract:

Visual habit formation in monkeys, assessed by concurrent visual discrimination learning with 24-h intertrial intervals (ITI), was found earlier to be impaired by removal of the inferior temporal visual area (TE) but not by removal of either the medial temporal lobe or inferior prefrontal convexity, two of TE's major projection targets. To assess the role in this form of learning of another pair of structures to which TE projects, namely the rostral portion of the tail of the caudate nucleus and the overlying ventrocaudal putamen, we injected a neurotoxin into this neostriatal region of several monkeys and tested them on the 24-h ITI task as well as on a test of visual recognition memory. Compared with unoperated monkeys, the experimental animals were unaffected on the recognition test but showed an impairment on the 24-h ITI task that was highly correlated with the extent of their neostriatal damage. The findings suggest that TE and its projection areas in the ventrocaudal neostriatum form part of a circuit that selectively mediates visual habit formation.

Relevance:

30.00%

Publisher:

Abstract:

Based on Tversky and Kahneman's Prospect Theory, we test the existence of reference dependence, loss aversion and diminishing sensitivity in Spanish tourism. To do this, we incorporate the reference-dependent model into a Multinomial Logit Model with Random Parameters, which controls for heterogeneity, and apply it to a sample of vacation choices made by Spaniards. We find that the difference between the reference price and the actual price is taken into account in decisions, confirming that reference dependence exists; that people react more strongly to price increases than to price decreases relative to their reference price, which is evidence in favor of the loss aversion phenomenon; and that there is diminishing sensitivity for losses only, the value function showing convexity over these negative values.
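The three effects tested above are all properties of the prospect-theoretic value function. A minimal sketch with the standard Tversky and Kahneman (1992) parameterization (alpha = beta = 0.88, lambda = 2.25; these are textbook values, not the estimates of this paper):

```python
def value(price, reference, alpha=0.88, beta=0.88, lam=2.25):
    """Reference-dependent value of paying `price` given a reference
    price: prices below the reference are gains, above it losses.
    lam > 1 encodes loss aversion; alpha, beta < 1 encode diminishing
    sensitivity (concave for gains, convex for losses)."""
    x = reference - price          # gain if positive, loss if negative
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

gain = value(90, 100)    # price 10 below the reference
loss = value(110, 100)   # price 10 above the reference
print(gain, loss, abs(loss) > gain)  # losses loom larger than gains
```

Convexity for losses means a second price increase of the same size hurts less than the first, which is exactly the "diminishing sensitivity for losses" pattern the paper reports.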

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we propose a novel method for the unsupervised clustering of graphs in the context of the constellation approach to object recognition. The method is an EM central clustering algorithm that builds prototypical graphs on the basis of fast matching with graph transformations. Our experiments, both with random graphs and in realistic situations (visual localization), show that our prototypes improve on the set median graphs and also on the prototypes derived from our previous incremental method. We also discuss how the method scales with a growing number of images.
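The set median graph baseline mentioned above is simply the member of a graph set that minimizes the summed distance to all the others. The self-contained sketch below substitutes a crude symmetric-difference distance for the paper's transformation-based matching; the graph encoding and data are illustrative only:

```python
def graph_distance(g, h):
    """Toy graph distance: size of the symmetric difference of the
    labelled node and edge sets. A stand-in for the edit distances
    used in graph matching, kept trivial for self-containment."""
    return len(g["edges"] ^ h["edges"]) + len(g["nodes"] ^ h["nodes"])

def set_median_graph(graphs):
    """Set median: the member minimizing the summed distance to all
    the others, i.e. the baseline learned prototypes improve on."""
    return min(graphs, key=lambda g: sum(graph_distance(g, h) for h in graphs))

P3 = {"nodes": frozenset({0, 1, 2}), "edges": frozenset({(0, 1), (1, 2)})}
C3 = {"nodes": frozenset({0, 1, 2}), "edges": frozenset({(0, 1), (1, 2), (0, 2)})}
P4 = {"nodes": frozenset({0, 1, 2, 3}), "edges": frozenset({(0, 1), (1, 2), (2, 3)})}
print(set_median_graph([P3, P3, C3, P4]) == P3)  # True
```

Unlike the set median, an EM-learned prototype need not coincide with any input graph, which is why it can sit "between" cluster members and fit them better.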

Relevance:

30.00%

Publisher:

Abstract:

Applied colorimetry is an important module in the program of the elective subject “Colour Science: industrial applications”. This course is taught in the Optics and Optometry Degree, and it has been used as a testbed for the application of new teaching and assessment techniques consistent with the new European Higher Education Area. In particular, the main objective was to reduce attendance at lessons and to encourage the individual and collective work of students. The reason for this approach is the idea that students are able to work at their own learning pace. Within this dynamic, we propose online lab practice based on Excel templates that our research group has developed ad hoc for different aspects of colorimetry, such as conversion between colour spaces, calculation of perceptual descriptors (hue, saturation, lightness), calculation of colour differences, colour matching of dyes, etc. The practice presented in this paper focuses on the learning of colour differences. The session is based on a specific Excel template that computes colour differences and plots graphs of these differences using the CIE ΔE and CIE ΔE94 formulas defined in the CIELAB colour space. This template is embedded in a website that guides the student’s work in a proper and organized way. The aim was to unify all the student work on one website, so that the student can learn autonomously and sequentially, at his or her own pace. To this end, all the tools, links and documents are collected for each proposed activity in order to achieve guided, specific objectives. In the context of educational innovation, this type of website is usually called a WebQuest. The design of a WebQuest is established according to the criteria of usability and simplicity. There are great advantages to using WebQuests versus the “Campus Virtual” toolbox available at the University of Alicante.
The Campus Virtual is an unfriendly environment for this specific purpose, as the activities are organized in different sections depending on whether the activity is a discussion, an exercise, a self-assessment or a download of materials. With this separation, it is more difficult for the student to follow an organized sequence. Our WebQuest, by contrast, provides a more intuitive graphical environment in which all the tasks, and the resources needed to complete them, are grouped and organized in a linear sequence. In this way, guided student learning is optimized. Furthermore, with this simplification the student focuses on learning and does not waste resources. Finally, this tool has a wide set of potential applications: online courses on applied colorimetry for postgraduate students, OpenCourseWare, etc.
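The colour-difference computations behind the Excel template can be sketched as follows. This is a generic implementation of the standard CIE ΔE (1976) and ΔE94 formulas in CIELAB, not the template itself; the kL, kC, kH weights default to the usual graphic-arts values:

```python
import math

def delta_e76(lab1, lab2):
    """CIE 1976 colour difference: Euclidean distance in CIELAB."""
    return math.dist(lab1, lab2)

def delta_e94(lab1, lab2, kL=1.0, kC=1.0, kH=1.0):
    """CIE 1994 colour difference, with graphic-arts weightings by default."""
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    dL = L1 - L2
    C1 = math.hypot(a1, b1)            # chroma of the reference colour
    C2 = math.hypot(a2, b2)
    dC = C1 - C2
    da, db = a1 - a2, b1 - b2
    dH2 = max(da * da + db * db - dC * dC, 0.0)   # hue term, clamped against rounding
    sL, sC, sH = 1.0, 1.0 + 0.045 * C1, 1.0 + 0.015 * C1
    return math.sqrt((dL / (kL * sL)) ** 2
                     + (dC / (kC * sC)) ** 2
                     + dH2 / (kH * sH) ** 2)
```

Because ΔE94 divides the chroma and hue terms by weights that grow with chroma, it never exceeds the plain Euclidean ΔE for the same pair of colours, which is exactly the kind of comparison the template asks students to plot.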

Relevância:

30.00%

Publicador:

Resumo:

Examining a team’s performance from a physical point of view, its momentum might indicate unexpected turning points towards defeat or success. Physicists describe momentum as requiring some effort to get started, but also as relatively easy to keep going once a sufficient level is reached (Reed and Hughes, 2006). Unlike football, rugby, handball and many other sports, a regular volleyball match is limited not by time but by the points that need to be gathered. Every minute, more than one point is won by one team or the other, so a series of successive points enlarges the gap between the teams, making it more and more difficult to catch up with the leading one. This concept of gathering momentum, or losing it, can give coaches, athletes and sports scientists further insight into winning and losing performances. Momentum investigations also address dependencies between performances, asking whether future performances rely on past streaks. Squash and volleyball share the characteristic of being played up to a certain number of points, and the momentum of squash players was examined by Hughes et al. (2006). The initial aim was to expand normative profiles of elite squash players, using momentum graphs of winners and errors to explore ‘turning points’ in a performance. Dynamic systems theory has enabled the definition of perturbations in sports exhibiting rhythms (Hughes et al., 2000; McGarry et al., 2002; Murray et al., 2008); how players and teams cause these disruptions of rhythm can inform on the way they play, and these techniques also contribute to profiling methods. Together with the analysis of one’s own performance, it is essential to understand the opposition’s tactical strengths and weaknesses. By modelling the opposition’s performance it is possible to predict certain outcomes and patterns, and therefore to intervene or change tactics before the critical incident occurs.
The modelling of competitive sport is an informative analytic technique, as it directs the attention of the modeller to the critical aspects of the data that delineate successful performance (McGarry & Franks, 1996). Using tactical performance profiles to extract and visualise these critical aspects of performance, players can build justified and sophisticated tactical plans. The area is discussed and reviewed, critically appraising the research completed in this element of Performance Analysis.
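A momentum graph of the kind described above can be sketched as a running point differential over the rally-by-rally point sequence; the 'A'/'B' labels and helper names are illustrative, not taken from the cited studies:

```python
def momentum_series(points):
    """Cumulative point differential from a rally-by-rally point sequence.

    `points` is a sequence of 'A'/'B' labels naming which team won each
    rally; the running differential (team A minus team B) traces a
    momentum graph whose long monotone stretches mark scoring runs.
    """
    diff, series = 0, []
    for winner in points:
        diff += 1 if winner == 'A' else -1
        series.append(diff)
    return series

def longest_run(points):
    """Length of the longest unbroken scoring streak by either team."""
    best = cur = 1
    for prev, nxt in zip(points, points[1:]):
        cur = cur + 1 if nxt == prev else 1
        best = max(best, cur)
    return best
```

A turning point in the sense used above would show up as a local extremum of `momentum_series`, where one team's streak ends and the other's begins.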

Relevância:

30.00%

Publicador:

Resumo:

Statistics can be useful when assessing the practical relevance of varying rules and practices on the involuntary loss of nationality across EU member states. Yet while much progress has been made within the EU in recent years with regard to the collection of comparable and reliable information on the acquisition of nationality, statistics on the loss of nationality are hard to find and, where available, difficult to interpret. In this comparative report, the authors explore the landscape of existing statistical data on loss of nationality in the European Union. They identify challenges to the existing methods of data collection and data interpretation and introduce an online statistical database, bringing together all existing statistical data on loss of nationality in the EU. These data are summarised in tables and graphs and discussed with reference to the relevant national and European sources. The authors conclude with recommendations to policy-makers on how to improve data collection in this area.

Relevância:

30.00%

Publicador:

Resumo:

Research analysis of electrocardiograms (ECG) today is carried out mostly using the time-dependent signals of the different leads shown in graphs. The determination of ECG parameters is performed by qualified personnel and requires particular skills. To support decoding of the cardiac depolarization phase of the ECG, there are methods that analyze space-time charts in three dimensions, in which the heartbeat is described by the trajectory of its electrical vector. On this basis, it can be assumed that all the options available in classical ECG analysis of this time segment can also be obtained with this technique. The investigated three-dimensional ECG visualization techniques, combined with quantitative methods, yield additional features of cardiac depolarization and allow better exploitation of the information content of the given ECG signals.
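Assuming three orthogonal (Frank-style) leads X, Y and Z, the 3-D trajectory of the electrical heart vector and one simple quantitative feature derived from it can be sketched as follows; the function names and the choice of feature are illustrative, not taken from the paper:

```python
import math

def vector_magnitude(x, y, z):
    """Magnitude of the heart's electrical vector, sample by sample,
    assuming three orthogonal (Frank-style) leads X, Y, Z."""
    return [math.sqrt(a * a + b * b + c * c) for a, b, c in zip(x, y, z)]

def loop_length(x, y, z):
    """Total length of the 3-D vector loop traced during depolarization,
    one simple quantitative feature obtainable from the trajectory."""
    pts = list(zip(x, y, z))
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))
```

Plotting the `(x, y, z)` samples of the QRS segment as a curve gives the vector loop described above; scalar features such as its peak magnitude or loop length are the kind of additional quantitative descriptors the abstract refers to.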

Relevância:

30.00%

Publicador:

Resumo:

Investigating the relationship between factors (climate change, atmospheric CO2 concentration enrichment, and vegetation structure) and hydrological processes is important for understanding and predicting the interaction between the hydrosphere and biosphere. The Integrated Biosphere Simulator (IBIS) was used to evaluate the effects of climate change, rising CO2, and vegetation structure on hydrological processes in China at the end of the 21st century. Seven simulations were implemented using the assemblage of the IPCC climate and CO2 concentration scenarios, SRES A2 and SRES B1. The analysis results suggest that (1) climate change will have increasing effects on runoff, evapotranspiration (ET), transpiration (T), and the transpiration ratio (transpiration/evapotranspiration, T/E) in most hydrological regions of China, except the southernmost regions; (2) elevated CO2 concentrations will have increasing effects on runoff at the national scale, but at the hydrological-region scale the physiological effects induced by elevated CO2 concentration will depend on the vegetation types, climate conditions, and geographical background, with noticeable decreasing effects in the arid Inland region of China; (3) the leaf area index (LAI) compensation effect and the stomatal closure effect are the dominant factors on runoff in the arid Inland region and the southern moist hydrological regions, respectively; (4) the magnitude of the climate change effects (especially the changing precipitation pattern) on the water cycle is much larger than that of the elevated CO2 concentration effects; however, increasing CO2 concentration will be one of the most important modifiers of the water cycle; (5) the water resource condition will be improved in northern China but worsened in southernmost China under the IPCC climate change scenarios SRES A2 and SRES B1.

Relevância:

30.00%

Publicador:

Resumo:

Dinoflagellate cysts are useful for reconstructing upper-water conditions. For adequate reconstructions, detailed information is required about the relationship between modern-day environmental conditions and the geographic distribution of cysts in sediments. This Atlas summarises the modern global distribution of 71 organic-walled dinoflagellate cyst species. The synthesis is based on the integration of literature sources together with data from 2405 globally distributed surface sediment samples that have been prepared with a comparable methodology and taxonomy. The distribution patterns of individual cyst species are compared with environmental factors that are known to influence dinoflagellate growth, gamete production, encystment, excystment and the preservation of their organic-walled cysts: surface water temperature, salinity, nitrate, phosphate and chlorophyll-a concentrations, and bottom-water oxygen concentrations. Graphs are provided for every species depicting the relationship between the seasonal and annual variations of these parameters and the relative abundance of the species. Results have been compared with previously published records; an overview of the ecological significance, as well as information about the seasonal production of each individual species, is presented. The relationship between cyst distribution and variation in the aforementioned environmental parameters was analysed by performing a canonical correspondence analysis. All tested variables showed a positive relationship at the 99% confidence level. Sea-surface temperature is the parameter corresponding to the largest amount of variance within the dataset (40%), followed by nitrate, salinity, phosphate and bottom-water oxygen concentration, which correspond to 34%, 33%, 25% and 24% of the variance, respectively. Characterisations of selected environments, as well as a discussion of how these factors could have influenced the final cyst yield in sediments, are included.