999 results for Atomic systems
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Over the last 30 years, atomic force microscopy has become the most powerful tool for probing surfaces at the atomic scale. The tapping-mode atomic force microscope is used to generate high-quality, accurate images of the sample surface. However, in this mode of operation the microcantilever frequently exhibits chaotic motion due to the nonlinear characteristics of the tip-sample force interactions, degrading image quality. This kind of irregular motion must be avoided by the control system. In this work, the tip-sample interaction is modelled using the Lennard-Jones potential and a two-term Galerkin approximation. Additionally, the State-Dependent Riccati Equation and Time-Delayed Feedback Control techniques are used to force the tapping-mode atomic force microscope system onto a periodic orbit, preventing chaotic motion of the microcantilever.
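The tip-sample interaction mentioned above is modelled with a Lennard-Jones-type potential. As a minimal, self-contained sketch, the force on the tip can be computed from the generic 12-6 form; the parameters epsilon and sigma here are illustrative placeholders, not the specific tip-sample model of the paper:

```python
import math

def lj_force(r, epsilon=1.0, sigma=1.0):
    """Force F(r) = -dV/dr for the 12-6 Lennard-Jones potential
    V(r) = 4*eps*[(sigma/r)**12 - (sigma/r)**6].
    Positive values are repulsive, negative values attractive."""
    sr6 = (sigma / r) ** 6
    return 24.0 * epsilon / r * (2.0 * sr6 ** 2 - sr6)

# The force vanishes at the equilibrium separation r* = 2^(1/6) * sigma.
r_star = 2.0 ** (1.0 / 6.0)
```

The force is repulsive below r* and attractive above it; it is exactly this strong nonlinearity near the surface that makes the cantilever dynamics bistable and potentially chaotic.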
Abstract:
We study the thermodynamic properties of a certain type of space-inhomogeneous Fermi and quantum spin systems on lattices. We are particularly interested in the case where the space scale of the inhomogeneities stays macroscopic, but very small compared to the side length of the box containing the fermions or spins. The present study is, however, not restricted to "macroscopic inhomogeneities" and also includes the (periodic) microscopic and mesoscopic cases. We prove that, as in the homogeneous case, the pressure is, up to a minus sign, the conservative value of a two-person zero-sum game, named here the thermodynamic game. Because of the absence of space symmetries in such inhomogeneous systems, it is not clear from the outset what kind of objects equilibrium states should be in the thermodynamic limit. However, we give rigorous statements on correlation functions for large boxes. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.4763465]
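The "conservative value" of a two-person zero-sum game can be illustrated on a toy finite matrix game; the thermodynamic game of the paper is infinite-dimensional, so this sketch only fixes the game-theoretic terminology. The payoff matrix below is an arbitrary example with a saddle point, where the two security levels coincide:

```python
def maximin(payoff):
    """Row player's security level: max over rows of the row minimum."""
    return max(min(row) for row in payoff)

def minimax(payoff):
    """Column player's security level: min over columns of the column maximum."""
    return min(max(col) for col in zip(*payoff))

# Example payoff matrix with a saddle point at entry (0, 1):
# maximin == minimax == 2, the (conservative) value of the game.
A = [[4, 2, 3],
     [1, 0, 2]]
```

Without a saddle point the two levels differ (maximin <= minimax), and only mixed strategies restore equality.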
Abstract:
A detailed theoretical study of the 1,7,11,17-tetraoxa-2,6,12,16-tetraaza-cycloeicosane ligand ([20]AneN(4)O(4)) coordinated to Fe2+, Co2+, Ni2+, Ru2+, Rh2+, and Pd2+ transition metal ions was carried out with the B3LYP method. Two different cases were considered: nitrogen as the donor atom (1a(q)) and oxygen as the donor atom (1b(q)). In all cases studied, the 1a(q) structures were more stable than the 1b(q) ones. Within each row, the energy increases with increasing atomic number. The M2+ cation binding energies for the 1a(q) complexes increase in the following order: Fe2+ < Ru2+ < Co2+ < Ni2+ < Rh2+ < Pd2+.
Abstract:
Root canal preparation may damage NiTi instruments, resulting in wear and deformation. The aim of this study was to comparatively evaluate the surface topography of the cervical third of four different rotary systems, before and after being used twelve times, in 1,440 resin blocks with simulated root canals with standardized 45° curvatures, analyzed by atomic force microscopy (AFM). The blocks were divided into four groups and prepared according to the manufacturers' recommendations: Group 1 - K3 (R); Group 2 - Protaper Universal (R); Group 3 - Twisted Files (R); and Group 4 - Biorace (R). After each preparation, the instruments were washed and autoclaved. A total of 240 instruments were selected: from each group, 30 new instruments and 30 instruments that had been used for the 12th time. These instruments were analyzed by AFM and, for quantitative evaluation, the mean RMS (root mean square) values of the cervical third of the specimens from the four groups were used. The results showed that all the rotary files used for the 12th time suffered wear with a change in the topography of the cervical region of the active portion of the file (ANOVA p < 0.01). Classifying the specimens in increasing order of wear, Group 3 (2.8993 nm) presented the least wear, followed by Group 4 (12.2520 nm), Group 1 (36.0043 nm) and, lastly, Group 2 (59.8750 nm), with the largest amount of cervical surface wear. Microsc. Res. Tech. 75:97-102, 2012. (c) 2011 Wiley Periodicals, Inc.
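The RMS value used for the quantitative evaluation is simply the root-mean-square deviation of the measured heights about their mean. A minimal sketch, assuming a flat list of AFM height samples (real AFM software works on 2-D height maps, usually after plane subtraction):

```python
import math

def rms_roughness(heights):
    """Root-mean-square (RMS) roughness of AFM height samples,
    measured about the mean height."""
    n = len(heights)
    mean = sum(heights) / n
    return math.sqrt(sum((h - mean) ** 2 for h in heights) / n)
```

A perfectly flat profile gives zero roughness; larger values indicate a more heavily worn surface, which is how the four instrument groups were ranked.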
Abstract:
In molecular and atomic devices, the interaction between electrons and ionic vibrations plays an important role in electronic transport. The electron-phonon coupling can cause the loss of the electron's phase coherence, the opening of new conductance channels and the suppression of purely elastic ones. From the technological viewpoint, phonons might restrict the efficiency of electronic devices through energy dissipation, causing heating, power loss and instability. The state of the art in electron transport calculations consists of combining ab initio calculations via Density Functional Theory (DFT) with the Non-Equilibrium Green's Function (NEGF) formalism. In order to include electron-phonon interactions, one needs in principle to include a self-energy scattering term in the open-system Hamiltonian that takes into account the effect of the phonons on the electrons and vice versa. Nevertheless, this term can be obtained approximately by perturbative methods. In the First Born Approximation, one considers only the first-order terms of the electronic Green's function expansion. In the Self-Consistent Born Approximation, the interaction self-energy is calculated with the perturbed electronic Green's function in a self-consistent way. In this work we describe how to incorporate the electron-phonon interaction into the SMEAGOL program (Spin and Molecular Electronics in Atomically Generated Orbital Landscapes), an ab initio code for electronic transport based on the combination of DFT + NEGF. This provides a tool for calculating the transport properties of material-specific systems, particularly in molecular electronics. Preliminary results will be presented, showing the effects produced by considering the electron-phonon interaction in nanoscale devices.
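The self-consistent Born iteration described above can be illustrated on a deliberately over-simplified scalar model: a single electronic level, wide-band leads with constant broadening gamma, and an elastic (zero-frequency) phonon so that the phonon self-energy reduces to Sigma_ph(E) = lam**2 * G(E). Everything here, including the model itself, is an illustrative assumption and not SMEAGOL's implementation or API:

```python
# Toy self-consistent Born loop: one level coupled to wide-band leads and,
# in a crude elastic-dephasing approximation, to a phonon mode.

def scba_green(E, eps0=0.0, gamma=0.5, lam=0.3, tol=1e-10, max_iter=500):
    """Return the converged retarded Green's function G(E) (a complex scalar)."""
    sigma_leads = -0.5j * gamma          # wide-band lead self-energy
    G = 1.0 / (E - eps0 - sigma_leads)   # start from the non-interacting G
    for _ in range(max_iter):
        sigma_ph = lam ** 2 * G          # SCBA update of the phonon self-energy
        G_new = 1.0 / (E - eps0 - sigma_leads - sigma_ph)
        if abs(G_new - G) < tol:
            return G_new
        G = G_new
    return G
```

Stopping after the first update of `sigma_ph` would correspond to the First Born Approximation; iterating to convergence gives the self-consistent result. In a real NEGF code, G and the self-energies are matrices and Sigma_ph carries the full phonon spectrum.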
Abstract:
In this work, the growth and the magnetic properties of the transition metals molybdenum, niobium, and iron, and of the highly magnetostrictive C15 Laves phases of the RFe2 compounds (R: rare earth metals, here Tb, Dy, and Tb{0.3}Dy{0.7}), deposited on alpha-Al2O3 (sapphire) substrates, are analyzed. Next to (11-20) (a-plane) oriented sapphire substrates, mainly (10-10) (m-plane) oriented substrates were used. These show pronounced faceting after high-temperature annealing in air. Atomic force microscopy (AFM) measurements reveal a dependence of the height, width, and angle of the facets on the annealing temperature. The observed deviations of the facet angles with respect to the theoretical values of the sapphire (10-1-2) and (10-11) surfaces are explained by cross-section high-resolution transmission electron microscopy (HR-TEM) measurements. These show the plain formation of the (10-11) surface, while the second, energy-reduced (10-1-2) facet has a curved shape given by atomic steps of (10-1-2) layers and is fully formed only at the facet ridges and valleys. Thin films of Mo and Nb, respectively, deposited by means of molecular beam epitaxy (MBE) reveal non-twinned, (211)-oriented epitaxial growth on both non-faceted and faceted sapphire m-plane, as shown by X-ray and TEM evaluations. In the case of faceted sapphire, the two bcc crystals overgrow the facets homogeneously. Here, the bcc (111) surface is nearly parallel to the sapphire (10-11) facet and the Mo/Nb (100) surface is nearly parallel to the sapphire (10-1-2) surface. (211)-oriented Nb templates on sapphire m-plane can be used for the non-twinned, (211)-oriented growth of RFe2 films by means of MBE. Again, the quality of the RFe2 films grown on faceted sapphire is almost equal to that of films on the non-faceted substrate. For comparison, thin RFe2 films of the established (110) and (111) orientations were prepared.
Magnetic and magnetoelastic measurements performed in a self-designed setup reveal a high quality of the samples. No difference between samples with undulated and flat morphology can be observed. In addition to the preparation of covering, undulating thin films on faceted sapphire m-plane, nanoscopic structures of Nb and Fe were prepared by shallow-incidence MBE. The formation of the nanostructures can be explained by shadowing of the atomic beam due to the facets, in addition to de-wetting effects of the metals on the heated sapphire surface. Accordingly, the nanostructures form at the facet ridges and overgrow them. The morphology of the structures can be varied by the deposition conditions, as was shown for Fe. The shape of the structures varies from pearl-necklet-strung spherical nanodots with a diameter of a few tens of nanometers, to oval nanodots a few hundred nanometers in length, to continuous nanowires. Magnetization measurements reveal uniaxial magnetic anisotropy with the easy axis of magnetization parallel to the facet ridges. The shape of the hysteresis depends on the morphology of the structures. The magnetization reversal processes of the spherical and oval nanodots were simulated by micromagnetic modelling and can be explained by the formation of magnetic vortices.
Abstract:
Over the past three decades, collinear laser spectroscopy has proven to be an excellent tool for determining the nuclear charge radii of medium-heavy and heavy short-lived atomic nuclei. Only recently, however, could it be extended to the isotopes of very light elements. This region of the chart of nuclides is of particular interest, because the first ab initio models of nuclear physics, which describe the structure of an atomic nucleus in terms of individual nucleons and realistic interaction potentials, are currently applicable only to the lightest elements. Moreover, a particularly exotic form of atomic nuclei exists in this region: the so-called halo nuclei. The chain of beryllium isotopes is distinguished by the occurrence of the one-neutron halo nucleus 11Be and the two- or four-neutron halo 14Be. The isotope 12Be is of particular importance owing to its position between these two exotic nuclei and the magic shell closure N = 8 expected from the shell model. Within this work, several frequency-stabilized laser systems for collinear laser spectroscopy were set up. At TRIGA-SPEC, a frequency-doubled diode laser system with a tapered amplifier and a frequency-comb-stabilized titanium-sapphire laser with a frequency-doubling stage are now available, among others, for spectroscopy of refractory elements above molybdenum, and were used for first test experiments. In addition, efficient frequency quadrupling of a titanium-sapphire laser was demonstrated. At ISOLDE/CERN, a frequency-comb-stabilized dye laser and an iodine-stabilized dye laser were installed and used for laser spectroscopy of 9,10,11,12Be.
Thanks to the improved laser system and the use of delayed coincidence detection of photons and ions, the sensitivity of the beryllium spectroscopy was increased by more than two orders of magnitude, extending the earlier measurements on 7−11Be for the first time to the isotope 12Be. Furthermore, the accuracy of the absolute transition frequencies and of the isotope shifts of the isotopes 9,10,11Be was significantly improved. By comparison with results of the Fermionic Molecular Dynamics model, the trend of the charge radii of the lighter isotopes can be explained by the pronounced cluster structure of the beryllium nuclei. For 12Be it becomes evident that the ground state is dominated by an (sd)2 configuration instead of the p2 configuration expected from the shell model. This is a clear indication of the previously observed disappearance of the N = 8 shell closure at 12Be.
Abstract:
The remarkable advances in nanoscience and nanotechnology over the last two decades allow one to manipulate individual atoms, molecules and nanostructures, make it possible to build devices of only a few nanometers, and enhance the nano-bio fusion in tackling biological and medical problems. This complies with the ever-increasing need for device miniaturization, from magnetic storage devices and electronic building blocks for computers to chemical and biological sensors. Despite continuing efforts based on conventional methods, they are likely to reach the fundamental limit of miniaturization in the next decade, when feature lengths shrink below 100 nm. On the one hand, quantum mechanical effects of the underlying material structure dominate device characteristics. On the other hand, one faces the technical difficulty of fabricating uniform devices. This has posed a great challenge for both the scientific and the technical communities. The proposal of using a single or a few organic molecules in electronic devices has not only opened an alternative way of miniaturization in electronics, but also brought up brand-new concepts and physical working mechanisms in electronic devices. This thesis work stands as one of the efforts in understanding and building electronic functional units at the molecular and atomic levels. We have explored the possibility of having molecules work in a wide spectrum of electronic devices, ranging from molecular wires, spin valves/switches, diodes and transistors to sensors. More specifically, we have observed a significant magnetoresistive effect in a spin-valve structure where the non-magnetic spacer sandwiched between two magnetic conducting materials is replaced by a self-assembled monolayer of organic molecules or a single molecule (such as a carbon fullerene). The diode behavior in donor(D)-bridge(B)-acceptor(A) type single molecules is then discussed, and a unimolecular transistor is designed.
Lastly, we have proposed and preliminarily tested the idea of using functionalized electrodes for rapid nanopore DNA sequencing. In these studies, the fundamental roles of molecules and molecule-electrode interfaces in quantum electron transport have been investigated based on first-principles calculations of the electronic structure. Both the intrinsic properties of the molecules themselves and the detailed interfacial features are found to play critical roles in electron transport at the molecular scale. The flexibility and tailorability of the properties of molecules have opened great opportunities for a purpose-driven design of electronic devices from the bottom up. The results gained from this work have helped in understanding the underlying physics, developing the fundamental mechanisms and providing guidance for future experimental efforts.
Abstract:
Using ultracold alkaline-earth atoms in optical lattices, we construct a quantum simulator for U(N) and SU(N) lattice gauge theories with fermionic matter based on quantum link models. These systems share qualitative features with QCD, including chiral symmetry breaking and restoration at nonzero temperature or baryon density. Unlike classical simulations, a quantum simulator does not suffer from sign problems and can address the corresponding chiral dynamics in real time.
Abstract:
Abelian and non-Abelian gauge theories are of central importance in many areas of physics. In condensed matter physics, Abelian U(1) lattice gauge theories arise in the description of certain quantum spin liquids. In quantum information theory, Kitaev's toric code is a Z(2) lattice gauge theory. In particle physics, Quantum Chromodynamics (QCD), the non-Abelian SU(3) gauge theory of the strong interactions between quarks and gluons, is nonperturbatively regularized on a lattice. Quantum link models extend the concept of lattice gauge theories beyond the Wilson formulation, and are well suited for both digital and analog quantum simulation using ultracold atomic gases in optical lattices. Since quantum simulators do not suffer from the notorious sign problem, they open the door to studies of the real-time evolution of strongly coupled quantum systems, which are impossible with classical simulation methods. A plethora of interesting lattice gauge theories suggests itself for quantum simulation, which should allow us to address very challenging problems, ranging from confinement and deconfinement, or chiral symmetry breaking and its restoration at finite baryon density, to color superconductivity and the real-time evolution of heavy-ion collisions, first in simpler model gauge theories and ultimately in QCD.
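The Z(2) gauge structure of the toric code rests on the fact that its X-type "star" operators and Z-type "plaquette" operators always overlap on an even number of qubits and therefore commute, even though X and Z anticommute on a single qubit. A minimal numerical check; the six-qubit overlap pattern below is illustrative, not the full toric-code lattice:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=float)   # Pauli X
Z = np.array([[1, 0], [0, -1]], dtype=float)  # Pauli Z

def pauli_string(ops, n):
    """Tensor product over n qubits; ops maps qubit index -> 2x2 matrix,
    identity elsewhere."""
    out = np.array([[1.0]])
    for q in range(n):
        out = np.kron(out, ops.get(q, I))
    return out

n = 6
star      = pauli_string({0: X, 1: X, 2: X, 3: X}, n)  # X-type stabilizer
plaquette = pauli_string({2: Z, 3: Z, 4: Z, 5: Z}, n)  # Z-type stabilizer
# The two operators share qubits 2 and 3 (an even overlap), so they commute.
commutator = star @ plaquette - plaquette @ star
```

Because all stabilizers commute, they can be measured simultaneously; their common +1 eigenspace is the protected code space.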
Abstract:
With this study, we investigate the mineralogical variations associated with the low-temperature (<100°C) alteration of normal tholeiitic pillow basalts varying in age from 0.8 to 3.5 Ma. Their alteration intensity varies systematically and is related to several factors, including (1) the aging of the igneous crust, (2) the increase of temperatures from the younger to the older sites, measured at the sediment/basement interface, (3) the local and regional variations in lithology and primary porosity, and (4) the degree of pillow fracturing. Fractures represent the most important pathways that allow significant penetration of fluids into the rock and are virtually the only factor controlling the alteration of the glassy rim and the early stages of pillow alteration. Three different alteration stages have been recognized: alteration of the glassy margin, oxidizing alteration through fluid circulation in fracture systems, and reducing alteration through diffusion. All the observed mineralogical and chemical variations occurring during the early stages of alteration are interpreted as the result of the interaction of the rock with "normal," alkaline, and oxidizing seawater along preferential pathways represented by the concentric and radial crack systems. The chemical composition of the fluid progressively evolves as it moves into the basalt, leading to a reducing alteration stage, which is initially responsible for the precipitation of Fe-rich saponite and minor sulfides and subsequently for the widespread formation of carbonates. At the same time, the system evolved from being "water dominated" to being "rock dominated." No alteration effects were observed in the pillow basalts that would have required temperatures higher than those measured during Leg 168 at the basement/sediment interface (i.e., between 15° and 64°C).
Abstract:
Service-Oriented Computing (SOC) is a widely accepted paradigm for the development of flexible, distributed and adaptable software systems, in which service compositions perform more complex, higher-level, often cross-organizational tasks using atomic services or other service compositions. In such systems, Quality of Service (QoS) properties, such as performance, cost, availability or security, are critical for the usability of services and their compositions in concrete applications.
Analysis of these properties can become more precise and richer in information if it employs program analysis techniques, such as complexity and sharing analyses, which are able to simultaneously take into account both the control and the data structures, dependencies, and operations in a composition. Computation cost analysis for service composition can support predictive monitoring and proactive adaptation by automatically inferring computation cost in the form of upper- and lower-bound functions of the value or size of input messages. These cost functions can be used for adaptation by selecting service candidates that minimize the total cost of the composition, based on the actual data that is passed to them. The cost functions can also be combined with empirically collected infrastructural parameters to produce QoS bound functions of the input data that can be used to predict potential or imminent Service Level Agreement (SLA) violations at the moment of invocation. In mission-critical applications, effective and accurate continuous QoS prediction can be achieved by constraint modeling of composition QoS based on its structure, known data at runtime, and (when available) the results of complexity analysis. This approach can be applied to service orchestrations with centralized flow control, as well as to choreographies with multiple participants and complex stateful interactions. Sharing analysis can support adaptation actions, such as parallelization, fragmentation, and component selection, which are based on functional dependencies and the information content of the composition messages, internal data, and activities, in the presence of complex control constructs, such as loops, branches, and sub-workflows.
Both the functional dependencies and the information content (described using user-defined attributes) can be expressed using a first-order logic (Horn clause) representation, and the analysis results can be interpreted as lattice-based conceptual models.
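The cost-based candidate selection described in the abstract can be sketched as follows; the three candidate services and their cost-bound functions are entirely hypothetical placeholders, standing in for the bounds an automatic cost analysis would infer:

```python
# Hypothetical cost upper bounds for three interchangeable service
# candidates, each a function of the input message size n.
candidates = {
    "svcA": lambda n: 5.0 + 0.010 * n,      # high fixed cost, cheap per unit
    "svcB": lambda n: 1.0 + 0.050 * n,      # low fixed cost, dearer per unit
    "svcC": lambda n: 0.5 + 0.001 * n * n,  # quadratic: only good for tiny inputs
}

def select_service(n):
    """Pick the candidate whose cost bound is minimal for input size n."""
    return min(candidates, key=lambda name: candidates[name](n))
```

Because the bounds are functions of the actual input size, the optimal candidate changes with the data being passed, which is exactly what makes this selection a runtime adaptation action rather than a static design choice.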
Abstract:
Directed hypergraphs are an intuitive modelling formalism that has been used in problems related to propositional logic, relational databases, computational linguistics and machine learning.
Directed hypergraphs have also been presented as an alternative to directed (bipartite) graphs to facilitate the study of the interactions between components of complex systems that cannot naturally be modelled as binary relations. In this context, they are known as hyper-networks. A directed hypergraph is a generalization of a directed graph suitable for representing many-to-many relationships. While an edge in a directed graph defines a relation between two nodes of the graph, a hyperedge in a directed hypergraph defines a relation between two sets of nodes. Strong connectivity is an equivalence relation that induces a partition of the set of nodes of a directed hypergraph into strongly-connected components. These components can be collapsed into single nodes. As a result, the size of the original hypergraph can be significantly reduced if the strongly-connected components have many nodes. This approach can contribute to a better understanding of how the nodes of a hypergraph are connected, in particular when the hypergraphs are large. In the case of directed graphs, there are efficient algorithms that can be used to compute the strongly-connected components of large graphs. For instance, it has been shown that the macroscopic structure of the World Wide Web can be represented as a "bow-tie" diagram in which more than 70% of the nodes are distributed into three large sets, one of which is a large strongly-connected component. This particular structure has also been observed in complex networks in other fields, such as biology. Similar studies could not be conducted on directed hypergraphs because no algorithm existed for computing the strongly-connected components of such hypergraphs. In this thesis, we investigate ways to compute the strongly-connected components of directed hypergraphs. We present two new algorithms and we show their correctness and computational complexity. One of these algorithms is inspired by Tarjan's algorithm for directed graphs.
The second algorithm follows a simpler approach to compute the strongly-connected components. It is based on the fact that two strongly-connected nodes of a graph reach exactly the same nodes; in other words, the connected component of each node is the same. Both algorithms are empirically evaluated to compare their performance. To this end, we have produced a selection of random directed hypergraphs inspired by existing and well-known random graph models such as Erdős-Rényi, Newman-Watts-Strogatz and Barabási-Albert. Several optimizations of both algorithms were also implemented and analysed; in particular, collapsing the strongly-connected components of the directed graph obtained by removing certain complex hyperedges from the original directed hypergraph notably improves the running times for several of the hypergraphs used in the evaluation. Besides the application examples mentioned earlier, directed hypergraphs have also been employed in the field of knowledge representation. In particular, they have been used to compute the modules of an ontology. An ontology is defined as a collection of axioms that provides a formal specification of a set of terms and their relationships, and a module is a subset of an ontology that completely captures the meaning of certain terms as defined in the ontology. In particular, we focus on modules computed using the notion of syntactic locality. As ontologies can be very large, the computation of modules facilitates the reuse and maintenance of these ontologies. Analysing all modules of an ontology, however, is in general not feasible, as the number of modules grows exponentially in the number of terms and axioms of the ontology. Nevertheless, the modules can be succinctly represented using the atomic decomposition of an ontology. Using this representation, an ontology can be partitioned into atoms, which are maximal sets of axioms that co-occur in every module. The atomic decomposition is then defined as a directed graph in which each node corresponds to an atom and each edge represents a dependency relation between two atoms. In this thesis, we introduce the notion of an axiom dependency hypergraph, which is a generalization of the atomic decomposition of an ontology.
A module in the ontology corresponds to a connected component in the hypergraph, and the atoms of the ontology to its strongly-connected components. We apply our algorithms for directed hypergraphs to axiom dependency hypergraphs and, in this manner, compute the atoms of an ontology. To demonstrate the viability of this approach, we have implemented the algorithms in the application HyS, which computes the modules of ontologies and calculates their atomic decomposition. In the thesis, we provide an experimental evaluation of HyS with a selection of large and prominent biomedical ontologies, most of which are available in the NCBO BioPortal. HyS outperforms state-of-the-art implementations in the tasks of extracting modules and computing the atomic decomposition of these ontologies.
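The thesis notes that one of its algorithms is inspired by Tarjan's algorithm for directed graphs. For reference, a sketch of plain-digraph Tarjan is given below; the hypergraph generalization developed in the thesis is more involved and is not reproduced here:

```python
def tarjan_scc(graph):
    """Tarjan's strongly-connected components for an ordinary directed graph,
    given as {node: [successors]}. Returns a list of components (sets)."""
    index, low, on_stack = {}, {}, set()
    stack, sccs, counter = [], [], [0]

    def visit(v):
        index[v] = low[v] = counter[0]; counter[0] += 1
        stack.append(v); on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:          # tree edge: recurse
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:         # back edge into the current stack
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of an SCC
            comp = set()
            while True:
                w = stack.pop(); on_stack.discard(w); comp.add(w)
                if w == v:
                    break
            sccs.append(comp)

    for v in graph:
        if v not in index:
            visit(v)
    return sccs
```

This recursive form is fine for small inputs; for graphs the size of the ontologies evaluated in the thesis, an iterative variant is needed to avoid recursion-depth limits.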
Abstract:
Symmetry is commonly observed in many biological systems. Here we discuss representative examples of the role of symmetry in structural molecular biology. Point group symmetries are observed in many protein oligomers whose three-dimensional atomic structures have been elucidated by x-ray crystallography. Approximate symmetry also occurs in multidomain proteins. Symmetry often confers stability on the molecular system and results in economical usage of basic components to build the macromolecular structure. Symmetry is also associated with cooperativity. Mild perturbation from perfect symmetry may be essential in some systems for dynamic functions.
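The point-group symmetries mentioned above can be checked numerically: a structure has cyclic symmetry C_n if rotating it by 2*pi/n about the symmetry axis maps the structure onto itself. A toy 2-D sketch (real structural-biology tools compare full 3-D atomic coordinates and allow for the approximate symmetry noted in the abstract):

```python
import math

def has_cn_symmetry(points, n, tol=1e-9):
    """Check whether a 2-D point set is invariant under rotation by 2*pi/n
    about the origin (cyclic point-group symmetry C_n)."""
    theta = 2.0 * math.pi / n
    c, s = math.cos(theta), math.sin(theta)
    rotated = [(c * x - s * y, s * x + c * y) for x, y in points]

    def close(p, q):
        return abs(p[0] - q[0]) < tol and abs(p[1] - q[1]) < tol

    # Every rotated point must land on some original point.
    return all(any(close(r, p) for p in points) for r in rotated)

# Four identical "subunits" at the vertices of a square: C4-symmetric.
square = [(1, 0), (0, 1), (-1, 0), (0, -1)]
```

Loosening `tol` turns this exact test into a crude measure of approximate symmetry, mirroring the observation that mild deviations from perfect symmetry can be functionally important.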