Abstract:
Piezoelectric materials exhibit a coupled electromechanical behaviour that, especially in recent years, has attracted much interest, since it makes these materials suitable for a variety of electronic and industrial applications such as sensors, actuators, transducers and smart structures. Both mechanical and electric loads are generally applied to these devices and can cause high stress concentrations, particularly near defects or inhomogeneities such as flaws, cavities or included particles. A thorough understanding of their fracture behaviour is therefore crucial to improving their performance and avoiding unexpected failures, and a considerable number of research works have addressed this topic in recent decades. Most theoretical studies on the subject find their analytical background in the complex variable formulation of plane anisotropic elasticity. This approach has its origins in the pioneering works of Muskhelishvili and Lekhnitskii, who obtained the solution of the elastic problem in terms of independent analytic functions of complex variables. In the present work, the expressions of the stresses and of the elastic and electric displacements are obtained as functions of complex potentials through an analytical formulation that applies to the piezoelectric static case an approach originally introduced for orthotropic materials to solve elastodynamic problems. This method can be regarded as an alternative to other formalisms currently in use, such as Stroh's formalism. The equilibrium equations are reduced to a first-order system involving a six-dimensional vector field; a similarity transformation is then introduced to reach three independent Cauchy-Riemann systems, thus justifying the introduction of the complex variable notation (a schematic outline of this reduction is given after this abstract). Closed-form expressions of the near-tip stress and displacement fields are thereby obtained. In the theoretical study of cracked piezoelectric bodies, the issue of assigning consistent electric boundary conditions on the crack faces is of central importance and has been addressed by many researchers. Three different boundary conditions are commonly accepted in the literature: the permeable, the impermeable and the semipermeable ("exact") crack model. This thesis takes all three models into consideration, comparing the results obtained and analysing the effects of the choice of boundary condition on the solution. The influence of load biaxiality and of an applied remote electric field has been studied, showing that both can affect, to varying extents, the stress fields and the angle of initial crack extension, especially when non-singular terms are retained in the expressions of the electro-elastic solution. Furthermore, two different fracture criteria are applied to the piezoelectric case, and their outcomes are compared and discussed.

The work is organized as follows. Chapter 1 briefly introduces the fundamental concepts of Fracture Mechanics. Chapter 2 describes plane elasticity formalisms for an anisotropic continuum (Eshelby-Read-Shockley and Stroh) and introduces, for the simplified orthotropic case, the alternative formalism we propose. Chapter 3 outlines the Linear Theory of Piezoelectricity, its basic relations and electro-elastic equations. Chapter 4 introduces the proposed method for obtaining the expressions of the stresses and of the elastic and electric displacements as functions of complex potentials; the solution is obtained in closed form and non-singular terms are retained as well. Chapter 5 presents several numerical applications aimed at estimating the effects of load biaxiality, of the applied electric field and of the assumed crack permittivity. Through the application of fracture criteria, the influence of these conditions on the response of the system, and in particular on the direction of crack branching, is thoroughly discussed.
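For orientation, the reduction described above can be sketched schematically (a generic Stroh/Lekhnitskii-type structure is assumed here for illustration; the symbols and exact operators are not those of the thesis):

\[
\partial_x \mathbf{w} = \mathbf{N}\,\partial_y \mathbf{w}, \qquad \mathbf{w}(x,y) \in \mathbb{R}^{6},
\]

where w collects the elastic displacements, the electric potential and the associated stress/induction functions. A similarity transformation bringing N to a form with eigenvalues p_k, \bar{p}_k (Im p_k > 0, k = 1, 2, 3) decouples the system into three Cauchy-Riemann systems in the variables z_k = x + p_k y, so that the electro-elastic fields are expressed through three analytic complex potentials f_k:

\[
\{\sigma_{ij},\, D_i,\, u_i,\, \varphi\} \;=\; 2\,\operatorname{Re} \sum_{k=1}^{3} c_k\, f_k(z_k),
\]

with constants c_k fixed by the material properties; near-tip fields and non-singular terms then follow from the expansion of the f_k.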
Abstract:
This work is concerned with the increasing relationship between two distinct multidisciplinary research fields, Semantic Web technologies and scholarly publishing, which in this context converge into one precise research topic: Semantic Publishing. In the spirit of the original aim of Semantic Publishing, i.e. the improvement of scientific communication by means of semantic technologies, this thesis proposes theories, formalisms and applications for opening up semantic publishing to an effective interaction between scholarly documents (e.g., journal articles) and their related semantic and formal descriptions. The main aim of this work is to increase users' comprehension of documents and to allow document enrichment, discovery and linkage to document-related resources and contexts, such as other articles and raw scientific data. In order to achieve these goals, this thesis investigates and proposes solutions for three of the main issues that semantic publishing promises to address, namely: the need for tools linking document text to a formal representation of its meaning, the lack of complete metadata schemas for describing documents according to the publishing vocabulary, and the absence of effective user interfaces for easily acting on semantic publishing models and theories.
Abstract:
Aerosol particles influence climate by scattering and absorbing radiation and by acting as nuclei for cloud droplets and ice crystals. In addition, aerosols have a strong impact on air pollution and public health. Gas-particle interactions are important processes because they influence the physical and chemical properties of aerosols, such as toxicity, reactivity, hygroscopicity and optical properties. Owing to a lack of experimental data and universal model formalisms, however, the mechanisms and kinetics of gas uptake and of the chemical transformation of organic aerosol particles are insufficiently characterized. Both the chemical transformation and the adverse health effects of toxic and allergenic aerosol particles, such as soot, polycyclic aromatic hydrocarbons (PAHs) and proteins, are so far not well understood.

Kinetic flux models for aerosol surface and particle bulk chemistry were developed on the basis of the Pöschl-Rudich-Ammann formalism for gas-particle interactions. First, the kinetic double-layer surface model K2-SURF was developed, which describes the degradation of PAHs on aerosol particles in the presence of ozone, nitrogen dioxide, water vapour, hydroxyl and nitrate radicals. Competitive adsorption and chemical transformation of the surface lead to a strongly non-linear dependence of ozone uptake on gas composition. Under atmospheric conditions, the chemical lifetime of PAHs ranges from a few minutes on soot, through several hours on organic and inorganic solids, to days on liquid particles (a toy illustration of this type of surface-layer kinetics is sketched after this abstract).

Subsequently, the kinetic multi-layer model KM-SUB was developed to describe the chemical transformation of organic aerosol particles. KM-SUB is able to resolve explicitly the transport processes and chemical reactions at the surface and in the bulk of aerosol particles. In contrast to earlier models, it requires no simplifying assumptions about steady-state conditions or radial mixing. In combination with literature data and new experimental results, KM-SUB was applied to elucidate the effects of interfacial and bulk transport processes on the ozonolysis and nitration of protein macromolecules, oleic acid and related organic compounds. The kinetic models developed in this study are intended to serve as a basis for the development of a detailed mechanism of aerosol chemistry and for the derivation of simplified yet realistic parameterizations for large-scale global atmospheric and climate models.

The experiments and model calculations carried out in this study provide evidence for the formation of long-lived reactive oxygen intermediates (ROIs) in the heterogeneous reaction of ozone with aerosol particles. The chemical lifetime of these intermediates exceeds 100 s, much longer than the surface residence time of molecular O3 (~10^-9 s). The ROIs explain apparent discrepancies between earlier quantum-mechanical calculations and kinetic experiments. They play a key role in the chemical transformation as well as in the adverse health effects of toxic and allergenic fine-particulate-matter components such as soot, PAHs and proteins. ROIs are presumably also involved in the decomposition of ozone on mineral dust and in the formation and growth of secondary organic aerosols. Moreover, ROIs constitute a link between atmospheric and biospheric multiphase processes (chemical and biological ageing).

Organic compounds can occur as amorphous solids or in a semi-solid state, which influences the rate of heterogeneous reactions and multiphase processes in aerosols. Flow-tube experiments show that the ozone uptake and the oxidative ageing of amorphous proteins are kinetically limited by bulk diffusion. The reactive gas uptake increases markedly with increasing relative humidity, which can be explained by a decrease in viscosity caused by a phase transition of the amorphous organic matrix from a glassy to a semi-solid state (moisture-induced phase transition). The chemical lifetime of reactive compounds in organic particles can increase from seconds to days, since the diffusion rate in the semi-solid phase can decrease by orders of magnitude at low temperature or low humidity. The results of this study show how semi-solid phases can influence the effects of organic aerosols on air quality, health and climate.
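As a purely illustrative sketch of the kind of surface-layer kinetics described above, the following minimal Python script integrates a Langmuir-Hinshelwood-type scheme (competitive adsorption of O3 followed by surface reaction with a PAH). It is a toy model in the spirit of K2-SURF, not the model itself; all rate coefficients and concentrations are placeholder values.

# Toy surface-layer kinetics in the spirit of K2-SURF (illustrative only).
# States: theta = fractional surface coverage of adsorbed O3,
#         pah   = normalized surface concentration of a PAH.
import numpy as np
from scipy.integrate import solve_ivp

k_ads = 1.0e-3   # adsorption rate coefficient (placeholder, per unit gas level per s)
k_des = 1.0e-2   # desorption rate coefficient (placeholder, 1/s)
k_rxn = 5.0e-2   # surface reaction rate coefficient (placeholder, 1/s)
o3_gas = 50.0    # gas-phase O3 level, arbitrary units (placeholder)

def rhs(t, y):
    theta, pah = y
    adsorption = k_ads * o3_gas * (1.0 - theta)   # Langmuir adsorption onto free sites
    desorption = k_des * theta
    reaction   = k_rxn * theta * pah              # Langmuir-Hinshelwood surface reaction
    return [adsorption - desorption - reaction, -reaction]

sol = solve_ivp(rhs, (0.0, 3600.0), [0.0, 1.0], t_eval=np.linspace(0.0, 3600.0, 200))
print("PAH remaining after 1 h:", sol.y[1, -1])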
Abstract:
Hydrodynamics can be consistently formulated on surfaces of arbitrary co-dimension in a background space-time, providing the effective theory describing long-wavelength perturbations of black branes. When the co-dimension is non-zero, the system acquires fluid-elastic properties and constitutes what is called a fluid brane. Applying an effective action approach, the most general form of the free energy quadratic in the extrinsic curvature and extrinsic twist potential of stationary fluid brane configurations is constructed to second order in a derivative expansion. This construction generalizes the Helfrich-Canham bending energy for fluid membranes studied in theoretical biology to the case in which the fluid is rotating. It is found that stationary fluid brane configurations of co-dimension greater than one are characterized by three elastic response coefficients, three hydrodynamic response coefficients and one spin response coefficient. Moreover, the elastic degrees of freedom present in the system are coupled to the hydrodynamic degrees of freedom. For co-dimension one surfaces we find an eight-parameter family of stationary fluid branes. It is further shown that elastic and spin corrections to (non-)extremal brane effective actions can be accounted for by a multipole expansion of the stress-energy tensor, thereby establishing a relation between the different formalisms of Carter, Capovilla-Guven and Vasilic-Vojinovic, and between gravity and the effective description of stationary fluid branes. Finally, it is shown that the Young modulus found in the literature for black branes falls into the class predicted by this approach, a relation which is then used to make a proposal for the second-order effective action of stationary blackfolds and to find the corrected horizon angular velocity of thin black rings.
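For reference, the Helfrich-Canham bending energy mentioned above has, for a (non-rotating) fluid membrane, the standard form

\[
F[\Sigma] \;=\; \int_{\Sigma} dA \left[\, \sigma \;+\; \frac{\kappa}{2}\,(2H - c_0)^2 \;+\; \bar{\kappa}\, K \,\right],
\]

with surface tension \sigma, bending rigidity \kappa, spontaneous curvature c_0, Gaussian rigidity \bar{\kappa}, mean curvature H and Gaussian curvature K. The free energy constructed here generalizes this type of expression, quadratic in the extrinsic curvature, to stationary rotating fluid branes of arbitrary co-dimension, where the extrinsic twist potential also enters.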
Abstract:
This bipartite comparative study aims at inspecting the similarities and differences between the Jones and Stokes–Mueller formalisms when modeling polarized light propagation with numerical simulations of the Monte Carlo type. In this first part, we review the theoretical concepts that concern light propagation and detection with both pure and partially/totally unpolarized states. The latter case involving fluctuations, or “depolarizing effects,” is of special interest here: Jones and Stokes–Mueller are equally apt to model such effects and are expected to yield identical results. In a second, ensuing paper, empirical evidence is provided by means of numerical experiments, using both formalisms.
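Schematically, the two formalisms compared here act on different state representations: a pure (fully polarized) beam is described by a Jones vector and a 2x2 complex Jones matrix, while partially polarized light is described by a Stokes vector and a 4x4 real Mueller matrix,

\[
\mathbf{E}_{\text{out}} = \mathbf{J}\,\mathbf{E}_{\text{in}}, \quad \mathbf{J} \in \mathbb{C}^{2\times 2};
\qquad
\mathbf{S}_{\text{out}} = \mathbf{M}\,\mathbf{S}_{\text{in}}, \quad \mathbf{S} = (I, Q, U, V)^{\mathsf{T}}, \ \mathbf{M} \in \mathbb{R}^{4\times 4}.
\]

For a non-depolarizing element the two descriptions are equivalent, being related by \(\mathbf{M} = \mathbf{A}\,(\mathbf{J} \otimes \mathbf{J}^{*})\,\mathbf{A}^{-1}\) for a fixed 4x4 transformation matrix A; depolarizing effects are then recovered by incoherently averaging over an ensemble of such pure realizations, which is why the two formalisms are expected to yield identical results in the Monte Carlo setting.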
Abstract:
The main method of proving the Craig Interpolation Property (CIP) constructively uses cut-free sequent proof systems. Until now, however, no such method has been known for proving the CIP using more general sequent-like proof formalisms, such as hypersequents, nested sequents, and labelled sequents. In this paper, we start closing this gap by presenting an algorithm for proving the CIP for modal logics by induction on a nested-sequent derivation. This algorithm is applied to all the logics of the so-called modal cube.
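For reference, the property being established can be stated as follows: a logic L has the Craig Interpolation Property if, whenever A → B is provable in L, there exists an interpolant I such that

\[
\vdash_{L} A \rightarrow I, \qquad \vdash_{L} I \rightarrow B,
\]

and every propositional variable occurring in I occurs in both A and B. The constructive method referred to above extracts such an I by induction on a derivation of A → B, here a nested-sequent derivation rather than an ordinary cut-free sequent derivation.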
Abstract:
This PhD thesis addresses the design and implementation of signal processing applications using reconfigurable FPGA platforms. This kind of platform exhibits high logic capability, incorporates dedicated signal processing elements and provides a low-cost solution, which makes it ideal for the development of signal processing applications where intensive data processing is required in order to obtain high performance. However, the cost associated with hardware development on these platforms is high. While the increase in logic capacity of FPGA devices allows the development of complete systems, high-performance constraints require the optimization of operators at a very low level. In addition to the timing constraints imposed by these applications, area constraints related to the particular device also apply, which forces the designer to evaluate and verify a design among different implementation alternatives. The design and implementation cycle for these applications can become so long that new FPGA models with greater capacity and higher speed often appear before the system is completed, rendering the constraints that guided the design of the system useless. Different methods can be used to improve productivity when developing these applications and consequently shorten their design cycle. This thesis focuses on the reuse of hardware components previously designed and verified. Although conventional HDLs allow the reuse of components already defined, their specification can be improved in order to simplify the process of incorporating components into new designs. Thus, the first part of the thesis focuses on the specification of designs based on predefined components. This specification not only improves and simplifies the process of adding components to a description, but also seeks to improve the quality of the specified design by offering better configuration options and even the possibility of reporting features of the description itself. The reuse of an already described component largely depends on the information provided for its integration into a system. In this sense, conventional HDLs only provide, together with the component description, the input/output interface and a set of parameters for its configuration, while the remaining required information is usually supplied as external documentation. The second part of the thesis proposes a set of encapsulations whose purpose is to bundle, together with the component description itself, information that can be useful for its integration into other designs: implementation details, support for configuring the component, and even information on how to configure and connect the component to carry out a given function (an illustrative sketch of such a self-describing encapsulation is given after this abstract). Finally, the fast Fourier transform (FFT), a classical signal processing application, is chosen as a case study to illustrate both the proposed specification and the described encapsulation formalisms. The objective of the design is not only to show practical examples of the proposed specification, but also to obtain an implementation of a quality comparable to results reported in the literature. To this end, the design targets FPGA implementation, exploiting both general-purpose logic elements and the low-level specific elements available in these devices. Lastly, the specification of the resulting FFT is used to show how to incorporate into its interface information that assists in its selection and configuration from early phases of the design cycle.
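Purely as an illustration of the idea of a self-describing encapsulation (the thesis works with HDL descriptions; the Python names below are hypothetical and chosen only to show the kind of information bundled with a component description):

# Hypothetical illustration: a component description bundled with the
# integration metadata (interface, parameters, implementation hints) that
# conventional HDLs would leave to external documentation.
from dataclasses import dataclass, field

@dataclass
class Port:
    name: str
    width: int
    direction: str            # "in" or "out"

@dataclass
class ComponentWrapper:
    name: str
    ports: list[Port]
    parameters: dict[str, int] = field(default_factory=dict)          # e.g. FFT length, data width
    implementation_notes: dict[str, str] = field(default_factory=dict)  # area/latency hints, target device

fft = ComponentWrapper(
    name="fft_core",
    ports=[Port("x_in", 16, "in"), Port("x_out", 16, "out")],
    parameters={"N": 1024, "DATA_WIDTH": 16},
    implementation_notes={"latency": "assumed N/2 + log2(N) cycles (placeholder)",
                          "target": "FPGA DSP slices"},
)
print(fft.name, fft.parameters)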
Abstract:
The purpose of this thesis is to study the possibilities of performing, in Spanish, problem-solving tasks with knowledge-based systems. The first two chapters analyse the development of natural language processing techniques, with particular attention to the logical formalisms that have been used for language understanding. An assessment of the current state of the art of natural language processing systems is then presented. Finally, we introduce the main contribution of this work: Sirena, a system that allows the acquisition, understanding, retrieval and explanation of knowledge in Spanish with knowledge-based systems. Sirena handles a large, although simple, subset of Spanish, formalized by means of a logic grammar; the meaning of knowledge is based on logic, and the system has been implemented in the logic programming language Prolog II v2. Keywords: Logic Programming, Natural Language Understanding, Problem Solving, Logic Grammars, Computational Linguistics, Artificial Intelligence.
Abstract:
The purpose of this thesis is the automatic construction of ontologies from texts, within the area known as Ontology Learning. This discipline aims to automate the construction of domain models from structured or unstructured information sources, and originated at the turn of the millennium as a result of the exponential growth in the volume of information accessible on the Internet. Since most information on the web is presented in the form of text, automatic ontology learning has focused on the analysis of this type of source, drawing over the years on very diverse techniques from areas such as Information Retrieval, Information Extraction, Summarization and, in general, areas related to natural language processing. The main contribution of this thesis is that, unlike the majority of current techniques, the proposed method does not analyse the surface syntactic structure of the language but explores its deep semantic level. Its objective, therefore, is to infer the domain model from the way the meanings of sentences are articulated in natural language. Since the deep semantic level is language-independent, the method can operate in multilingual scenarios, where it is necessary to combine information from texts in different languages. To access this level of the language, the method uses the interlingua model. These formalisms, which originate in the area of machine translation, make it possible to represent the meaning of sentences independently of the language. In particular, UNL (Universal Networking Language) is used, considered the only standardized general-purpose interlingua. The approach used in this thesis continues previous work carried out both by its author and by the research group of which he is part, which studied how to use the interlingua model in the areas of multilingual information extraction and retrieval. Essentially, the procedure defined in the method tries to identify, in the UNL representation of the texts, certain regularities from which the pieces of the domain ontology can be deduced. Since UNL is a formalism based on semantic networks, these regularities take the form of graphs and are generalized into structures called linguistic patterns (an illustrative pattern-matching sketch is given after this abstract). On the other hand, UNL still preserves certain discourse cohesion mechanisms inherited from natural languages, such as anaphora. In order to increase the effectiveness of expression understanding, the method provides, as another significant contribution, an algorithm for pronominal anaphora resolution within the interlingua model, restricted to third-person personal pronouns whose antecedent is a proper noun. The proposed method rests on the definition of a formal framework, built by adapting certain definitions from graph theory and incorporating new ones, in order to accommodate the notions of UNL expression and linguistic pattern, as well as the pattern-matching operations that are the basis of the method's processes. Both the formal framework and all the processes defined by the method have been implemented in order to carry out the experimentation, which was applied to an article from the UNESCO EOLSS "Encyclopedia of Life Support Systems" collection.
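As an illustrative sketch of the pattern-matching operation described above, a UNL-like expression and a linguistic pattern can be treated as small directed labelled graphs and matched by labelled subgraph isomorphism. The relation and concept labels below are invented for the example and are not taken from the thesis.

# Toy example: match a "linguistic pattern" against a UNL-like semantic graph
# by labelled subgraph isomorphism. Labels are invented for illustration.
import networkx as nx
from networkx.algorithms import isomorphism

# UNL-like expression: agt(eat, cat), obj(eat, mouse)
expression = nx.DiGraph()
expression.add_edge("eat", "cat", relation="agt")
expression.add_edge("eat", "mouse", relation="obj")

# Pattern: some action with an agent and an object (node identities left free)
pattern = nx.DiGraph()
pattern.add_edge("ACTION", "AGENT", relation="agt")
pattern.add_edge("ACTION", "OBJECT", relation="obj")

matcher = isomorphism.DiGraphMatcher(
    expression, pattern,
    edge_match=lambda e1, e2: e1["relation"] == e2["relation"])

print(matcher.subgraph_is_isomorphic())        # True: the pattern fits the expression
for mapping in matcher.subgraph_isomorphisms_iter():
    print(mapping)                             # e.g. {'eat': 'ACTION', 'cat': 'AGENT', ...}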
Abstract:
We present an approach for assessing the significance of sequence and structure comparisons by using nearly identical statistical formalisms for both sequence and structure. Doing so involves an all-vs.-all comparison of protein domains [taken here from the Structural Classification of Proteins (scop) database] and then fitting a simple distribution function to the observed scores. By using this distribution, we can attach a statistical significance to each comparison score in the form of a P value, the probability that a better score would occur by chance. As expected, we find that the scores for sequence matching follow an extreme-value distribution. Moreover, the agreement between the P values that we derive from this distribution and those reported by standard programs (e.g., blast and fasta) validates our approach. Structure comparison scores also follow an extreme-value distribution when the statistics are expressed in terms of a structural alignment score (essentially the sum of reciprocated distances between aligned atoms minus gap penalties). We find that the traditional metric of structural similarity, the rms deviation in atom positions after fitting aligned atoms, follows a different distribution of scores and does not perform as well as the structural alignment score. Comparison of the sequence and structure statistics for pairs of proteins known to be distantly related shows that structural comparison is able to detect approximately twice as many distant relationships as sequence comparison at the same error rate. The comparison also indicates that there are very few pairs with significant similarity in terms of sequence but not structure, whereas many pairs have significant similarity in terms of structure but not sequence.
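A hedged sketch of the statistical procedure described above, fitting an extreme-value (Gumbel) distribution to background comparison scores and converting a new score into a P value; the scores below are synthetic placeholders, not data from the paper.

# Fit a Gumbel (extreme-value) distribution to background comparison scores
# and convert an observed score into a P value, the probability that a better
# score would occur by chance. Synthetic scores, for illustration only.
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(0)
background_scores = gumbel_r.rvs(loc=30.0, scale=5.0, size=10_000, random_state=rng)

loc, scale = gumbel_r.fit(background_scores)            # maximum-likelihood fit
observed = 55.0                                         # score of the comparison being assessed
p_value = gumbel_r.sf(observed, loc=loc, scale=scale)   # P(score > observed)
print(f"P = {p_value:.2e}")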
Abstract:
In this position paper we propose a consistent and unifying view of all those basic knowledge representation models that are based on the existence of two somehow opposite fuzzy concepts. A number of these basic models can be found in the fuzzy logic and multi-valued logic literature. Here it is claimed that it is the semantic relationship between two paired concepts that determines the emergence of different types of neutrality, namely indeterminacy, ambivalence and conflict, widely used under different frameworks (possibly under different names). We show the potential relevance of paired structures, generated from two paired concepts together with their associated neutrality, all of them modelled as fuzzy sets. In this way, paired structures can be viewed as a standard basic model from which different models arise. This unifying view should therefore allow a deeper analysis of the relationships between several existing knowledge representation formalisms, providing a basis from which more expressive models can later be developed.
Abstract:
Multibody System Dynamics has revolutionized Mechanical Engineering Design by using mathematical models to simulate and optimize the dynamic behaviour of a wide range of mechanical systems. These mathematical models not only provide valuable information about a system that could otherwise be obtained only by experiments with prototypes, but have also enabled the development of many model-based control systems. This work contributes to the dynamic modeling of multibody mechanical systems by developing a novel recursive modular methodology that unifies the main contributions of several Classical Mechanics formalisms. The motivation for proposing such a methodology is to encourage the implementation of computational routines for modeling complex multibody mechanical systems without depending on closed-source software and, consequently, to contribute to the teaching of Multibody System Dynamics at undergraduate and graduate levels. All the theoretical developments are based on and motivated by a critical literature review, leading to a general matrix form of the dynamic equations of motion of a multibody mechanical system (which can be expressed in terms of any set of variables adopted for the description of the motions performed by the system, even if such a set includes redundant variables) and to a general recursive methodology for obtaining mathematical models of complex systems given a set of equations describing the dynamics of each of its uncoupled subsystems and another set describing the constraints among these subsystems in the assembled system. This work also discusses the description of motion (using any possible set of motion variables and admitting any kind of constraint that can be expressed by an invariant) and the conditions for solving forward and inverse dynamics problems given a mathematical model of a multibody system. Finally, some examples of computational packages based on the novel methodology, along with some case studies, are presented, highlighting the contributions that can be achieved by using the proposed methodology.
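For context, a standard descriptor form of constrained multibody equations of motion that such a general matrix formulation encompasses (this is the textbook form, not necessarily the exact notation of the thesis) is

\[
\mathbf{M}(\mathbf{q})\,\ddot{\mathbf{q}} + \mathbf{c}(\mathbf{q},\dot{\mathbf{q}})
= \mathbf{f}(\mathbf{q},\dot{\mathbf{q}},t) + \mathbf{A}^{\mathsf{T}}(\mathbf{q})\,\boldsymbol{\lambda},
\qquad
\mathbf{A}(\mathbf{q})\,\ddot{\mathbf{q}} = \mathbf{b}(\mathbf{q},\dot{\mathbf{q}}),
\]

where q may be any (possibly redundant) set of motion variables, A collects the constraint equations, including those coupling subsystems in the assembled system, and \lambda are the associated constraint reactions.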
Abstract:
The objective of this work is to reuse the EPECDEP dependency treebank (BDT) to build a gold standard for the surface syntax of Basque. The basic step is a comparative study of the two formalisms applied to the same corpus: Constraint Grammar (CG) and Dependency Grammar (DP). As a result of this study we have established the linguistic criteria needed to derive the syntactic functions in CG style. These criteria have been implemented and evaluated; in 75% of the cases the syntactic functions are derived automatically to build the gold standard.
Abstract:
The paper presents a computational system based upon formal principles to run spatial models for environmental processes. The simulator is named SimuMap because it is typically used to simulate spatial processes over a mapped representation of terrain. A model is formally represented in SimuMap as a set of coupled sub-models. The paper considers the situation where spatial processes operate at different time levels but are still integrated. An example of such a situation commonly occurs in watershed hydrology, where overland flow and stream channel flow have very different flow rates but are highly related, as they are subject to the same terrain runoff processes. SimuMap is able to run a network of sub-models that express different time-space derivatives for water flow processes. Sub-models may be coded generically with a map algebra programming language that uses a surface data model. To address the problem of differing time levels in simulation, the paper: (i) reviews general approaches for numerical solvers, (ii) considers the constraints that need to be enforced to use more adaptive time steps in discrete-time specified simulations, and (iii) discusses the scaling of transfer rates in equations that use different time bases for time-space derivatives. A multistep scheme is proposed for SimuMap and is presented along with a description of its visual programming interface, its modelling formalisms and future plans.
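A minimal illustration of the multistep idea discussed above: a fast sub-model sub-cycled inside the time step of a slow sub-model, with transfer rates rescaled to the sub-step. The model, variable names and coefficients are invented for the example and are not SimuMap code.

# Toy sub-cycling of a fast process (channel flow) inside the time step of a
# slow process (overland flow). Purely illustrative; placeholder coefficients.
dt_slow = 3600.0          # s, time step of the slow sub-model
n_sub   = 60              # number of sub-steps for the fast sub-model
dt_fast = dt_slow / n_sub

overland = 100.0          # water stored in the slow compartment (arbitrary units)
channel  = 0.0            # water stored in the fast compartment
k_transfer = 1.0e-4       # 1/s, transfer rate from overland to channel
k_outflow  = 5.0e-3       # 1/s, outflow rate from the channel

for step in range(24):                           # 24 slow steps = 1 day
    transfer = k_transfer * overland * dt_slow   # flux evaluated on the slow time base
    overland -= transfer
    inflow_per_substep = transfer / n_sub        # rescale the transfer to the fast time base
    for _ in range(n_sub):                       # sub-cycle the fast dynamics
        channel += inflow_per_substep
        channel -= k_outflow * channel * dt_fast

print(f"overland={overland:.2f}, channel={channel:.2f}")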
Abstract:
Pyrin domain (PYD)-containing proteins are key components of pathways that regulate inflammation, apoptosis, and cytokine processing. Their importance is further evidenced by the consequences of mutations in these proteins that give rise to autoimmune and hyperinflammatory syndromes. PYDs, like other members of the death domain (DD) superfamily, are postulated to mediate homotypic interactions that assemble and regulate the activity of signaling complexes. However, PYDs are presently the least well characterized of all four DD subfamilies. Here we report the three-dimensional structure and dynamic properties of ASC2, a PYD-only protein that functions as a modulator of multidomain PYD-containing proteins involved in NF-κB and caspase-1 activation. ASC2 adopts a six-helix bundle structure with a prominent loop, comprising 13 amino acid residues, between helices two and three. This loop represents a divergent feature of PYDs from other domains with the DD fold. Detailed analysis of backbone 15N NMR relaxation data using both the Lipari-Szabo model-free and reduced spectral density function formalisms revealed no evidence of contiguous stretches of polypeptide chain with dramatically increased internal motion, except at the extreme N and C termini. Some mobility on the fast, picosecond to nanosecond timescale was seen in helix 3 and the preceding α2-α3 loop, in stark contrast to the complete disorder seen in the corresponding region of the NALP1 PYD. Our results suggest that extensive conformational flexibility in helix 3 and the α2-α3 loop is not a general feature of pyrin domains. Further, a transition from complete disorder to order of the α2-α3 loop upon binding, as suggested for NALP1, is unlikely to be a common attribute of pyrin domain interactions.
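For reference, the Lipari-Szabo model-free analysis mentioned above characterizes the internal motion of each backbone amide by a generalized order parameter S^2 and an effective internal correlation time \tau_e through the standard spectral density

\[
J(\omega) = \frac{2}{5}\left[ \frac{S^{2}\,\tau_{m}}{1+(\omega\tau_{m})^{2}}
+ \frac{(1-S^{2})\,\tau}{1+(\omega\tau)^{2}} \right],
\qquad \frac{1}{\tau} = \frac{1}{\tau_{m}} + \frac{1}{\tau_{e}},
\]

with \tau_m the overall rotational correlation time; the reduced spectral density approach instead estimates J(\omega) directly at a few frequencies from the measured relaxation rates.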