993 results for Relational complexity


Relevance: 20.00%

Abstract:

Directed hypergraphs have been employed in problems related to propositional logic, relational databases, computational linguistics and machine learning. They have also been used as an alternative to directed (bipartite) graphs to facilitate the study of the interactions between components of complex systems that cannot easily be modelled using binary relations alone; in this context, such representations are known as hyper-networks. A directed hypergraph is a generalization of a directed graph that is especially well suited to representing many-to-many relationships. While an edge in a directed graph defines a relation between two of its nodes, a hyperedge in a directed hypergraph defines a relation between two sets of its nodes. Strong connectivity is an equivalence relation that partitions the set of nodes of a directed hypergraph, each equivalence class being known as a strongly-connected component. Studying the strongly-connected components of a directed hypergraph can lead to a better understanding of the structure of such hypergraphs when they are of considerable size. For directed graphs, very efficient algorithms exist for computing the strongly-connected components of large graphs. Thanks to these algorithms, it has been shown that the structure of the WWW has a "bow-tie" shape, in which more than 70% of the nodes are distributed across three large sets, one of which is a strongly-connected component. This kind of structure has also been observed in complex networks in other areas, such as biology. Studies of a similar nature could not be carried out on directed hypergraphs because no algorithms existed for computing the strongly-connected components of such hypergraphs.
In this doctoral thesis, we have investigated how to compute the strongly-connected components of a directed hypergraph. Specifically, we have developed two algorithms for this problem, established their correctness and determined their computational complexity. Both algorithms have been evaluated empirically to compare their running times. For the evaluation, we produced a selection of randomly generated directed hypergraphs inspired by well-known random graph models such as Erdős-Rényi, Newman-Watts-Strogatz and Barabási-Albert. Several optimizations of both algorithms have been implemented and analysed in the thesis. In particular, collapsing the strongly-connected components of the directed graph obtained by removing certain complex hyperedges from the original directed hypergraph notably improves the running times of the algorithms on several of the hypergraphs used in the evaluation. Apart from the application examples mentioned above, directed hypergraphs have also been employed in the area of knowledge representation; in particular, they have been used to compute modules of ontologies. An ontology can be defined as a set of axioms that formally specifies a set of symbols and their relationships, while a module can be understood as a subset of the axioms of the ontology that captures all the knowledge the ontology stores about a specific set of symbols and their relationships. In the thesis we have focused only on modules computed using the technique of syntactic locality. Since ontologies can be very large, computing modules can facilitate the reuse and maintenance of those ontologies.
Analysing all possible modules of an ontology is, however, generally very costly, because the number of modules grows exponentially with the number of symbols and axioms of the ontology. Fortunately, the axioms of an ontology can be divided into partitions known as atoms. Each atom represents a maximal set of axioms that always appear together in a module. The atomic decomposition of an ontology is defined as a directed graph in which each node corresponds to an atom and each edge defines a dependency between a pair of atoms. In this thesis we introduce the concept of an "axiom dependency hypergraph", which generalizes the atomic decomposition of an ontology: a module of an ontology corresponds to a connected component in this kind of hypergraph, and an atom of an ontology to a strongly-connected component. We have adapted the implementation of our algorithms so that they also work with axiom dependency hypergraphs and can thus compute the atoms of an ontology. To demonstrate the viability of this idea, we have incorporated our algorithms into an application we developed for module extraction and atomic decomposition of ontologies, which we have called HyS, and we have studied its running times using a selection of well-known ontologies from the biomedical domain, most of them available through the NCBO BioPortal. The results of the evaluation show that the running times of HyS are considerably better than those of the fastest known applications. ABSTRACT Directed hypergraphs are an intuitive modelling formalism that has been used in problems related to propositional logic, relational databases, computational linguistics and machine learning.
Directed hypergraphs are also presented as an alternative to directed (bipartite) graphs to facilitate the study of the interactions between components of complex systems that cannot naturally be modelled as binary relations. In this context, they are known as hyper-networks. A directed hypergraph is a generalization of a directed graph suitable for representing many-to-many relationships. While an edge in a directed graph defines a relation between two nodes of the graph, a hyperedge in a directed hypergraph defines a relation between two sets of nodes. Strong connectivity is an equivalence relation that induces a partition of the set of nodes of a directed hypergraph into strongly-connected components. These components can be collapsed into single nodes; as a result, the size of the original hypergraph can be significantly reduced if the strongly-connected components have many nodes. This approach might contribute to a better understanding of how the nodes of a hypergraph are connected, in particular when the hypergraphs are large. In the case of directed graphs, there are efficient algorithms that can be used to compute the strongly-connected components of large graphs. For instance, it has been shown that the macroscopic structure of the World Wide Web can be represented as a "bow-tie" diagram in which more than 70% of the nodes are distributed into three large sets, one of which is a large strongly-connected component. This particular structure has also been observed in complex networks in other fields, such as biology. Similar studies could not be conducted for directed hypergraphs because no algorithm existed for computing the strongly-connected components of a hypergraph. In this thesis, we investigate ways to compute the strongly-connected components of directed hypergraphs. We present two new algorithms and we show their correctness and computational complexity. One of these algorithms is inspired by Tarjan's algorithm for directed graphs.
The second algorithm follows a simple approach to compute the strongly-connected components. This approach is based on the fact that two strongly-connected nodes of a graph reach exactly the same nodes; in other words, their sets of reachable nodes coincide. Both algorithms are empirically evaluated to compare their performance. To this end, we have produced a selection of random directed hypergraphs inspired by existing and well-known random graph models like Erdős-Rényi and Newman-Watts-Strogatz. Besides the application examples that we mentioned earlier, directed hypergraphs have also been employed in the field of knowledge representation. In particular, they have been used to compute the modules of an ontology. An ontology is defined as a collection of axioms that provides a formal specification of a set of terms and their relationships, and a module is a subset of an ontology that completely captures the meaning of certain terms as defined in the ontology. In particular, we focus on the modules computed using the notion of syntactic locality. As ontologies can be very large, the computation of modules facilitates the reuse and maintenance of these ontologies. Analysing all modules of an ontology, however, is in general not feasible, as the number of modules grows exponentially in the number of terms and axioms of the ontology. Nevertheless, the modules can be succinctly represented using the Atomic Decomposition of an ontology. Using this representation, an ontology can be partitioned into atoms, which are maximal sets of axioms that co-occur in every module. The Atomic Decomposition is then defined as a directed graph such that each node corresponds to an atom and each edge represents a dependency relation between two atoms. In this thesis, we introduce the notion of an axiom dependency hypergraph, which is a generalization of the atomic decomposition of an ontology.
A module in the ontology corresponds to a connected component in the hypergraph, and the atoms of the ontology to the strongly-connected components. We apply our algorithms for directed hypergraphs to axiom dependency hypergraphs and, in this manner, compute the atoms of an ontology. To demonstrate the viability of this approach, we have implemented the algorithms in the application HyS, which computes the modules of ontologies and calculates their atomic decomposition. In the thesis, we provide an experimental evaluation of HyS with a selection of large and prominent biomedical ontologies, most of which are available in the NCBO BioPortal. HyS outperforms state-of-the-art implementations in the tasks of extracting modules and computing the atomic decomposition of these ontologies.
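The simple reachability-based criterion described in this abstract (strongly-connected nodes reach the same set of nodes) can be sketched as follows. The hyperedge representation (pairs of tail and head tuples) and the forward-chaining reachability rule are illustrative assumptions for this sketch, not the thesis's actual data structures or algorithms:

```python
from collections import defaultdict

def b_reachable(sources, hyperedges):
    """Nodes reachable from `sources` under a simple forward-chaining rule:
    a hyperedge (tail, head) fires once all of its tail nodes are reached
    (one common reachability notion for directed hypergraphs; assumed here)."""
    reached = set(sources)
    changed = True
    while changed:
        changed = False
        for tail, head in hyperedges:
            if set(tail) <= reached and not set(head) <= reached:
                reached |= set(head)
                changed = True
    return reached

def strongly_connected_components(nodes, hyperedges):
    """Group nodes whose reachable sets coincide -- the simple criterion
    mentioned in the abstract (reachability is reflexive here)."""
    groups = defaultdict(set)
    for v in nodes:
        groups[frozenset(b_reachable({v}, hyperedges))].add(v)
    return list(groups.values())
```

Collapsing each returned component into a single node would then shrink the hypergraph, as the abstract describes.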

Relevance: 20.00%

Abstract:

PURPOSE The decision-making process plays a key role in organizations. Every decision-making process produces a final choice that may or may not prompt action. Recurrently, decision makers face the dichotomous question of following a traditional sequential decision-making process, where the output of one decision is used as the input of the next stage, or following a joint decision-making approach where several decisions are taken simultaneously. The implications of the decision-making process will impact different players in the organization, and the choice of decision-making approach remains difficult to make, even with the current literature and practitioners' knowledge. The pursuit of better ways of making decisions has been a common goal for academics and practitioners. Management scientists use different techniques and approaches to improve different types of decisions; the purpose is to use the available resources (data and techniques) as well as possible to achieve the objectives of the organization. Developing and applying models and concepts may help to solve the managerial problems faced every day in different companies. As a result of this research, different decision models are presented to contribute to the body of knowledge of management science. The first models focus on the manufacturing industry and the second part of the models on the health care industry. Although these models are case-specific, they serve to exemplify that different approaches to the same problems can provide interesting results. Unfortunately, there is no universal recipe that can be applied to all problems; furthermore, the same model can deliver good results with certain data and bad results with other data. A framework to analyse the data before selecting the model to be used is presented and tested on the models developed to exemplify these ideas.
METHODOLOGY As the first step of the research, a systematic literature review on joint decision making is presented, along with the opinions and suggestions of different scholars. In the next stage of the thesis, the decision-making processes of more than 50 companies from different sectors were analysed in the production planning area at the job-shop level. The data were obtained using surveys and face-to-face interviews. The following part of the research into the decision-making process was conducted in two application fields that are highly relevant for our society: manufacturing and health care. The first step was to study the interactions and develop a mathematical model for the replenishment of the car assembly line, where the vehicle routing problem and the inventory problem were combined. The next step was to add the scheduling of car production (car sequencing) decision and to use metaheuristics such as ant colony optimization and genetic algorithms to measure whether the behaviour holds for problems of different sizes. A similar approach is presented for the production of semiconductors and aviation parts, where a hoist has to move from one station to another to carry out the work and a job schedule has to be produced; for this problem, however, simulation was used for experimentation. In parallel, the scheduling of operating rooms was studied: surgeries were allocated to surgeons, and the scheduling of the operating rooms was analysed. The first part of this research was done in a teaching hospital, and in the second part the interaction of uncertainty was added. Once the previous problem had been analysed, a general framework to characterize the instances was built. In the final chapter a general conclusion is presented. FINDINGS AND PRACTICAL IMPLICATIONS The first part of the contributions is an update of the decision-making literature review. An analysis of the possible savings resulting from a change in the decision process is also made.
Then come the results of the survey, which reveal a lack of consistency between what managers believe and the actual degree of integration of their decisions. In the next stage of the thesis, a contribution is made to the body of knowledge of operations research with the joint solution of the replenishment, sequencing and inventory problem on the assembly line, together with parallel work on operating room scheduling, in which different solution approaches are presented. Beyond the contribution of the solution methods, which use different techniques, the main contribution is the framework proposed to pre-evaluate a problem before thinking about the techniques with which to solve it. However, there is no straightforward answer as to whether joint or sequential solutions are better. Following the proposed framework, evaluating factors such as the flexibility of the answer, the number of actors and the tightness of the data gives important hints as to the most suitable direction in which to tackle the problem. RESEARCH LIMITATIONS AND AVENUES FOR FUTURE RESEARCH In the first part of the work it was very difficult to calculate the possible savings of the different projects, since many papers do not report these quantities or base the impact on non-quantifiable benefits. Another issue is the confidentiality of many projects, whose data cannot be presented. For the car assembly line problem, more computational power would allow bigger instances to be solved. For the operating room problem there was a lack of historical data with which to perform a parallel analysis in the teaching hospital. In order to keep testing the decision framework it is necessary to apply it to more case studies, so as to generalize the results and make them more evident and less ambiguous.
The health care field offers great opportunities: despite the recent awareness of the need to improve the decision-making process, there are still many opportunities for improvement. Another big difference from the automotive industry is that the latest improvements are not spread among all the actors. Therefore, in the future this research will focus more on collaboration between academia and the health care sector.

Relevance: 20.00%

Abstract:

The role of matter has remained central to the making and the thinking of architecture, yet many attempts to capture its essence have been trapped in a dialectic tension between form and materiality, between material consistency and immaterial modes of perception, between static states and dynamic processes, between the real and the virtual, thus advancing an increasing awareness of the perplexing complexity of the material world. Within that complexity, the notion of agency – emerging from and within ecological, politico-economic and socio-cultural processes – calls for a reconceptualization of matter, and consequently processes of materialisation, offering a new understanding of context and space, approached as a field of dynamic relationships. In this context, cutting across boundaries between architectural discourse and practice as well as across the vast trans-disciplinary territory, this dissertation aims to illustrate a variety of design methodologies that have derived from the relational approach. More specifically, the intention is to offer new insights into spatial epistemologies embedded within the notion of atmosphere – commonly associated with the so-called New Phenomenology – and to reflect upon its implications for architectural production. In what follows, the intended argumentation has a twofold dimension. First, through a scrutiny of the notion of atmosphere, the aim is to explore ways of thinking and shaping reality through relations, thus acknowledging the aforementioned complexity of the material universe disclosed through human and non-human as well as material and immaterial forces. Secondly, despite the fact that concerns for atmospherics have flourished over the last few decades, the objective is to reveal that the conceptual foundations and procedures for the production of atmosphere might be found beneath the surface of contemporary debates. 
Hence, in order to unfold and illustrate previously advocated assumptions, an archaeological approach is adopted, tracing a particular projective genealogy, one that builds upon an atmospheric awareness. Accordingly, in tracing such an atmospheric awareness the study explores the notoriously ambiguous nature and the twofold dimension of atmosphere – meteorological and aesthetic – and the heterogeneity of meanings embedded in them. In this context, the notion of atmosphere is presented as parallactic. It transgresses the formal and material boundaries of bodies. It calls for a reevaluation of perceptual experience, opening a new gap that exposes the orthodox space-body-environment relationships to questioning. It offers to architecture an expanded domain in which to manifest itself, defining architectural space as a contingent construction and field of engagement, and presenting matter as a locus of production/performance/action. Consequently, it is such an expanded or relational dimension that constitutes the foundation of what in the context of this study is to be referred to as affective tectonics. Namely, a tectonics that represents processual and experiential multiplicity of convergent time and space, body and environment, the material and the immaterial; a tectonics in which matter neither appears as an inert and passive substance, nor is limited to the traditionally regarded tectonic significance or expressive values, but is presented as an active element charged with inherent potential and vitality. By defining such a relational materialism, the intention is to expand the spectrum of material attributes revealing the intrinsic relationships between the physical properties of materials and their performative, transformative and affective capacities, including effects of interference and haptic dynamics – i.e. protocols of transmission and interaction.
The expression that encapsulates its essence is: ACTIVE MATERIALITY.

Relevance: 20.00%

Abstract:

The spatial complexity of the distribution of organic matter, chemicals, nutrients and pollutants has been shown to be multifractal in nature (Kravchenco et al. [1]). This fact supports the possibility of the existence of an emergent heterogeneity structure built up during the evolution of the system. The aim of this note is to provide a consistent explanation of these results via an extremely simple model.
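The note above attributes the observed heterogeneity to an extremely simple model. The classic binomial multiplicative cascade is the textbook example of a minimal process that generates a multifractal measure, and is sketched here purely as an illustration; it is not necessarily the model the note itself proposes:

```python
def binomial_cascade(p=0.7, levels=10):
    """Binomial multiplicative cascade: at each level every interval splits
    in two, receiving fractions p and 1-p of its mass. The limiting measure
    is a standard example of a multifractal distribution."""
    masses = [1.0]
    for _ in range(levels):
        masses = [m * f for m in masses for f in (p, 1.0 - p)]
    return masses
```

After a few levels the mass is spread very unevenly across intervals, the hallmark of the multifractal heterogeneity the note refers to.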

Relevance: 20.00%

Abstract:

Increasingly, studies of genes and genomes are indicating that considerable horizontal transfer has occurred between prokaryotes. Extensive horizontal transfer has occurred for operational genes (those involved in housekeeping), whereas informational genes (those involved in transcription, translation, and related processes) are seldom horizontally transferred. Through phylogenetic analysis of six complete prokaryotic genomes and the identification of 312 sets of orthologous genes present in all six genomes, we tested two theories describing the temporal flow of horizontal transfer. We show that operational genes have been horizontally transferred continuously since the divergence of the prokaryotes, rather than having been exchanged in one, or a few, massive events that occurred early in the evolution of prokaryotes. In agreement with earlier studies, we found that differences in rates of evolution between operational and informational genes are minimal, suggesting that factors other than rate of evolution are responsible for the observed differences in horizontal transfer. We propose that a major factor in the more frequent horizontal transfer of operational genes is that informational genes are typically members of large, complex systems, whereas operational genes are not, thereby making horizontal transfer of informational gene products less probable (the complexity hypothesis).

Relevance: 20.00%

Abstract:

Date of Acceptance: 5/04/2015. 15 pages, 4 figures.

Relevance: 20.00%

Abstract:

The saliva of blood-sucking arthropods contains powerful pharmacologically active substances and may be a vaccine target against some vector-borne diseases. Subtractive cloning combined with biochemical approaches was used to discover activities in the salivary glands of the hematophagous fly Lutzomyia longipalpis. Sequences of nine full-length cDNA clones were obtained, five of which are possibly associated with blood-meal acquisition, each having cDNA similarity to: (i) the bed bug Cimex lectularius apyrase, (ii) a 5′-nucleotidase/phosphodiesterase, (iii) a hyaluronidase, (iv) a protein containing a carbohydrate-recognition domain (CRD), and (v) an RGD-containing peptide with no significant matches to known proteins in the BLAST databases. Following these findings, we observed that the salivary apyrase activity of L. longipalpis is indeed similar to that of Cimex apyrase in its metal requirements. The predicted isoelectric point of the putative apyrase matches the value found for Lutzomyia salivary apyrase. A 5′-nucleotidase, as well as hyaluronidase activity, was found in the salivary glands, and the CRD-containing cDNA matches the N-terminal sequence of the HPLC-purified salivary anticlotting protein. A cDNA similar to α-amylase was discovered, and salivary enzymatic activity was demonstrated for the first time in a blood-sucking arthropod. Full-length clones were also found coding for three proteins of unknown function matching, respectively, the N-terminal sequence of an abundant salivary protein, having similarity to the CAP superfamily of proteins and the Drosophila yellow protein. Finally, two partial sequences are reported that match possible housekeeping genes. Subtractive cloning will considerably enhance efforts to unravel the salivary pharmacopeia of blood-sucking arthropods.

Relevance: 20.00%

Abstract:

Analysis of previously published sets of DNA microarray gene expression data by singular value decomposition has uncovered underlying patterns or “characteristic modes” in their temporal profiles. These patterns contribute unequally to the structure of the expression profiles. Moreover, the essential features of a given set of expression profiles are captured using just a small number of characteristic modes. This leads to the striking conclusion that the transcriptional response of a genome is orchestrated in a few fundamental patterns of gene expression change. These patterns are both simple and robust, dominating the alterations in expression of genes throughout the genome. Moreover, the characteristic modes of gene expression change in response to environmental perturbations are similar in such distant organisms as yeast and human cells. This analysis reveals simple regularities in the seemingly complex transcriptional transitions of diverse cells to new states, and these provide insights into the operation of the underlying genetic networks.
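The decomposition described above can be sketched with NumPy. The expression matrix here is random stand-in data with arbitrary gene and time-point dimensions, not data from the studies analysed:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in expression matrix: rows = genes, columns = time points.
X = rng.normal(size=(100, 8))

# Singular value decomposition: the rows of Vt play the role of the
# "characteristic modes" (temporal patterns), and the singular values
# weigh how much each mode contributes to the expression profiles.
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Cumulative fraction of the overall structure captured by the first k modes;
# in the analyses described above, a few modes capture most of it.
frac = np.cumsum(s ** 2) / np.sum(s ** 2)
```

For real expression data the curve `frac` rises steeply, which is what justifies keeping only a small number of characteristic modes.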

Relevance: 20.00%

Abstract:

GlycoSuiteDB is a relational database that curates information from the scientific literature on glycoprotein-derived glycan structures, their biological sources, the references in which the glycan was described and the methods used to determine the glycan structure. To date, the database includes most published O-linked oligosaccharides from the last 50 years and most N-linked oligosaccharides that were published in the 1990s. For each structure, information is available concerning the glycan type, linkage and anomeric configuration, mass and composition. Detailed information is also provided on native and recombinant sources, including tissue and/or cell type, cell line, strain and disease state. Where known, the proteins to which the glycan structures are attached are reported, and cross-references to the SWISS-PROT/TrEMBL protein sequence databases are given if applicable. The GlycoSuiteDB annotations include literature references which are linked to PubMed, and detailed information on the methods used to determine each glycan structure is noted to help the user assess the quality of the structural assignment. GlycoSuiteDB has a user-friendly web interface which allows the researcher to query the database using monoisotopic or average mass, monosaccharide composition, glycosylation linkage (e.g. N- or O-linked), reducing terminal sugar, attached protein, taxonomy, tissue or cell type and GlycoSuiteDB accession number. Advanced queries using combinations of these parameters are also possible. GlycoSuiteDB can be accessed on the web at http://www.glycosuite.com.
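The combined queries described above (mass within a tolerance plus exact-field filters) can be illustrated with a toy in-memory version. The record fields, accession numbers and values below are invented for illustration and do not reflect GlycoSuiteDB's actual schema or contents:

```python
# Hypothetical records mimicking the kinds of fields the database curates.
glycans = [
    {"accession": "G1", "monoisotopic_mass": 910.33, "linkage": "N-linked", "taxonomy": "Homo sapiens"},
    {"accession": "G2", "monoisotopic_mass": 1038.37, "linkage": "O-linked", "taxonomy": "Homo sapiens"},
    {"accession": "G3", "monoisotopic_mass": 910.31, "linkage": "N-linked", "taxonomy": "Mus musculus"},
]

def query(records, mass=None, tolerance=0.5, **fields):
    """Mimic an advanced query: mass +/- tolerance combined with
    exact matches on any other fields (linkage, taxonomy, ...)."""
    hits = []
    for r in records:
        if mass is not None and abs(r["monoisotopic_mass"] - mass) > tolerance:
            continue
        if any(r.get(k) != v for k, v in fields.items()):
            continue
        hits.append(r["accession"])
    return hits
```

A real deployment would express the same filters as an SQL query against the relational schema; the point here is only the combination of a tolerance search with exact-field constraints.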

Relevance: 20.00%

Abstract:

Genetic analysis of plant–pathogen interactions has demonstrated that resistance to infection is often determined by the interaction of dominant plant resistance (R) genes and dominant pathogen-encoded avirulence (Avr) genes. It was postulated that R genes encode receptors for Avr determinants. A large number of R genes and their cognate Avr genes have now been analyzed at the molecular level. R gene loci are extremely polymorphic, particularly in sequences encoding amino acids of the leucine-rich repeat motif. A major challenge is to determine how Avr perception by R proteins triggers the plant defense response. Mutational analysis has identified several genes required for the function of specific R proteins. Here we report the identification of Rcr3, a tomato gene required specifically for Cf-2-mediated resistance. We propose that Avr products interact with host proteins to promote disease, and that R proteins “guard” these host components and initiate Avr-dependent plant defense responses.

Relevance: 20.00%

Abstract:

The current phylogenetic hypothesis for the evolution and biogeography of fiddler crabs relies on the assumption that complex behavioral traits are also evolutionarily derived. Indo-west Pacific fiddler crabs have simpler reproductive social behavior, are more marine, and were thought to be ancestral to the more behaviorally complex and more terrestrial American species. It was also hypothesized that the evolution of more complex social and reproductive behavior was associated with the colonization of the higher intertidal zones. Our phylogenetic analysis, based upon a set of independent molecular characters, however, demonstrates how widely entrenched ideas about evolution and biogeography led to a reasonable, but apparently incorrect, conclusion about the evolutionary trends within this pantropical group of crustaceans. Species bearing the set of "derived traits" are phylogenetically ancestral, suggesting an alternative evolutionary scenario: reproductive behavioral complexity in fiddler crabs may have arisen multiple times during their evolution, by co-opting a series of other adaptations for high intertidal living and antipredator escape. A calibration of rates of molecular evolution from populations on either side of the Isthmus of Panama suggests a sequence divergence rate for 16S rRNA of 0.9% per million years. The divergence between the ancestral clade and the derived forms is estimated to be approximately 22 million years ago, whereas the divergence between the American and Indo-west Pacific clades is estimated to be approximately 17 million years ago.
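The calibration arithmetic behind these dates can be made explicit: time is estimated as observed sequence divergence divided by the calibrated rate. The divergence percentages below are back-computed from the reported rate and dates, so they are illustrative stand-ins rather than the study's measured values:

```python
def divergence_time_myr(percent_divergence, rate_percent_per_myr=0.9):
    """Estimated divergence time (Myr) = sequence divergence / calibrated rate."""
    return percent_divergence / rate_percent_per_myr

# With the reported 0.9% per Myr rate, ~19.8% divergence corresponds to
# ~22 Myr and ~15.3% to ~17 Myr (percentages chosen to match the dates).
t_ancestral = divergence_time_myr(19.8)
t_pacific = divergence_time_myr(15.3)
```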

Relevance: 20.00%

Abstract:

We summarize studies of earthquake fault models that give rise to slip complexities like those in natural earthquakes. For models of smooth faults between elastically deformable continua, it is critical that the friction laws involve a characteristic distance for slip weakening or evolution of surface state. That results in a finite nucleation size, or coherent slip patch size, h*. Models of smooth faults, using a numerical cell size properly small compared to h*, show periodic response or complex and apparently chaotic histories of large events, but have not been found to show small-event complexity like the self-similar (power-law) Gutenberg-Richter frequency-size statistics. This conclusion is supported in the present paper by fully inertial elastodynamic modeling of earthquake sequences. In contrast, some models of locally heterogeneous faults with quasi-independent fault segments, represented approximately by simulations with cell size larger than h* so that the model becomes "inherently discrete," do show small-event complexity of the Gutenberg-Richter type. Models based on classical friction laws without a weakening length scale, or for which the numerical procedure imposes an abrupt strength drop at the onset of slip, have h* = 0 and hence always fall into the inherently discrete class. We suggest that the small-event complexity that some such models show will not survive regularization of the constitutive description, by inclusion of an appropriate length scale leading to a finite h*, and a corresponding reduction of numerical grid size.
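The Gutenberg-Richter frequency-size statistics referred to above take the form log10 N(>=M) = a - bM, i.e. magnitudes above a completeness threshold are exponentially distributed. A minimal sketch of recovering the b-value from a synthetic catalogue with the standard Aki maximum-likelihood estimator; the catalogue, completeness magnitude and true b-value here are made up for illustration:

```python
import math
import random

random.seed(1)
m_c = 2.0     # assumed completeness magnitude of the catalogue
b_true = 1.0  # assumed true Gutenberg-Richter b-value

# Under Gutenberg-Richter, magnitudes above m_c are exponentially
# distributed with rate beta = b * ln(10).
beta = b_true * math.log(10)
mags = [m_c + random.expovariate(beta) for _ in range(100_000)]

# Aki (1965) maximum-likelihood estimator of the b-value.
b_hat = math.log10(math.e) / (sum(mags) / len(mags) - m_c)
```

Checking whether a simulated event catalogue yields a stable b-value of this kind is one way to test for the self-similar small-event complexity the abstract discusses.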