965 results for Maximum-entropy probability density
Abstract:
Learning the structure of a graphical model from data is a common task in a wide range of practical applications. In this paper, we focus on Gaussian Bayesian networks, i.e., on continuous data and directed acyclic graphs with a joint probability density of all variables given by a Gaussian. We propose to work in an equivalence-class search space, specifically using the k-greedy equivalence search algorithm. This, combined with regularization techniques to guide the structure search, can learn sparse networks close to the one that generated the data. We provide results on some synthetic networks and on modeling the gene network of the two biological pathways regulating the biosynthesis of isoprenoids in the plant Arabidopsis thaliana.
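Where a sketch may help: score-based searches like the one described rank candidate structures with a decomposable, regularized score. A minimal Python illustration (ours, not the paper's code; the BIC penalty and the toy two-variable example are assumptions):

    # Minimal sketch of the BIC score a greedy equivalence search would
    # maximize for a Gaussian Bayesian network; illustrative only.
    import numpy as np

    def local_bic(data, child, parents):
        """BIC of one node given its parent set, under a linear Gaussian model."""
        n = data.shape[0]
        y = data[:, child]
        X = np.column_stack([np.ones(n)] + [data[:, p] for p in parents])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        sigma2 = resid @ resid / n
        loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
        k = len(parents) + 2            # coefficients + intercept + variance
        return loglik - 0.5 * k * np.log(n)

    def dag_score(data, parent_sets):
        """Decomposable score of a DAG given as {child: [parents]}."""
        return sum(local_bic(data, c, ps) for c, ps in parent_sets.items())

    # toy usage: the true structure X0 -> X1 should outscore the empty graph
    rng = np.random.default_rng(0)
    x0 = rng.normal(size=500)
    x1 = 2.0 * x0 + rng.normal(size=500)
    data = np.column_stack([x0, x1])
    print(dag_score(data, {0: [], 1: [0]}) > dag_score(data, {0: [], 1: []}))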
Abstract:
This paper deals with the detection and tracking of an unknown number of targets using a Bayesian hierarchical model with target labels. To approximate the posterior probability density function, we develop a two-layer particle filter: one layer deals with track initiation and the other with track maintenance. In addition, the parallel partition method is proposed to sample the states of the surviving targets.
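As a concrete reference for the filtering machinery, here is a minimal sequential importance resampling (SIR) step, the generic building block that such layered particle filters elaborate on; the 1D random-walk motion model and Gaussian likelihood are our stand-ins, not the paper's model:

    import numpy as np

    rng = np.random.default_rng(1)
    N = 1000
    particles = rng.normal(0.0, 1.0, size=N)   # prior samples of target state
    weights = np.full(N, 1.0 / N)

    def sir_step(particles, weights, z, q=0.5, r=1.0):
        # 1) propagate particles through the motion model
        particles = particles + rng.normal(0.0, q, size=particles.size)
        # 2) reweight by the measurement likelihood p(z | x)
        weights = weights * np.exp(-0.5 * (z - particles) ** 2 / r**2)
        weights /= weights.sum()
        # 3) resample to fight weight degeneracy
        idx = rng.choice(particles.size, size=particles.size, p=weights)
        return particles[idx], np.full(particles.size, 1.0 / particles.size)

    particles, weights = sir_step(particles, weights, z=0.8)
    print(particles.mean())   # posterior mean estimate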
Abstract:
The selection of predefined analytic grids (partitions of the numeric ranges) to represent input and output functions as histograms has been proposed as a mechanism of approximation in order to control the tradeoff between accuracy and computation times in several areas ranging from simulation to constraint solving. In particular, the application of interval methods for probabilistic function characterization has been shown to have advantages over other methods based on the simulation of random samples. However, standard interval arithmetic has always been used for the computation steps. In this paper, we introduce an alternative approximate arithmetic aimed at controlling the cost of the interval operations. Its distinctive feature is that grids are taken into account by the operators. We apply the technique in the context of probability density functions in order to improve the accuracy of the probability estimates. Results show that this approach has advantages over existing approaches in some particular situations, although computation times tend to increase significantly when analyzing large functions.
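A toy sketch of the grid-aware idea (the outward-snapping rule is our illustration, not the paper's operator definition):

    import math

    def snap(lo, hi, step):
        """Round an interval outward to the analytic grid of width `step`."""
        return (math.floor(lo / step) * step, math.ceil(hi / step) * step)

    def add(a, b, step):
        # ordinary interval addition, then snap bounds back onto the grid
        lo, hi = a[0] + b[0], a[1] + b[1]
        return snap(lo, hi, step)

    print(add((0.12, 0.37), (1.01, 1.18), step=0.25))  # -> (1.0, 1.75)

Snapping outward keeps the result a valid enclosure while bounding the number of distinct interval endpoints the later computation steps must handle.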
Abstract:
With a Bonner sphere spectrometer, the neutron spectrum is obtained through an unfolding procedure. Monte Carlo methods, Regularization, Parametrization, Least-squares, and Maximum Entropy are some of the techniques utilized for unfolding. In the last decade, methods based on Artificial Intelligence technology have been used. Approaches based on Genetic Algorithms and Artificial Neural Networks have been developed in order to overcome the drawbacks of previous techniques. Nevertheless, despite the advantages of Artificial Neural Networks, some drawbacks remain, mainly in the design process of the network, e.g., the optimal selection of the architectural and learning ANN parameters. In recent years, hybrid technologies combining Artificial Neural Networks and Genetic Algorithms have been utilized to address this issue. In this work, several ANN topologies were trained and tested, using both Artificial Neural Networks and Genetically Evolved Artificial Neural Networks, with the aim of unfolding neutron spectra from the count rates of a Bonner sphere spectrometer. A comparative study of both procedures has been carried out.
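A minimal sketch of the ANN-based unfolding setup (ours; the response matrix, training spectra, and network size are synthetic stand-ins, and the genetic evolution of the topology is not shown):

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(2)
    n_spheres, n_bins = 7, 31
    R = rng.uniform(0.0, 1.0, size=(n_spheres, n_bins))   # stand-in response matrix
    spectra = rng.dirichlet(np.ones(n_bins), size=2000)    # synthetic training spectra
    rates = spectra @ R.T                                  # forward-folded count rates

    # train the inverse map: count rates -> binned spectrum
    net = MLPRegressor(hidden_layer_sizes=(14,), max_iter=2000, random_state=0)
    net.fit(rates, spectra)
    test = rng.dirichlet(np.ones(n_bins))
    print(np.abs(net.predict((test @ R.T)[None, :]) - test).max())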
Abstract:
Tropospheric scintillation can become a significant impairment in satellite communication systems, especially in those with low fade margins. Moreover, fast amplitude fluctuations due to scintillation are even larger when rain is present on the propagation path. Few studies of scintillation during rain have been reported, and the statistical characterization is still not totally clear. This paper presents experimental results on the relationship between scintillation and rain attenuation obtained from slant-path attenuation measurements at 50 GHz. The study is focused on the probability density function (PDF) of various scintillation parameters. It is shown that scintillation intensity, measured as the standard deviation of the amplitude fluctuations, increases with rain attenuation; in the range 1-10 dB this relationship can be expressed by power-law or linear equations. The PDFs of scintillation intensity conditioned on a given rain attenuation level are lognormal, while the overall long-term PDF is well fitted by a generalized extreme value (GEV) distribution. The short-term PDFs of amplitude conditioned on a given intensity are normal, although skewness effects are observed for the strongest intensities. A procedure is given to derive numerically the overall PDF of scintillation amplitude using a combination of conditional PDFs and local statistics of rain attenuation.
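The procedure in the last sentence can be written compactly. A sketch of the composition, under the distributional findings stated above (zero-mean normal short-term amplitude, lognormal intensity conditioned on attenuation, and p(A) the local long-term statistics of rain attenuation):

    p(\chi) = \int_0^\infty \int_0^\infty \mathcal{N}(\chi;\, 0,\, \sigma^2)\; p_{\mathrm{LN}}(\sigma \mid A)\; p(A)\; d\sigma\; dA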
Abstract:
Many existing engineering works model the statistical characteristics of the entities under study as normal distributions. These models are eventually used for decision making, which in practice requires defining the classification region corresponding to the desired confidence level. Surprisingly, however, a great number of computer vision works using multidimensional normal models leave the confidence regions unspecified or fail to establish them correctly, due to misconceptions about the features of Gaussian functions or to wrong analogies with the unidimensional case. The resulting regions incur deviations that can be unacceptable in high-dimensional models. Here we provide a comprehensive derivation of the optimal confidence regions for multivariate normal distributions of arbitrary dimensionality. To this end, we first derive the condition for region optimality of general continuous multidimensional distributions, and then apply it to the widespread case of the normal probability density function. The obtained results are used to analyze the confidence error incurred by previous works related to vision research, showing that deviations caused by wrong regions may become unacceptable as dimensionality increases. To support the theoretical analysis, a quantitative example is given in the context of moving object detection by means of background modeling.
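The practical consequence can be captured in a few lines: the optimal region of a d-variate normal is the Mahalanobis ellipsoid thresholded at a chi-square quantile with d degrees of freedom. A sketch (ours) follows; note how the threshold grows with dimensionality, which is where unidimensional analogies break down:

    import numpy as np
    from scipy.stats import chi2

    def in_confidence_region(x, mean, cov, conf=0.99):
        d = len(mean)
        diff = x - mean
        m2 = diff @ np.linalg.solve(cov, diff)   # squared Mahalanobis distance
        return m2 <= chi2.ppf(conf, df=d)

    # the threshold grows with dimension, unlike any fixed "k sigma" rule
    for d in (1, 3, 10):
        print(d, chi2.ppf(0.99, df=d))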
Abstract:
Several authors have analysed the changes in the probability density function of solar radiation at different time resolutions. Others have studied the significance of these changes for produced-energy calculations. We have applied different transformations to four Spanish databases in order to clarify the interrelationship between radiation models and produced-energy estimations. Our contribution is straightforward: the complexity of the solar radiation model needed for yearly energy calculations is very low. Twelve values of monthly mean solar radiation are enough to estimate energy with errors below 3%. Time resolutions finer than hourly samples do not significantly improve the energy estimates.
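For reference, the kind of calculation the claim refers to is as simple as this sketch (the performance ratio, nominal power, and irradiation values are illustrative assumptions, not the paper's data):

    # Annual yield from twelve monthly means of daily irradiation,
    # using the standard E = P_nom * H * PR estimate; values are made up.
    days = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    monthly_mean_kwh_m2_day = [2.1, 2.9, 4.0, 5.1, 6.2, 6.9,
                               7.1, 6.4, 5.0, 3.5, 2.4, 1.9]
    pr, p_nom_kw = 0.78, 5.0   # performance ratio and nominal power, assumed
    h_year = sum(h * d for h, d in zip(monthly_mean_kwh_m2_day, days))
    print(round(h_year * pr * p_nom_kw))   # estimated annual energy in kWh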
Abstract:
Purpose: A fully three-dimensional (3D) massively parallelizable list-mode ordered-subsets expectation-maximization (LM-OSEM) reconstruction algorithm has been developed for high-resolution PET cameras. System response probabilities are calculated online from a set of parameters derived from Monte Carlo simulations. The shape of the system response for a given line of response (LOR) has been shown to be asymmetrical around the LOR. This work has focused on the development of efficient region-search techniques to sample the system response probabilities, which are suitable for asymmetric kernel models, including elliptical Gaussian models that allow for high accuracy and high parallelization efficiency. The novel region-search scheme using variable kernel models is applied in the proposed PET reconstruction algorithm. Methods: A novel region-search technique has been used to sample the probability density function in correspondence with a small dynamic subset of the field of view that constitutes the region of response (ROR). The ROR is identified around the LOR by searching for any voxel within a dynamically calculated contour. The contour condition is currently defined as a fixed threshold over the posterior probability, and arbitrary kernel models can be applied using a numerical approach. The processing of the LORs is distributed in batches among the available computing devices; individual LORs are then processed within different processing units. In this way, both multicore and multiple many-core processing units can be efficiently exploited. Tests have been conducted with probability models that take into account the noncollinearity, positron range, and crystal penetration effects, which produce tubes of response with varying elliptical sections whose axes are a function of the crystal's thickness and the angle of incidence of the given LOR. The algorithm treats the probability model as a 3D scalar field defined within a reference system aligned with the ideal LOR. Results: This new technique provides superior image quality in terms of signal-to-noise ratio compared with the histogram-mode method based on precomputed system matrices available for a commercial small-animal scanner. Reconstruction times can be kept low with the use of multicore and many-core architectures, including multiple graphics processing units. Conclusions: A highly parallelizable LM reconstruction method has been proposed based on Monte Carlo simulations and new parallelization techniques aimed at improving the reconstruction speed and the image signal-to-noise ratio of a given OSEM algorithm. The method has been validated using simulated and real phantoms. A special advantage of the new method is the possibility of dynamically defining the cut-off threshold over the calculated probabilities, thus allowing direct control of the trade-off between speed and quality during the reconstruction.
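A sketch (ours) of the ROR-selection idea described in Methods, with an isotropic Gaussian standing in for the paper's elliptical kernels:

    import numpy as np

    def ror_voxels(grid_pts, p0, direction, sigma_t, threshold):
        """grid_pts: (N,3) voxel centres; p0/direction define the ideal LOR."""
        d = direction / np.linalg.norm(direction)
        rel = grid_pts - p0
        # transverse offset of each voxel from the LOR axis
        t = rel - np.outer(rel @ d, d)
        prob = np.exp(-0.5 * np.sum(t**2, axis=1) / sigma_t**2)  # isotropic kernel
        return grid_pts[prob > threshold]   # voxels inside the dynamic contour

    xs = np.linspace(-1, 1, 21)
    grid = np.array(np.meshgrid(xs, xs, xs)).reshape(3, -1).T
    print(len(ror_voxels(grid, np.zeros(3), np.array([0., 0., 1.]), 0.1, 0.05)))

Raising the threshold shrinks the ROR, which is the speed/quality knob the Conclusions refer to.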
Abstract:
The classical theory of intermittency developed for return maps assumes a uniform density of points reinjected from the chaotic to the laminar region. Although it works well in some model systems, there exist a number of so-called pathological cases characterized by a significant deviation of the main characteristics from the values predicted on the basis of the uniform distribution. Recently, we reported on how the reinjection probability density (RPD) can be generalized. Here, we extend this methodology and apply it to different dynamical systems exhibiting anomalous type-II and type-III intermittencies. Estimation of the universal RPD is based on fitting a linear function to experimental data and requires no a priori knowledge of the underlying dynamical model. We provide a special fitting procedure that enables robust estimation of the RPD from relatively short data sets (dozens of points). Thus, the method is applicable to a wide variety of data sets, including numerical simulations and real-life experiments. The estimated RPD enables analytic evaluation of the length of the laminar phase of intermittent behaviors. We show that the method copes well with dynamical systems exhibiting significantly different statistics reported in the literature. We also derive and classify characteristic relations between the mean laminar length and the main controlling parameter, in perfect agreement with data provided by numerical simulations.
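A sketch of the linear-fit recipe as we understand it (our implementation; the power-law RPD form and the synthetic data are assumptions): for phi(x) ~ (x - x_hat)^alpha, the function M(x), the running mean of reinjections below x, is linear with slope m, and alpha = (2m - 1)/(1 - m).

    import numpy as np

    def estimate_alpha(reinjections):
        x = np.sort(reinjections)
        M = np.cumsum(x) / np.arange(1, x.size + 1)   # running mean M(x_j)
        m = np.polyfit(x, M, 1)[0]                    # slope of the linear fit
        return (2 * m - 1) / (1 - m)

    rng = np.random.default_rng(3)
    alpha_true = 0.5
    # inverse-CDF sampling of phi(x) ~ x^alpha on [0, 1]
    samples = rng.uniform(size=200) ** (1 / (alpha_true + 1))
    print(estimate_alpha(samples))   # should be close to 0.5

Because the fit is a single straight line, the estimate stays robust even for the short data sets (dozens of points) mentioned above.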
Abstract:
In this dissertation, a new numerical method for solving Fluid-Structure Interaction (FSI) problems in a Lagrangian framework is developed, in which solids of different constitutive laws can suffer very large deformations and fluids are considered Newtonian and incompressible. To that end, we first introduce a meshless discretization based on local maximum-entropy interpolants. This allows a spatial domain to be discretized without tessellation, avoiding the limitations of meshes. The Stokes flow problem is then studied. The Galerkin meshless method based on a max-ent scheme suffers from instabilities for this problem, so stabilization techniques are discussed and analyzed. An unconditionally stable method is finally formulated based on a Douglas-Wang stabilization. Next, a Lagrangian expression for fluid mechanics is derived. This allows us to establish a common framework for fluid and solid domains, such that their interaction is naturally accounted for. The resulting equations also need stabilization, which is achieved with a technique analogous to that used for the Stokes problem. The fully Lagrangian framework for fluid/solid interaction is completed with simple point-to-point and point-to-surface contact algorithms. The method is finally validated, and some numerical examples show the potential scope of applications.
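For readers unfamiliar with the interpolants mentioned, here is a minimal 1D sketch of local maximum-entropy shape functions (our illustration of the general scheme; the node layout, beta, and the fixed Newton iteration count are assumptions):

    import numpy as np

    def lme_shape(x, nodes, beta, iters=50):
        """Shape functions maximizing entropy s.t. partition of unity
        and first-order consistency; lam found by Newton iteration."""
        dx = nodes - x
        lam = 0.0
        for _ in range(iters):
            w = np.exp(-beta * dx**2 + lam * dx)
            p = w / w.sum()
            r = p @ dx                 # residual of the consistency constraint
            J = p @ dx**2 - r**2       # its derivative w.r.t. lam (a variance)
            lam -= r / J               # Newton update
        return p

    nodes = np.linspace(0.0, 1.0, 6)
    p = lme_shape(0.37, nodes, beta=40.0)
    print(p.sum(), p @ nodes)   # ~1.0 and ~0.37: both constraints satisfied

The parameter beta controls locality: large beta recovers nearly nodal (Delaunay-like) supports, small beta spreads the functions out, all without any tessellation.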
Abstract:
Society can be defined as a complex system that emerges from the cooperation and coordination of billions of individuals and hundreds of countries. Thus, we do not live in a social vacuum: the social networks in which we are embedded inevitably shape our behavior. Here, we present an analytical model and several empirical studies in which we analyze dynamical social systems from a network-science perspective. First, we introduce a model to explore how the structure of the social networks underlying society can limit the meritocracy of its economies. As a counterpart to meritocracy, we introduce the term topocracy. We say that a system is topocratic if the compensation and power available to an individual are determined primarily by her position in a network. Our model is perfectly meritocratic for fully connected networks but becomes topocratic for sparse networks, like the ones in society. In the model, individuals produce and sell content, but also distribute the content produced by others when they belong to the shortest path connecting a buyer and a seller. The production and distribution of content define two channels of compensation: a meritocratic channel, where individuals are compensated for the content they produce, and a topocratic channel, where individual compensation is based on the number of shortest paths that go through them in the network. We solve the model analytically and show that the distribution of payoffs is meritocratic only if the average degree of the nodes is larger than a root of the total number of nodes. Hence, in the light of our model, the sparsity and structure of networks represent a fundamental constraint on the meritocracy of societies. Next, we present several empirical studies that use data gathered from Twitter to analyze online human behavioral patterns. In particular, we focus on political conversations such as electoral campaigns. We found that collective attention is distributed highly heterogeneously, with a minority of extremely influential accounts. In fact, the ability of individuals to propagate messages or ideas through the platform is constrained by the structure of the follower network underlying the social medium and by the position they occupy in it. Hence, although it has been argued that social media allow more voices to be heard, our results suggest that Twitter is highly topocratic, as only a minority of well-positioned users are widely heard. This minority of influential accounts belongs mostly to politicians and traditional media. Politicians tend to be the most mentioned, while the media are the sources of information from which people propagate messages. We also propose a methodology to study and measure the emergence of political polarization from social interactions. To this end, we first propose a model to estimate opinions in which a minority of influential individuals propagate their opinions through a social network. The result of the model is an opinion probability density function. Next, we propose an index to quantify the extent to which the resulting distribution is polarized. Finally, we illustrate our methodology by applying it to Twitter data. In a world where personal data are increasingly available, the results of the analytical model introduced in this work can be used to enhance meritocracy and promote policies that help build more meritocratic societies. Moreover, the results obtained in the latter part, where we analyzed Twitter, are key to understanding the new data-driven society that is emerging. In particular, we have presented relevant information that can be used to benchmark future models for online communication systems, or as empirical rules characterizing our online behavior.
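The qualitative claim about density can be probed in a few lines; this toy check (ours, using networkx betweenness as a proxy for the intermediation channel, not the thesis' exact payoff model) shows intermediation opportunities shrinking as the network densifies:

    import networkx as nx

    # total (normalized) betweenness counts shortest-path intermediation;
    # it falls as the average degree grows and paths become direct.
    for avg_degree in (2, 8, 64):
        n = 200
        g = nx.erdos_renyi_graph(n, avg_degree / (n - 1), seed=0)
        topocratic = sum(nx.betweenness_centrality(g).values())
        print(avg_degree, round(topocratic, 2))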
Abstract:
In this work, we formulate a theory to address simulations of slow transport effects in atomic systems. We first develop this theoretical framework in the context of equilibrium atomic ensembles, based on statistical mechanics. We then adapt it to model ensembles away from equilibrium. The theory stands on Jaynes' maximum-entropy principle, valid for the treatment of systems both in and out of equilibrium, and on mean-field approximation theory. It is expressed as a variational principle in the entropy formulation, in which a free entropy, rather than a free energy, is maximized. The formulation defines atomistic equivalents of macroscopic variables such as the temperature and the molar fractions, which are not required to be uniform but can vary from particle to particle, so that nonuniform macroscopic fields can be considered. We complement this theory with Monte Carlo quadrature rules that yield computable models. In addition, we provide a framework for studying transport processes, with the full set of equations driving the evolution of the system. We first derive a dissipation inequality for the entropic production involving discrete thermodynamic forces and fluxes. This discrete dissipation inequality identifies the adequate structure for discrete kinetic potentials, which couple the rates of the microscopic fields to the corresponding driving forces. These kinetic potentials must finally be completed with a phenomenological relation of the Onsager type. We present several validation cases, illustrating equilibrium properties and surface segregation of metallic alloys. We first assess the ability of a simple mean-field model to reproduce thermodynamic equilibrium properties in systems with atomic resolution. Then, we evaluate the ability of the model to reproduce transport processes in complex systems over times that are long with respect to the characteristic atomic time scales.
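A reference formula may help here: the variational core of the framework is Jaynes' maximum-entropy principle, stated below in its standard discrete form (without the mean-field and kinetic extensions that are specific to this thesis):

    \max_{p}\; S[p] = -k_B \sum_i p_i \log p_i
    \quad \text{subject to} \quad \sum_i p_i = 1, \qquad \sum_i p_i A_i = \langle A \rangle,

whose solution is the exponential family p_i = Z^{-1} e^{-\lambda A_i} with Z(\lambda) = \sum_i e^{-\lambda A_i}, the multiplier \lambda being fixed by the constraint on \langle A \rangle.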
Abstract:
The traditional concept of assembly rules for species communities reflects the idea that species do not co-occur at random but are restricted in their co-occurrence by interspecific competition or by an environmental filter. In this thesis, I addressed the importance of these processes in the assembly of plant communities in the dry forests of southern Ecuador. The study was conducted in the Tumbesian biogeographic region, which holds the largest concentration of well-conserved tropical dry forests of southern Ecuador and is recognized as one of the most important areas of endemism in the world. The climate is characterized by a dry season from May to December and a rainy season from January to April; the annual temperature varies between 20 °C and 26 °C, and the average annual rainfall is between 300 and 700 mm. I first assessed whether the distribution of functional traits at the community level is compatible with the existence of an environmental filter (imposed by the habitat) or with a limitation on functional similarity imposed by interspecific competition. This analysis was conducted for 58 species of woody plants spread over 109 plots of 10 x 50 m. Specifically, I analyzed the distribution of the values of five functional traits (maximum height, wood density, specific leaf area, leaf size, and seed mass), summarized by several statistics (range, variance, kurtosis, and the standard deviation of the distribution of functional distances to the nearest species), and compared it with the distribution expected under a null model without competition. The results support that both environmental filtering and a limitation on trait similarity affect the assembly of the plant communities of the Tumbesian dry forests. My second chapter evaluated whether functional diversity is conditioned by environmental gradients. In particular, I tested whether it decreases in the most stressful environments because of environmental filtering, or whether, on the contrary, it is greater in more benign environments where competition becomes more important (notwithstanding possible modifications to this general pattern due to facilitation). To address this question, I analyzed the variation of both functional diversity (for the five traits used in the first chapter) and phylogenetic diversity along a gradient of climatic stress in the Tumbesian forests, and contrasted the observed patterns against the diversity expected under a completely random null model of community assembly. Only the diversity of leaf sizes followed the expected pattern, decreasing as abiotic stress increased, while neither the remaining functional traits, nor multivariate functional diversity, nor phylogenetic diversity showed significant variation along the environmental gradient. The third chapter assessed whether the processes that organize the functional structure of the community operate at different spatial scales. To do this, I mapped all trees and shrubs of more than 5 cm in diameter within a 9-ha plot of dry forest and characterized each species functionally. The plot was divided into subplots of different sizes, yielding subplots at six different spatial scales. I found aggregation of similar functional strategies at small scales, which suggests the existence either of environmental filters acting at fine scales or of competitive processes that select a shared optimal strategy at those scales. Finally, with the same information from the permanent 9-ha plot, I evaluated the effect and behavior of individual species on the organization of taxonomic, functional, and phylogenetic diversity. The analysis used three spatial summary functions: ISAR for the taxonomic level, IFDAR for the functional level, and IPSVAR for the phylogenetic level; in each case, the observed pattern of diversity was contrasted against null models that describe the spatial distribution of individual species. At all the spatial scales considered for ISAR, IFDAR, and IPSVAR, most species behaved as neutral, i.e., they are surrounded by a diversity similar to that expected under the null model. However, some species appeared as accumulators of functional and phylogenetic diversity, suggesting their involvement in competitive processes that limit similarity. A small proportion of the species appeared as repellers of functional and phylogenetic diversity, suggesting their involvement in a process of habitat filtering. This study highlights how the analysis of alternative dimensions of biodiversity, such as functional and phylogenetic diversity, can help us understand the co-occurrence of species in community assemblages. All the results of this study provide new evidence on the assembly processes of seasonally dry forest communities and on how environmental variables and competition play an important role in structuring these communities.
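A sketch (ours) of the kind of null-model test used throughout the thesis, for the trait-range statistic (the species pool, plot composition, and number of randomizations are illustrative):

    import numpy as np

    def trait_range_pvalue(plot_traits, pool_traits, n_null=9999, seed=0):
        """Fraction of random communities with a trait range at most as
        narrow as observed; a small value suggests environmental filtering."""
        rng = np.random.default_rng(seed)
        obs = plot_traits.max() - plot_traits.min()
        null = np.array([np.ptp(rng.choice(pool_traits, plot_traits.size,
                                           replace=False))
                         for _ in range(n_null)])
        return (null <= obs).mean()

    pool = np.random.default_rng(1).lognormal(size=58)  # e.g. 58 species' SLA
    plot = np.sort(pool)[:12]                           # a strongly filtered community
    print(trait_range_pvalue(plot, pool))               # small -> filtering signal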
Abstract:
Patterns in sequences of amino acid hydrophobic free energies predict secondary structures in proteins. In protein folding, matches in hydrophobic free energy statistical wavelengths appear to contribute to selective aggregation of secondary structures in “hydrophobic zippers.” In a similar setting, the use of Fourier analysis to characterize the dominant statistical wavelengths of peptide ligands’ and receptor proteins’ hydrophobic modes to predict such matches has been limited by the aliasing and end effects of short peptide lengths, as well as the broad-band, mode multiplicity of many of their frequency (power) spectra. In addition, the sequence locations of the matching modes are lost in this transformation. We make new use of three techniques to address these difficulties: (i) eigenfunction construction from the linear decomposition of the lagged covariance matrices of the ligands and receptors as hydrophobic free energy sequences; (ii) maximum entropy, complex poles power spectra, which select the dominant modes of the hydrophobic free energy sequences or their eigenfunctions; and (iii) discrete, best bases, trigonometric wavelet transformations, which confirm the dominant spectral frequencies of the eigenfunctions and locate them as (absolute valued) moduli in the peptide or receptor sequence. The leading eigenfunction of the covariance matrix of a transmembrane receptor sequence locates the same transmembrane segments seen in n-block-averaged hydropathy plots while leaving the remaining hydrophobic modes unsmoothed and available for further analyses as secondary eigenfunctions. In these receptor eigenfunctions, we find a set of statistical wavelength matches between peptide ligands and their G-protein and tyrosine kinase coupled receptors, ranging across examples from 13.10 amino acids in acid fibroblast growth factor to 2.18 residues in corticotropin releasing factor. We find that the wavelet-located receptor modes in the extracellular loops are compatible with studies of receptor chimeric exchanges and point mutations. A nonbinding corticotropin-releasing factor receptor mutant is shown to have lost the signatory mode common to the normal receptor and its ligand. Hydrophobic free energy eigenfunctions and their transformations offer new quantitative physical homologies in database searches for peptide-receptor matches.
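A sketch (ours) of step (i), the eigenfunction construction from lagged covariance matrices, applied to synthetic data with an alpha-helix-like 3.6-residue period; the window length is an assumption:

    import numpy as np

    def lagged_cov_eigenfunctions(seq, window):
        n = len(seq) - window + 1
        X = np.array([seq[i:i + window] for i in range(n)])  # trajectory matrix
        C = (X.T @ X) / n                                    # lagged covariance
        vals, vecs = np.linalg.eigh(C)
        order = np.argsort(vals)[::-1]
        # project the sequence onto each eigenvector -> eigenfunction series
        return vals[order], X @ vecs[:, order]

    rng = np.random.default_rng(4)
    seq = np.sin(2 * np.pi * np.arange(120) / 3.6) + 0.3 * rng.normal(size=120)
    vals, funcs = lagged_cov_eigenfunctions(seq, window=20)
    print(vals[:3])   # the leading pair captures the ~3.6-residue mode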
Abstract:
We present simultaneous and continuous observations of the Hα, Hβ, He I D_3, Na I D_1, D_2 doublet and the Ca II H&K lines for the RS CVn system HR 1099. The spectroscopic observations were obtained during the MUSICOS 1998 campaign, involving several observatories and instruments, both echelle and long-slit spectrographs. During this campaign, HR 1099 was observed almost continuously for more than 8 orbits of 2.8 days each. Two large optical flares were observed, both showing an increase in the emission of Hα, Ca II H&K, Hβ and He I D_3 and a strong filling-in of the Na I D_1, D_2 doublet. Contemporaneous photometric observations were carried out with the robotic telescopes APT-80 of Catania and Phoenix-25 of Fairborn Observatories. Maps of the distribution of the spotted regions on the photosphere of the binary components were derived using the Maximum Entropy and Tikhonov photometric regularization criteria. Rotational modulation was observed in Hα and He I D_3, in anti-correlation with the photometric light curves. Both flares occurred at the same binary phase (0.85), suggesting that these events took place in the same active region. Simultaneous X-ray observations, performed by the ASM on board RXTE, show several flare-like events, some of which correlate well with the observed optical flares. Rotational modulation in the X-ray light curve has been detected, with minimum flux when the less active G5 V star was in front. A possible periodicity in the X-ray flare-like events was also found.