969 results for Probability Density Function
Abstract:
Habitat selection has been one of the main research topics in ecology for decades. Nevertheless, many aspects of habitat selection still need to be explored. In particular, previous studies have overlooked the importance of temporal variation in habitat selection and the value of including data on reproductive success in order to describe the best-quality habitat for a species. We used data collected from radiocollared wolves in Yellowstone National Park (USA) between 1996 and 2008 to describe wolf habitat selection. In particular, we aimed to identify i) seasonal differences in wolf habitat selection, ii) factors influencing interannual variation in habitat selection, and iii) the effect of habitat selection on wolf reproductive success. We used probability density functions to describe wolf habitat use and habitat coverages to represent the habitat available to wolves, and regression analysis to connect habitat use with habitat characteristics and habitat selection with reproductive success. Our most notable result was the strong interannual variability in wolf habitat selection. This variability was partly explained by pack identity and by between-year differences in litter size and pack leadership (summer) and in pack size and precipitation (winter). We also detected some seasonal differences. Wolves selected open habitats, intermediate elevations, and intermediate distances from roads, and avoided steep slopes in late winter. They selected areas close to roads and avoided steep slopes in summer. In early winter, wolves selected wetlands, herbaceous and shrub vegetation types, and areas at intermediate elevation and distance from roads. Surprisingly, the habitat characteristics selected by wolves were not useful in predicting reproductive success. We hypothesize that interannual variability in wolf habitat selection may be too strong for effects on reproductive success to be detected. Moreover, prey availability and competitor pressure, which we did not assess, may also influence wolf reproductive success. This project demonstrated how important temporal variation is in shaping patterns of habitat selection. We still believe in the value of running long-term studies, but the effect of temporal variation should always be taken into account.
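The abstract does not name a specific estimator; as a minimal sketch of the general approach (with hypothetical relocation coordinates and default bandwidth, not the study's data), a utilization distribution can be built from telemetry fixes with a kernel density estimate:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical radiocollar relocations (UTM metres) for one pack-season;
# real inputs would come from the telemetry record.
rng = np.random.default_rng(42)
relocations = rng.normal(loc=[545_000.0, 4_975_000.0], scale=3_000.0,
                         size=(200, 2))

# Kernel density estimate of the utilization distribution.
kde = gaussian_kde(relocations.T)

# Evaluate use intensity on a grid of habitat cells.
xs = np.linspace(535_000, 555_000, 100)
ys = np.linspace(4_965_000, 4_985_000, 100)
gx, gy = np.meshgrid(xs, ys)
use = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)

# Normalize to a discrete probability density of use; these cell values can
# then be regressed on habitat covariates for the same cells.
use /= use.sum()
```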
Abstract:
In a first step to obtain a proxy record of past climatic events (including the El Niño-Southern Oscillation) in the normally aseasonal tropical environment of Sabah, a radial segment from a recently fallen dipterocarp (Shorea superba) was radiocarbon dated and subjected to carbon isotope analysis. The high-precision radiocarbon results fell into the ambiguous modern plateau, where several calibrated dates can exist for each sample. Dating was achieved by wiggle matching using a Bayesian approach to calibration. Using the defined growth characteristics of Shorea superba, probability density distributions were calculated and improbable dates rejected. It was found that the tree most likely started growing around AD 1660-1685. A total of 173 apparent growth increments were measured; it could therefore be determined that the tree formed one ring approximately every two years. Stable carbon isotope values were obtained from resin-extracted wholewood from each ring. Carbon cycling is evident in the 'juvenile effect', resulting from the assimilation of respired carbon dioxide and lower light levels below the canopy, and in the 'anthropogenic effect' caused by increased industrial activity in the late nineteenth and twentieth centuries. This study demonstrates that palaeoenvironmental information can be obtained from trees growing in aseasonal environments, where climatic conditions prevent the formation of well-defined annual rings.
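Wiggle matching exploits the fixed ring-count separations between samples: shifting the whole sequence along the calibration curve and scoring the joint fit yields a probability density for the start date. A minimal sketch under strong simplifying assumptions (a placeholder calibration curve standing in for, e.g., interpolated IntCal values; Gaussian errors; exactly two years per ring, as found for this tree):

```python
import numpy as np

# Hypothetical inputs: ring index of each dated sample, its conventional
# radiocarbon age (BP) with 1-sigma error, and a calibration curve given as
# (calendar year AD, radiocarbon age BP) pairs.
ring_idx = np.array([0, 40, 80, 120, 170])
c14_age = np.array([230.0, 180.0, 150.0, 120.0, 100.0])
c14_err = np.full(5, 15.0)
cal_year = np.arange(1600, 1951)               # candidate calendar years AD
cal_c14 = 200.0 - 0.3 * (cal_year - 1600)      # placeholder curve values

years_per_ring = 2.0                           # defined growth characteristic

log_post = np.full(cal_year.shape, -np.inf)
for i, t0 in enumerate(cal_year):
    # Calendar year of each sample if the tree started growing in year t0.
    t = t0 + years_per_ring * ring_idx
    if t.max() > cal_year.max():
        continue                               # sequence runs off the curve
    mu = np.interp(t, cal_year, cal_c14)       # curve value at each sample
    log_post[i] = -0.5 * np.sum(((c14_age - mu) / c14_err) ** 2)

post = np.exp(log_post - log_post.max())
post /= post.sum()                             # pdf over the start year
print("most probable start year:", cal_year[np.argmax(post)])
```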
Abstract:
In this work we present a model that describes how the numbers of healthy and unhealthy subjects in a cohort change through time when health promotion campaigns occur that aim to change the undesirable behavior. The model also includes immigration and emigration components for each group, and a component accounting for subjects who switch from the healthy to the unhealthy behavior. We express the model in terms of a bivariate probability generating function and, in addition, simulate it. An illustrative example of applying the model to the promotion of condom use among adolescents is constructed and used to compare the results obtained from the simulations with those obtained from the probability generating function.
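The paper's exact formulation is not reproduced in the abstract; as an illustrative sketch only, a discrete-time stochastic simulation of two behavior groups with immigration, emigration, and campaign-driven switching (all rates hypothetical) could be set up as:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical monthly rates: immigration counts, per-capita emigration,
# relapse from healthy to unhealthy behavior, and campaign-induced adoption
# of the healthy behavior (higher in months with an active campaign).
imm_h, imm_u = 5, 5
emig = 0.02
relapse = 0.03
adopt_base, adopt_campaign = 0.01, 0.10
campaign_months = set(range(12, 18))          # one six-month campaign

H, U = 200, 800                               # initial cohort sizes
trajectory = [(H, U)]
for month in range(60):
    adopt = adopt_campaign if month in campaign_months else adopt_base
    u_to_h = rng.binomial(U, adopt)           # unhealthy -> healthy
    h_to_u = rng.binomial(H, relapse)         # healthy -> unhealthy
    H = H + imm_h - rng.binomial(H, emig) + u_to_h - h_to_u
    U = U + imm_u - rng.binomial(U, emig) - u_to_h + h_to_u
    trajectory.append((H, U))
```

Averaging many such runs gives the trajectories against which the probability-generating-function results can be compared.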
Abstract:
In this work, a probabilistic model was developed that uses the theory of the derived probability density function to estimate the mean annual nitrate load transported by surface runoff, using a functional relationship between runoff and nitrate load. The deterministic hydrological and water-quality model Simulator for Water Resources in Rural Basins - Water Quality (SWRRB-WQ) was used to estimate the nitrate load in surface runoff. This model takes as input the daily precipitation observed at the Olavarría Airport station over the period 1988 to 2002. For model calibration, a new methodology that estimates the uncertainty in the observed values was applied. Both the probabilistic and the deterministic model are applied to a rural subbasin of the Tapalqué stream (Buenos Aires province, Argentina), and finally the nitrate loads estimated with the two models are compared with observations made at the stream section studied. The results show that the mean nitrate load obtained with the probabilistic model is of the same order of magnitude as the mean values observed and those estimated with the SWRRB-WQ hydrological and water-quality model.
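The derived-distribution theory the abstract refers to rests on a change of variables: if the nitrate load is a monotone function $L = g(Q)$ of runoff $Q$, the pdf of the load follows from the pdf of runoff. In generic notation (the paper's specific functional relationship is not given in the abstract):

```latex
f_L(\ell) = f_Q\!\left(g^{-1}(\ell)\right)
            \left|\frac{\mathrm{d}\,g^{-1}(\ell)}{\mathrm{d}\ell}\right|,
\qquad
\mathbb{E}[L] = \int_0^{\infty} g(q)\, f_Q(q)\, \mathrm{d}q,
```

with the mean annual load obtained by integrating the load function over the runoff distribution.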
Abstract:
Botanical data are widely used as terrestrial proxy data for climate reconstructions. Using a newly established method based on probability density functions (the pdf-method), the temperature development throughout the last interglacial, the Eemian, is reconstructed for the two German sites Bispingen and Grobern and the French site La Grande Pile. The results are compared with previous reconstructions using other methods. After a steep increase in January as well as July temperatures in the early phase of the interglacial, the most probable reconstructed climate appears to be slightly warmer than today. While the temperature is reconstructed as relatively stable throughout the Eemian, a certain tendency towards cooler January temperatures is evident. January temperatures decreased from approx. 2-3°C in the early part to approx. -3°C in the later part at Bispingen, and from approx. 2°C to approx. -1°C at Grobern and La Grande Pile. A major drop to about -8°C marks the very end of the interglacial at all three sites. While these results agree well with other proxy data and former reconstructions based on the indicator species method, they differ significantly from reconstructions based on the modern pollen analogue technique ("pollen transfer functions"). The lack of modern analogues is assumed to be the main reason for the discrepancies. It is concluded that any reconstruction method needs to be evaluated carefully in this respect if used for periods lacking modern analogous plant communities.
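The pdf-method combines, for each taxon present in a pollen sample, a probability density of the climate values under which that taxon occurs today; multiplying the taxon densities gives the most probable palaeotemperature. A minimal sketch with invented Gaussian taxon responses (the actual method estimates these from modern distribution data):

```python
import numpy as np
from scipy.stats import norm

# Hypothetical taxon climate responses: mean and sd of January temperature
# (deg C) over each taxon's modern range.
taxa = {"Carpinus": (1.0, 3.0), "Ilex": (2.5, 2.0), "Hedera": (1.5, 2.5)}

temp = np.linspace(-15.0, 15.0, 601)          # candidate January temperatures
log_pdf = np.zeros_like(temp)
for mu, sd in taxa.values():
    log_pdf += norm.logpdf(temp, mu, sd)      # product of taxon pdfs

pdf = np.exp(log_pdf - log_pdf.max())
dx = temp[1] - temp[0]
pdf /= pdf.sum() * dx                         # normalized posterior density
print("most probable January temperature:", temp[np.argmax(pdf)])
```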
Abstract:
Learning the structure of a graphical model from data is a common task in a wide range of practical applications. In this paper, we focus on Gaussian Bayesian networks, i.e., on continuous data and directed acyclic graphs with a joint probability density of all variables given by a Gaussian. We propose to work in an equivalence-class search space, specifically using the k-greedy equivalence search algorithm. This, combined with regularization techniques to guide the structure search, can learn sparse networks close to the one that generated the data. We provide results on some synthetic networks and on modeling the gene network of the two biological pathways regulating the biosynthesis of isoprenoids in the Arabidopsis thaliana plant.
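The search algorithm itself is involved; the scoring step it relies on is simpler to show. In a Gaussian Bayesian network, each node's local score is a penalized Gaussian log-likelihood of the node regressed on its parents. A sketch of such a score (the L1-style penalty here is an illustrative stand-in for the regularization techniques used in the paper):

```python
import numpy as np

def local_score(data, child, parents, lam=1.0):
    """Penalized Gaussian log-likelihood of `child` given `parents`."""
    y = data[:, child]
    n = len(y)
    X = np.column_stack([np.ones(n)] + [data[:, p] for p in parents])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # OLS fit
    resid = y - X @ beta
    sigma2 = resid @ resid / n                       # MLE noise variance
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
    return loglik - lam * np.abs(beta[1:]).sum()     # sparsity penalty

# The DAG score is the sum of local scores over nodes; a greedy equivalence
# search accepts the edge change with the best score improvement.
rng = np.random.default_rng(0)
data = rng.normal(size=(500, 3))
print(local_score(data, child=2, parents=[0, 1]))
```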
Abstract:
This paper presents some of the results of a method to determine the main reliability functions of concentrator solar cells. High-concentration GaAs single-junction solar cells have been tested in an Accelerated Life Test. The method can be directly applied to multi-junction solar cells. The main conclusions of the test show that these solar cells are robust devices with a very low probability of failure caused by degradation during their operating life (more than 30 years). The probability of operation (i.e., the reliability function R(t)) is evaluated for two nominal operating conditions of these cells, namely simulated concentration ratios of 700 and 1050 suns. Preliminary determination of the Mean Time to Failure indicates a value much higher than the intended operating lifetime of the concentrator cells.
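The abstract does not state the assumed life distribution; for a Weibull model, which is common in accelerated life testing of semiconductor devices, the reliability function and mean time to failure would read:

```latex
R(t) = \exp\!\left[-\left(\frac{t}{\eta}\right)^{\beta}\right],
\qquad
\mathrm{MTTF} = \eta\,\Gamma\!\left(1 + \frac{1}{\beta}\right),
```

where $\eta$ is the characteristic life at the operating condition (e.g. 700 or 1050 suns) and $\beta$ the shape parameter estimated from the accelerated test.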
Abstract:
This paper describes a two-part methodology for managing the risk posed by water supply variability to irrigated agriculture. First, an econometric model is used to explain the variation in the production value of irrigated agriculture. The explanatory variables include an index of irrigation water availability (surface storage levels), a price index representative of the crops grown in each geographical unit, and a time variable. The model corrects for autocorrelation and is applied to 16 Spanish provinces that are representative in terms of irrigated agriculture. In the second part, the fitted models are used for the economic evaluation of drought risk. Inflow variability in the hydrological system servicing each province is used to perform ex-ante evaluations of economic output for the upcoming irrigation season. The model's error and the probability distribution functions (PDFs) of the reservoirs' storage variations are used to generate Monte Carlo (Latin Hypercube) simulations of agricultural output 7 and 3 months prior to the irrigation season. The results of these simulations illustrate the different risk profiles of each management unit, which depend on farm productivity and on the probability distribution function of water inflow to reservoirs. The potential for ex-ante drought impact assessments is demonstrated. By complementing hydrological models, this method can assist water managers and decision-makers in managing reservoirs.
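As a sketch of the simulation step (the econometric model, the PDFs of storage variation, and all parameter values below are placeholders, not those fitted in the paper):

```python
import numpy as np
from scipy.stats import norm, qmc

n_sims = 10_000
sampler = qmc.LatinHypercube(d=2, seed=7)
u = sampler.random(n_sims)                     # stratified uniforms in [0,1)^2

# Map the uniforms through assumed marginal PDFs: reservoir storage change
# (hm^3) and the econometric model's error term.
storage_change = norm.ppf(u[:, 0], loc=50.0, scale=120.0)
model_error = norm.ppf(u[:, 1], loc=0.0, scale=0.08)

# Placeholder fitted model: log-output as a function of water availability.
log_output = 4.0 + 0.002 * storage_change + model_error
output = np.exp(log_output)                    # simulated production value

print("5% value-at-risk of output:", np.quantile(output, 0.05))
```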
Abstract:
In this paper we present a global overview of the recent study carried out in Spain for the new hazard map, whose final goal is the revision of the Spanish Building Code (NCSE-02). The study was carried out by a working group joining experts from the Instituto Geografico Nacional (IGN) and the Technical University of Madrid (UPM), with the different phases of the work supervised by a committee of national experts from public institutions involved in seismic hazard. The PSHA (Probabilistic Seismic Hazard Assessment) method has been followed, quantifying the epistemic uncertainties through a logic tree and the aleatory ones, linked to the variability of parameters, by means of probability density functions and Monte Carlo simulations. In a first phase, the inputs were prepared, which essentially are: 1) an updated project catalogue, homogenized to moment magnitude Mw; 2) proposed zoning models and source characterization; 3) calibration of Ground Motion Prediction Equations (GMPEs) with actual data and development of a local model with data collected in Spain for Mw < 5.5. In a second phase, a sensitivity analysis of the different input options on hazard results was carried out in order to establish criteria for defining the branches of the logic tree and their weights. Finally, the hazard estimation was done with the logic tree shown in figure 1, including nodes for quantifying uncertainties corresponding to: 1) the method for estimating hazard (zoning and zoneless); 2) zoning models; 3) GMPE combinations used; and 4) the regression method for estimating source parameters. In addition, the aleatory uncertainties corresponding to the magnitude of the events, recurrence parameters, and maximum magnitude for each zone have also been considered by means of probability density functions and Monte Carlo simulations. The main conclusions of the study are presented here, together with the results obtained in terms of PGA and other spectral accelerations SA(T) for return periods of 475, 975 and 2475 years. Maps of the coefficient of variation (COV) are also presented to give an idea of the zones where the dispersion among results is highest and the zones where the results are robust.
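The combination of a logic tree for epistemic uncertainty with Monte Carlo sampling of the aleatory parameters can be sketched as follows (branch weights, parameter pdfs, and the hazard kernel are all placeholders; a real PSHA integrates over the full source model and GMPEs):

```python
import numpy as np

rng = np.random.default_rng(3)

# Epistemic branches: (label, weight, toy hazard kernel). Each kernel maps
# sampled aleatory parameters to a ground-motion exceedance estimate.
branches = [
    ("zoning_A_gmpe_1", 0.4, lambda m, b: 1e-3 * np.exp(0.8 * (m - 5) - b)),
    ("zoning_B_gmpe_1", 0.3, lambda m, b: 1e-3 * np.exp(0.7 * (m - 5) - b)),
    ("zoneless_gmpe_2", 0.3, lambda m, b: 1e-3 * np.exp(0.9 * (m - 5) - b)),
]
weights = np.array([w for _, w, _ in branches])

n = 20_000
hazard = np.empty(n)
for i in range(n):
    _, _, kernel = branches[rng.choice(len(branches), p=weights)]
    m_max = rng.normal(6.8, 0.2)          # aleatory: maximum magnitude
    b_val = rng.normal(1.0, 0.05)         # aleatory: recurrence b-value
    hazard[i] = kernel(m_max, b_val)

# The spread of the sampled hazard gives the dispersion (e.g. the COV maps).
print("mean:", hazard.mean(), "COV:", hazard.std() / hazard.mean())
```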
Abstract:
Non-parametric belief propagation (NBP) is a well-known message passing method for cooperative localization in wireless networks. However, due to the over-counting problem in networks with loops, NBP's convergence is not guaranteed, and its estimates are typically less accurate. One solution to this problem is non-parametric generalized belief propagation based on a junction tree. However, this method is intractable in large-scale networks due to the high complexity of the junction tree formation and the high dimensionality of the particles. Therefore, in this article, we propose non-parametric generalized belief propagation based on a pseudo-junction tree (NGBP-PJT). The main difference from the standard method is the formation of the pseudo-junction tree, which approximates the junction tree based on a thin graph. In addition, in order to decrease the number of high-dimensional particles, we use a more informative importance density function and reduce the dimensionality of the messages. As a by-product, we also propose NBP based on a thin graph (NBP-TG), a cheaper variant of NBP, which runs on the same graph as NGBP-PJT. According to our simulation and experimental results, the NGBP-PJT method outperforms NBP and NBP-TG in terms of accuracy, computational cost, and communication cost in reasonably sized networks.
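In particle form, a belief-propagation message for range-only localization can be approximated by pushing the sender's location particles outward by the measured distance; a stripped-down sketch of one such message and a product-of-messages belief (kernel bandwidths, noise levels, and network details are assumptions, not the article's settings):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)

def message_particles(sender, dist, sigma, n=500):
    # Sender-location particles pushed out by the noisy range measurement
    # in a uniformly random direction: samples from the message density.
    idx = rng.integers(len(sender), size=n)
    ang = rng.uniform(0.0, 2.0 * np.pi, n)
    r = dist + sigma * rng.normal(size=n)
    return sender[idx] + np.c_[r * np.cos(ang), r * np.sin(ang)]

def belief_on_grid(messages, grid):
    # Belief ~ product of KDE-smoothed incoming messages (log domain).
    log_b = np.zeros(len(grid))
    for m in messages:
        log_b += np.log(gaussian_kde(m.T)(grid.T) + 1e-300)
    b = np.exp(log_b - log_b.max())
    return b / b.sum()

# Two anchors with known positions and measured ranges to the unknown node.
anchors = [np.tile([0.0, 0.0], (500, 1)), np.tile([10.0, 0.0], (500, 1))]
msgs = [message_particles(a, d, 0.3) for a, d in zip(anchors, [7.0, 7.0])]
gx, gy = np.meshgrid(np.linspace(-5, 15, 80), np.linspace(-10, 10, 80))
grid = np.c_[gx.ravel(), gy.ravel()]
belief = belief_on_grid(msgs, grid)
print("MAP estimate:", grid[np.argmax(belief)])
```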
Abstract:
Blue whiting (Micromesistius poutassou, http://www.marinespecies.org/aphia.php?p=taxdetails&id=126439) is a small mesopelagic planktivorous gadoid found throughout the North-East Atlantic. This dataset contains the results of a model-based analysis of larvae captured by the Continuous Plankton Recorder (CPR) during the period 1951-2005. The observations are analysed using Generalised Additive Models (GAMs) of the spatial, seasonal and interannual variation in the occurrence of larvae. The best-fitting model is chosen using the Akaike Information Criterion (AIC). The probability of occurrence in the Continuous Plankton Recorder is then normalised and converted to a probability distribution function in space (UTM projection Zone 28) and season (day of year). The best-fitting model splits the distribution into two separate spawning grounds north and south of a dividing line at 53° N. The probability distribution is therefore normalised in these two regions (i.e., the space-time integral over each of the two regions is 1). The modelled outputs are on a UTM Zone 28 grid; however, for convenience, the latitude ("lat") and longitude ("lon") of each of these grid points are also included as variables in the NetCDF file. The assignment of each grid point to either the Northern or Southern component (defined here as north/south of 53° N) is also included as a further variable ("component"). Finally, the day of year ("doy") is stored as the number of days elapsed from and including January 1 (i.e., doy = 1 on January 1); the year is divided into 180 grid points.
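The normalization step described, making the space-time field integrate to one separately within the northern and southern components, amounts to dividing by a masked sum; a sketch with an assumed gridded probability field:

```python
import numpy as np

# Hypothetical gridded field: p[doy, y, x] = modelled occurrence probability
# on 180 day-of-year steps and a UTM Zone 28 grid; `lat` gives the latitude
# of each grid cell, and 53 N divides the two spawning components.
rng = np.random.default_rng(9)
p = rng.random((180, 50, 40))
lat = np.linspace(44.0, 62.0, 50)[:, None] * np.ones((1, 40))

north = lat >= 53.0                         # component mask (broadcast over doy)
for mask in (north, ~north):
    total = p[:, mask].sum()                # space-time integral over region
    p[:, mask] /= total                     # now integrates to 1 in the region
```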
Abstract:
Highly localized positive-energy states of the free Dirac electron are constructed and shown to evolve in a simple way under the action of Dirac's equation. When the initial uncertainty in position is small on the scale of the Compton wavelength, there is an associated uncertainty in the mean energy that is large compared with the rest mass of the electron. However, this does not lead to any breakdown of the one-particle description, associated with the possibility of pair-production, but rather leads to a rapid expansion of the probability density outwards from the point of localization, at speeds close to the speed of light.
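The scale argument behind this is the uncertainty relation: localizing the electron well below the Compton wavelength forces an energy spread large compared with the rest energy,

```latex
\Delta x \ll \lambda_C = \frac{\hbar}{mc}
\quad\Longrightarrow\quad
\Delta E \sim \frac{\hbar c}{\Delta x} \gg mc^2 ,
```

which would usually be read as the threshold for pair production; the result of the paper is that the one-particle probability density instead spreads outward at nearly the speed of light.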
Abstract:
Pyrin domain (PYD)-containing proteins are key components of pathways that regulate inflammation, apoptosis, and cytokine processing. Their importance is further evidenced by the consequences of mutations in these proteins that give rise to autoimmune and hyperinflammatory syndromes. PYDs, like other members of the death domain (DD) superfamily, are postulated to mediate homotypic interactions that assemble and regulate the activity of signaling complexes. However, PYDs are presently the least well characterized of all four DD subfamilies. Here we report the three-dimensional structure and dynamic properties of ASC2, a PYD-only protein that functions as a modulator of multidomain PYD-containing proteins involved in NF-κB and caspase-1 activation. ASC2 adopts a six-helix bundle structure with a prominent loop, comprising 13 amino acid residues, between helices two and three. This loop is a divergent feature of PYDs relative to other domains with the DD fold. Detailed analysis of backbone 15N NMR relaxation data, using both the Lipari-Szabo model-free and reduced spectral density function formalisms, revealed no evidence of contiguous stretches of polypeptide chain with dramatically increased internal motion, except at the extreme N and C termini. Some mobility on the fast, picosecond-to-nanosecond timescale was seen in helix 3 and the preceding α2-α3 loop, in stark contrast to the complete disorder seen in the corresponding region of the NALP1 PYD. Our results suggest that extensive conformational flexibility in helix 3 and the α2-α3 loop is not a general feature of pyrin domains. Further, a transition of the α2-α3 loop from complete disorder to order upon binding, as suggested for NALP1, is unlikely to be a common attribute of pyrin domain interactions.
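For reference, the Lipari-Szabo model-free formalism mentioned here interprets the relaxation data through a spectral density of the form

```latex
J(\omega) = \frac{2}{5}\left[\frac{S^2\,\tau_m}{1+(\omega\tau_m)^2}
            + \frac{(1-S^2)\,\tau}{1+(\omega\tau)^2}\right],
\qquad
\frac{1}{\tau} = \frac{1}{\tau_m} + \frac{1}{\tau_e},
```

where $S^2$ is the generalized order parameter, $\tau_m$ the overall rotational correlation time, and $\tau_e$ the effective timescale of fast internal motion; low $S^2$ values flag the mobile residues discussed above.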
Abstract:
Latent variable models represent the probability density of data in a space of several dimensions in terms of a smaller number of latent, or hidden, variables. A familiar example is factor analysis, which is based on a linear transformation between the latent space and the data space. In this paper we introduce a form of non-linear latent variable model called the Generative Topographic Mapping (GTM), whose parameters can be determined using the EM algorithm. GTM provides a principled alternative to the widely used Self-Organizing Map (SOM) of Kohonen (1982), and overcomes most of the significant limitations of the SOM. We demonstrate the performance of the GTM algorithm on a toy problem and on simulated data from flow diagnostics for a multi-phase oil pipeline.
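In the GTM, a regular grid of latent points $x_k$ is mapped through a parameterized smooth function $y(x;W) = W\phi(x)$ into data space, and the density is a constrained Gaussian mixture:

```latex
p(t \mid W, \beta) = \frac{1}{K}\sum_{k=1}^{K}
  \left(\frac{\beta}{2\pi}\right)^{D/2}
  \exp\!\left(-\frac{\beta}{2}\,\bigl\|\,t - W\phi(x_k)\bigr\|^{2}\right),
```

where $\phi$ is a fixed set of basis functions; EM alternates between computing the responsibilities of the mixture components and solving a linear system for $W$.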
Abstract:
The Self-Organizing Map (SOM) algorithm has been extensively studied and has been applied with considerable success to a wide variety of problems. However, the algorithm is derived from heuristic ideas and this leads to a number of significant limitations. In this paper, we consider the problem of modelling the probability density of data in a space of several dimensions in terms of a smaller number of latent, or hidden, variables. We introduce a novel form of latent variable model, which we call the GTM algorithm (for Generative Topographic Mapping), which allows general non-linear transformations from latent space to data space, and which is trained using the EM (expectation-maximization) algorithm. Our approach overcomes the limitations of the SOM, while introducing no significant disadvantages. We demonstrate the performance of the GTM algorithm on simulated data from flow diagnostics for a multi-phase oil pipeline.
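A compact numerical sketch of the GTM fit as described (1-D latent space for brevity; grid sizes, basis widths, and the regularizer are arbitrary illustrative choices, and the data are a synthetic stand-in for the pipeline diagnostics):

```python
import numpy as np

rng = np.random.default_rng(2)

# Data: noisy 2-D observations along a curve.
s = rng.uniform(-1, 1, 300)
T = np.c_[s, s**2] + 0.05 * rng.normal(size=(300, 2))        # N x D

K, M, beta, lam = 40, 10, 100.0, 1e-3
X = np.linspace(-1, 1, K)                                    # latent grid
mu = np.linspace(-1, 1, M)                                   # RBF centres
Phi = np.exp(-(X[:, None] - mu[None, :])**2 / (2 * 0.2**2))  # K x M
W = rng.normal(scale=0.1, size=(M, 2))                       # M x D

for _ in range(50):
    Y = Phi @ W                                              # mixture centres
    d2 = ((T[:, None, :] - Y[None, :, :])**2).sum(-1)        # N x K distances
    R = np.exp(-0.5 * beta * (d2 - d2.min(1, keepdims=True)))
    R /= R.sum(1, keepdims=True)                             # responsibilities
    G = np.diag(R.sum(0))                                    # E-step done
    # M-step: solve (Phi' G Phi + (lam/beta) I) W = Phi' R' T for W.
    A = Phi.T @ G @ Phi + (lam / beta) * np.eye(M)
    W = np.linalg.solve(A, Phi.T @ R.T @ T)
    beta = T.size / (R * d2).sum()                           # noise update
```

After training, each data point can be summarized by the mean or mode of its responsibilities over the latent grid, giving the topographic visualization that the paper compares against the SOM.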