882 results for Predicted Distribution Data


Relevance: 40.00%

Publisher:

Abstract:

The Amazonian lowlands include large patches of open vegetation that contrast sharply with the rainforest, and the origin of these patches has been debated. This study focuses on a large area of open vegetation in northern Brazil, where δ13C and, in some instances, C/N analyses of the organic matter preserved in late Quaternary sediments were used to reconstruct the flora over time. The main goal was to determine when the modern open vegetation started to develop in this area. The δ13C data derived from nine cores range from -32.2‰ to -19.6‰, with nearly 60% of the values above -26.5‰. The most enriched values were detected only in ecotone and open vegetated areas. The development of open vegetation communities was asynchronous, with estimated ages varying between 6400 and 3000 cal a BP. This suggests that the origin of the studied patches of open vegetation might be linked to the sedimentary dynamics of a late Quaternary megafan system: as sedimentation ended, this vegetation type became established over the megafan surface. In addition, the data presented here show that the presence of C4 plants must be used carefully as a proxy for dry paleoclimatic episodes in Amazonian areas. Copyright (c) 2012 John Wiley & Sons, Ltd.
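The abstract's reasoning rests on a standard δ13C end-member interpretation: depleted values indicate C3 (forest) plants and enriched values indicate C4 (open-vegetation) plants. The sketch below illustrates that classification logic only; the -26.5‰ cutoff comes from the abstract, while the C4 threshold and the sample values are illustrative assumptions, not the study's data.

```python
# Illustrative sketch: classify sediment organic-matter delta13C values (per mil)
# into coarse vegetation signals. The -26.5 cutoff is from the abstract; the
# -22.0 C4 threshold is an assumed, commonly cited end-member boundary.

def classify_d13c(d13c_permil):
    """Return a coarse vegetation label for a delta13C value."""
    if d13c_permil <= -26.5:        # depleted: C3-dominated (forest) signal
        return "C3-dominated (forest)"
    elif d13c_permil >= -22.0:      # enriched: strong C4 (open vegetation) signal
        return "C4-influenced (open vegetation)"
    return "mixed / ecotone"

# Example values spanning the reported -32.2 to -19.6 per mil range:
core = [-32.2, -28.1, -24.0, -19.6]
labels = [classify_d13c(v) for v in core]
```

In practice such thresholds are calibrated against local vegetation surveys; the point here is only the direction of the inference (more enriched δ13C, more C4 influence).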

Relevance: 40.00%

Publisher:

Abstract:

The beta-Birnbaum-Saunders (Cordeiro and Lemonte, 2011) and Birnbaum-Saunders (Birnbaum and Saunders, 1969a) distributions have been used quite effectively to model failure times for materials subject to fatigue, as well as lifetime data. We define the log-beta-Birnbaum-Saunders distribution as the distribution of the logarithm of a beta-Birnbaum-Saunders variable. Explicit expressions for its generating function and moments are derived. We propose a new log-beta-Birnbaum-Saunders regression model that can be applied to censored data and used effectively in survival analysis. We obtain the maximum likelihood estimates of the model parameters for censored data and investigate influence diagnostics. The new location-scale regression model is also modified to allow for long-term survivors in the data. Its usefulness is illustrated by means of two real data sets. (C) 2011 Elsevier B.V. All rights reserved.
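To make the log-transform idea concrete, the sketch below generates plain Birnbaum-Saunders lifetimes via the well-known normal representation and takes their logarithm, which is the starting point for a location-scale regression on log-lifetimes. This is a minimal illustration under assumed parameters; it does not implement the paper's beta-BS extension, censoring, or diagnostics.

```python
# Illustrative sketch, not the paper's model. Uses the standard representation:
# if Z ~ N(0, 1), then T = beta * (alpha*Z/2 + sqrt((alpha*Z/2)**2 + 1))**2
# follows a Birnbaum-Saunders(alpha, beta) distribution.
import numpy as np

rng = np.random.default_rng(1)

def rvs_birnbaum_saunders(alpha, beta, size, rng):
    """Draw Birnbaum-Saunders lifetimes from standard normal variates."""
    w = alpha * rng.standard_normal(size) / 2.0
    return beta * (w + np.sqrt(w**2 + 1.0)) ** 2

t = rvs_birnbaum_saunders(alpha=0.5, beta=100.0, size=20000, rng=rng)

# The log-lifetime is symmetric about log(beta), which is what makes a
# location-scale regression model on log(T) natural:
log_t = np.log(t)
```

The median of T is beta and log(T) = log(beta) + 2*asinh(alpha*Z/2), so the log-lifetimes center on log(beta); the log-beta-BS regression in the paper builds a richer model on this same transformed scale.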

Relevance: 40.00%

Publisher:

Abstract:

This research reports liquid-liquid equilibrium data for the system lard (swine fat), cis-9-octadecenoic acid (oleic acid), ethanol, and water at 318.2 K, as well as their correlation with the nonrandom two-liquid (NRTL) and universal quasichemical activity coefficient (UNIQUAC) thermodynamic equations, which provided global deviations of 0.41 % and 0.53 %, respectively. Additional equilibrium experiments were also performed to obtain cholesterol partition (or distribution) coefficients, in order to verify the suitability of ethanol plus water for reducing the cholesterol content of lard. The partition experiments were performed with concentrations of free fatty acids (commercial oleic acid) varying from (0 to 20) mass % and of water in the solvent varying from (0 to 18) mass %. The percentage of free fatty acids initially present in the lard had only a slight effect on the distribution of cholesterol between the phases. Furthermore, the distribution coefficients decreased when water was added to the ethanol; that is, water diminished the capability of the solvent to remove cholesterol.
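The distribution coefficient discussed above is simply the ratio of the solute's mass fraction in the two equilibrium phases. The sketch below shows that calculation; all numerical values are illustrative assumptions, not the paper's measurements.

```python
# Minimal sketch with hypothetical numbers: the cholesterol distribution
# coefficient is the ratio of its mass fraction in the alcoholic (solvent-rich)
# phase to that in the fat (lard-rich) phase at equilibrium.

def distribution_coefficient(w_solvent_phase, w_fat_phase):
    """k = w_cholesterol(alcoholic phase) / w_cholesterol(fat phase)."""
    return w_solvent_phase / w_fat_phase

# Adding water to the ethanol lowers k, i.e. the solvent removes less
# cholesterol (illustrative mass fractions, not measured data):
k_anhydrous = distribution_coefficient(0.040, 0.110)  # 0 mass % water in solvent
k_hydrated = distribution_coefficient(0.015, 0.120)   # 18 mass % water in solvent
```

A k well below 1, as sketched here, means most cholesterol stays in the fat phase, so multiple extraction stages would be needed in practice.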

Relevance: 40.00%

Publisher:

Abstract:

Persistent organic pollutants (POPs) are a group of chemicals that are toxic, undergo long-range transport, and accumulate in biota. Because of their persistence, their distribution and recirculation in the environment often continue for a long period of time; they therefore appear virtually everywhere in the biosphere and pose a toxic stress to living organisms. In this thesis, attempts are made to contribute to the understanding of the factors that influence the distribution of POPs, with a focus on processes in the marine environment. Bioavailability and spatial distribution are central topics for the environmental risk management of POPs, and various field studies were undertaken to address them. To determine the bioavailable fraction of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs), polychlorinated naphthalenes (PCNs), and polychlorinated biphenyls (PCBs), the aqueous dissolved phase was sampled and analysed. In the same samples, we also measured how much of these POPs was associated with suspended particles. Different models predicting the phase distribution of these POPs were then evaluated. The water characteristics found to influence the solid-water phase distribution of POPs were particulate organic matter (POM), particulate soot (PSC), and dissolved organic matter (DOM); the bioavailable dissolved POP phase in the water was lower when these sorbing phases were present. Furthermore, sediments were sampled and the spatial distribution of the POPs was examined. The results showed that the concentrations of PCDD/Fs and PCNs were better described by the PSC content than by the POM content of the sediment. In parallel with these field studies, we synthesized knowledge of the processes affecting the distribution of POPs in a multimedia mass balance model. This model predicted concentrations of PCDD/Fs throughout our study area, the Grenlandsfjords in Norway, within a factor of ten.
This makes the model capable of evaluating the effect of candidate remedial actions intended to decrease the exposure of biota in the Grenlandsfjords to these POPs, which was the aim of the project. Finally, to evaluate the influence of eutrophication on the marine occurrence of POPs, PCB data from the US Musselwatch and Benthic Surveillance Programs are examined in this thesis. The dry-weight-based concentrations of PCBs in bivalves were found to correlate positively with the organic matter content of nearby sediments, whereas organic-matter-based concentrations of PCBs in sediments were negatively correlated with the organic matter content of the sediment.
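The solid-water phase-distribution result above follows a standard multi-phase sorption formulation: the freely dissolved (bioavailable) fraction shrinks as each sorbing phase (POM, PSC, DOM) is added to the denominator. The sketch below shows that generic model shape; all partition coefficients and sorbent concentrations are illustrative assumptions, not the thesis's fitted values.

```python
# Hedged sketch of a generic three-phase solid-water partitioning model of the
# kind evaluated in the thesis. The freely dissolved fraction of a POP is
#   f_diss = 1 / (1 + K_pom*C_pom + K_psc*C_psc + K_dom*C_dom)
# All K (L/kg) and C (kg/L) values below are made up for illustration.

def dissolved_fraction(k_pom, c_pom, k_psc, c_psc, k_dom, c_dom):
    """Fraction of total waterborne POP that is freely dissolved."""
    return 1.0 / (1.0 + k_pom * c_pom + k_psc * c_psc + k_dom * c_dom)

# With sorbing phases present, bioavailability drops well below 100 %:
f = dissolved_fraction(k_pom=1e6, c_pom=1e-6,   # organic matter term -> 1.0
                       k_psc=1e7, c_psc=1e-7,   # soot term           -> 1.0
                       k_dom=1e5, c_dom=2e-6)   # dissolved OM term   -> 0.2
# denominator = 1 + 1 + 1 + 0.2 = 3.2, so f = 0.3125
```

The soot term is the reason PSC content described sediment PCDD/F and PCN levels better than POM content: soot carbon typically has much larger partition coefficients for planar compounds.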

Relevance: 40.00%

Publisher:

Abstract:

The Ecosystem Approach to Fisheries represents the most recent research line in the international context, with interest both in the whole community and in the identification and protection of all the "critical habitats" in which marine resources complete their life cycles. Using data from trawl surveys performed in the Northern and Central Adriatic from 1996 to 2010, this study provides the first attempt to appraise the status of the whole demersal community. It took into account not only fishery target species but also by-catch and discard species, using a suite of biological indicators at both the population and multi-species level, giving a global picture of the status of the demersal system. This study underlined the decline, in recent years, of species that are extremely important for the Adriatic fishery; an adverse impact on catches is expected for these species in the coming years, since minimum values of recruits were also recorded recently. Both excessive exploitation and environmental factors affected the availability of resources. Moreover, both the distribution and the nursery areas of the most important resources were pinpointed by means of geostatistical methods. The geospatial analysis also confirmed the presence of relevant recruitment areas in the Northern and Central Adriatic for several commercial species, as reported in the literature. The morphological and oceanographic features, the relevant river inflows, and the mosaic pattern of biocenoses with different food availability affected the location of the observed nursery areas.

Relevance: 40.00%

Publisher:

Abstract:

Data Distribution Management (DDM) is a core part of the High Level Architecture standard; its goal is to optimize the resources used by simulation environments to exchange data. It has to filter and match the information generated during a simulation so that each federate (i.e., each simulation entity) receives only the information it needs. It is important that this is done quickly and accurately, both to achieve better performance and to avoid transmitting irrelevant data, which could otherwise quickly saturate network resources. The main topic of this thesis is the implementation of an impartial DDM testbed that evaluates the quality of DDM approaches of all kinds: it supports both region-based and grid-based approaches, and it may also support other, as yet unknown, methods. It ranks them using three factors: execution time, memory usage, and distance from the optimal solution. A predefined set of instances is already available, but we also allow the creation of instances with user-provided parameters. The thesis is structured as follows. We start by introducing what DDM and HLA are and what they do in detail. Then, in the first chapter, we describe the state of the art, providing an overview of the best-known resolution approaches and the pseudocode of the most interesting ones. The third chapter describes how the testbed we implemented is structured. In the fourth chapter we present and compare the results obtained by executing the four approaches we implemented. The result of the work described in this thesis can be downloaded from SourceForge at the following link: https://sourceforge.net/projects/ddmtestbed/. It is licensed under the GNU General Public License version 3.0 (GPLv3).
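The matching problem the testbed benchmarks can be stated compactly: find every pair of update and subscription extents that intersect in all dimensions. The sketch below shows the baseline brute-force formulation that region-based approaches improve on; function names and the sample extents are illustrative, not taken from the testbed's code.

```python
# Minimal sketch of brute-force DDM matching (the baseline the testbed can
# compare smarter algorithms against). An extent is a tuple of (lower, upper)
# bounds, one pair per dimension; names and data here are illustrative.

def overlaps(u, s):
    """True if extents u and s intersect in every dimension."""
    return all(ul <= sh and sl <= uh for (ul, uh), (sl, sh) in zip(u, s))

def brute_force_match(updates, subscriptions):
    """Return all (update index, subscription index) pairs that overlap.

    O(n * m) comparisons: every update is checked against every subscription.
    """
    return [(i, j)
            for i, u in enumerate(updates)
            for j, s in enumerate(subscriptions)
            if overlaps(u, s)]

updates = [((0, 5),), ((10, 12),)]   # two 1-D update extents
subs = [((3, 8),), ((13, 20),)]      # two 1-D subscription extents
matches = brute_force_match(updates, subs)
```

The testbed's "distance from the optimal solution" metric makes sense against exactly this kind of exhaustive result: brute force is slow but produces the exact match set, so approximate (e.g. grid-based) approaches can be scored by how many spurious or missing pairs they report.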

Relevance: 40.00%

Publisher:

Abstract:

Data Distribution Management (DDM) is a component of the High Level Architecture standard. Its task is to detect overlaps between update and subscription extents efficiently. This thesis discusses the need for a framework and the reasons it was implemented. Testing algorithms under fair, comparable conditions, providing libraries that ease the implementation of algorithms, and automating the build phase were the fundamental motivations for starting work on the framework. The driving observation was that, in surveying the scientific literature on DDM and its various algorithms, every paper generated its own ad hoc data for testing; a further goal of this framework is therefore to compare the algorithms on a consistent data set. We decided to test the framework in the Cloud to obtain a more reliable comparison between runs by different users. Two of the most widely used services were considered, Amazon AWS EC2 and Google App Engine; the advantages and disadvantages of each are presented, along with the reasons for choosing Google App Engine. Four algorithms were developed: Brute Force, Binary Partition, Improved Sort, and Interval Tree Matching. Tests were carried out on execution time and peak memory usage. The results show that Interval Tree Matching and Improved Sort are the most efficient. All tests were performed on the sequential versions of the algorithms, so a further reduction in execution time may still be possible for the Interval Tree Matching algorithm.
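The sort-based family of algorithms mentioned above avoids the quadratic all-pairs check by sorting interval endpoints once and sweeping them in order. The sketch below is a generic 1-D sweep in that spirit; it is not the thesis's Improved Sort or Interval Tree Matching code, and the sample extents are illustrative.

```python
# Hedged sketch of a sort-based 1-D matching sweep (illustrative, not the
# thesis implementation). Endpoints are sorted once; a single pass tracks
# which update/subscription intervals are currently "open". Touching
# endpoints count as overlapping (opens sort before closes at equal position).

def sweep_match(updates, subs):
    """All (update, subscription) index pairs whose [lo, hi] intervals overlap."""
    events = []
    for i, (lo, hi) in enumerate(updates):
        events.append((lo, 0, "U+", i))   # open update
        events.append((hi, 2, "U-", i))   # close update
    for j, (lo, hi) in enumerate(subs):
        events.append((lo, 0, "S+", j))   # open subscription
        events.append((hi, 2, "S-", j))   # close subscription
    events.sort(key=lambda e: (e[0], e[1]))
    open_u, open_s, pairs = set(), set(), set()
    for _, _, kind, idx in events:
        if kind == "U+":
            open_u.add(idx)
            pairs.update((idx, j) for j in open_s)  # new update meets open subs
        elif kind == "S+":
            open_s.add(idx)
            pairs.update((i, idx) for i in open_u)  # new sub meets open updates
        elif kind == "U-":
            open_u.discard(idx)
        else:
            open_s.discard(idx)
    return sorted(pairs)

result = sweep_match(updates=[(0, 5), (10, 12)], subs=[(3, 8), (11, 20)])
```

Sorting costs O((n+m) log(n+m)) and the sweep is linear in events plus output size, which is why sort-based and tree-based methods outperform brute force as instance sizes grow.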

Relevance: 40.00%

Publisher:

Abstract:

To characterize the zonal distribution of three-dimensional (3D) T1 mapping in the hip joint of asymptomatic adult volunteers.

Relevance: 40.00%

Publisher:

Abstract:

Despite the widespread popularity of linear models for correlated outcomes (e.g. linear mixed models and time series models), distribution diagnostic methodology remains relatively underdeveloped in this context. In this paper we present an easy-to-implement approach that lends itself to graphical displays of model fit. Our approach involves multiplying the estimated marginal residual vector by the Cholesky decomposition of the inverse of the estimated marginal variance matrix. Linear functions of the resulting "rotated" residuals are used to construct an empirical cumulative distribution function (ECDF), whose stochastic limit is characterized. We describe a resampling technique that serves as a computationally efficient parametric bootstrap for generating representatives of the stochastic limit of the ECDF. Through functionals, such representatives are used to construct global tests for the hypothesis of normal marginal errors. In addition, we demonstrate that the ECDF of the predicted random effects, as described by Lange and Ryan (1989), can be formulated as a special case of our approach. Thus, our method supports both omnibus and directed tests. Our method works well in a variety of circumstances, including models having independent units of sampling (clustered data) and models for which all observations are correlated (e.g., a single time series).
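The rotation step described above can be sketched numerically: if the marginal residual vector has covariance V, multiplying it by the (transposed) Cholesky factor of V^{-1} yields residuals with identity covariance, so their ECDF can be compared to a standard normal. The example below uses a simulated AR(1)-style covariance; the data and parameters are illustrative, and the ECDF limit theory and bootstrap from the paper are not reproduced here.

```python
# Minimal numerical sketch of the rotation idea (illustrative data only).
# If e ~ N(0, V) and V^{-1} = C C^T (Cholesky), then r = C^T e has
# covariance C^T V C = I, i.e. approximately i.i.d. N(0, 1) residuals.
import numpy as np

rng = np.random.default_rng(0)
n = 400

# Correlated "marginal residuals" with an AR(1)-style covariance, rho = 0.6:
rho = 0.6
V = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
e = rng.multivariate_normal(np.zeros(n), V)

# Rotate: Cholesky factor of the inverse marginal variance matrix.
C = np.linalg.cholesky(np.linalg.inv(V))
r = C.T @ e          # rotated residuals, approximately N(0, I)

# The ECDF of r can now be plotted against the standard normal CDF
# (sorted values of r vs. their empirical quantiles) as a fit diagnostic.
```

In a real application V is replaced by the model's *estimated* marginal covariance, which is precisely why the paper characterizes the ECDF's stochastic limit and supplies a bootstrap rather than relying on the exact-normal argument used in this simulation.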