884 results for Data Streams Distribution


Relevance:

40.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

40.00%

Publisher:

Abstract:

Rain gardens are an important tool for reducing the amount of stormwater runoff, and the accompanying pollutants, that enters a city's streams and lakes and degrades their water quality. This thesis project analyzed the number of rain gardens installed through the City of Lincoln, Nebraska Watershed Management's Rain Garden Water Quality Project in distance intervals of one-eighth mile from streams and lakes. The data show the distribution of these rain gardens in relation to streams and lakes and are used to ask whether proximity to streams and lakes is a factor in homeowners' decisions to install rain gardens. ArcGIS was used to create a layered map giving the number of houses with rain gardens in each one-eighth-mile distance increment from the city's streams and lakes. The total area, the number of house parcels, and the type and location of each parcel were also determined for comparison across the distance intervals. The study revealed that fifty-eight percent of rain gardens were installed within a quarter mile of a stream or lake (an area covering 60% of the city and including 58.5% of the city's house parcels), and that eighty percent were installed within three-eighths of a mile (an area covering 75% of the city and 78.5% of the city's house parcels). All parcels in the city lie within one mile of a stream or lake. Taken alone, the number of project houses per distance interval suggested that proximity to a stream or lake influenced people's decisions to install rain gardens. However, once the counts are normalized by the number of house parcels available in each interval, proximity disappears as a factor in project participation.
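The thesis's key normalization step can be sketched numerically from the cumulative percentages quoted above (a minimal illustration; the band boundaries and variable names are ours, not the thesis's):

```python
# Compare each cumulative share of installed rain gardens against the
# cumulative share of house parcels available at the same distance.
# A ratio near 1 means installations simply track parcel availability,
# i.e. proximity adds no extra pull once availability is controlled for.
bands_miles = [0.25, 0.375, 1.0]     # distance from nearest stream/lake
garden_share = [0.58, 0.80, 1.00]    # share of project rain gardens (abstract)
parcel_share = [0.585, 0.785, 1.00]  # share of the city's house parcels (abstract)

for d, g, p in zip(bands_miles, garden_share, parcel_share):
    print(f"within {d:5.3f} mi: garden share / parcel share = {g / p:.3f}")
```

Ratios of roughly 0.99 and 1.02 for the first two bands are consistent with the abstract's conclusion that proximity disappears as a factor once parcel availability is taken into account.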

Relevance:

40.00%

Publisher:

Abstract:

The Amazonian lowlands include large patches of open vegetation that contrast sharply with the rainforest, and the origin of these patches has been debated. This study focuses on a large area of open vegetation in northern Brazil, where δ13C and, in some instances, C/N analyses of the organic matter preserved in late Quaternary sediments were used to reconstruct floristic composition over time. The main goal was to determine when the modern open vegetation began to develop in this area. The δ13C data derived from nine cores range from -32.2 to -19.6 parts per thousand, with nearly 60% of values above -26.5 parts per thousand. The most enriched values were detected only in ecotone and open vegetated areas. The development of open vegetation communities was asynchronous, with estimated onset ages between 6400 and 3000 cal a BP. This suggests that the origin of the studied patches of open vegetation may be linked to the sedimentary dynamics of a late Quaternary megafan system: as sedimentation ended, this vegetation type became established over the megafan surface. In addition, the data presented here show that the presence of C4 plants must be used carefully as a proxy for dry paleoclimatic episodes in Amazonian areas. Copyright (c) 2012 John Wiley & Sons, Ltd.
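The use of δ13C as a C3/C4 vegetation proxy can be sketched as a simple classifier. The -26.5 ‰ reference value comes from the abstract above; the -22 ‰ C4 cutoff is an assumed, literature-style threshold for illustration only, not one stated by the study:

```python
def classify_d13c(d13c_permil: float) -> str:
    """Rough source classification for sedimentary organic matter.

    The -26.5 permil boundary echoes the abstract; the -22 permil
    C4 cutoff is illustrative only (thresholds vary by study).
    """
    if d13c_permil <= -26.5:
        return "C3-dominated (forest-like)"
    if d13c_permil >= -22.0:
        return "C4-dominated (open, grassy)"
    return "mixed C3/C4"

# Endpoints of the range reported for the nine cores: -32.2 to -19.6 permil.
for value in (-32.2, -25.0, -19.6):
    print(value, "->", classify_d13c(value))
```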

Relevance:

40.00%

Publisher:

Abstract:

The beta-Birnbaum-Saunders (Cordeiro and Lemonte, 2011) and Birnbaum-Saunders (Birnbaum and Saunders, 1969a) distributions have been used quite effectively to model lifetime data and failure times of materials subject to fatigue. We define the log-beta-Birnbaum-Saunders distribution as the distribution of the logarithm of a beta-Birnbaum-Saunders random variable, and derive explicit expressions for its generating function and moments. We propose a new log-beta-Birnbaum-Saunders regression model that can be applied to censored data and used effectively in survival analysis. We obtain maximum likelihood estimates of the model parameters for censored data and investigate influence diagnostics. The new location-scale regression model is also modified to allow for the possibility that long-term survivors are present in the data. Its usefulness is illustrated by means of two real data sets. (C) 2011 Elsevier B.V. All rights reserved.
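A minimal simulation sketch of the construction: a beta-Birnbaum-Saunders variate can be generated as the Birnbaum-Saunders quantile of a Beta(a, b) draw (the usual beta-G device), and its logarithm then follows the log-beta-Birnbaum-Saunders law. Parameter names and values are ours; this is not the authors' code:

```python
import math
import random
from statistics import NormalDist

def bs_quantile(u: float, alpha: float, beta: float) -> float:
    """Birnbaum-Saunders quantile:
    Q(u) = (beta/4) * (alpha*z + sqrt((alpha*z)^2 + 4))^2,
    where z is the standard-normal quantile of u."""
    z = NormalDist().inv_cdf(u)
    return (beta / 4.0) * (alpha * z + math.sqrt((alpha * z) ** 2 + 4.0)) ** 2

def rvs_log_beta_bs(a, b, alpha, beta, n, rng):
    """Y = log X with X = Q_BS(V), V ~ Beta(a, b): the beta-G construction."""
    return [math.log(bs_quantile(rng.betavariate(a, b), alpha, beta))
            for _ in range(n)]

rng = random.Random(42)
sample = rvs_log_beta_bs(a=2.0, b=3.0, alpha=0.5, beta=1.0, n=10_000, rng=rng)
print(min(sample), max(sample))  # the support is the whole real line
```

A quick sanity check on any implementation: at u = 0.5 the quantile equals beta, since the Birnbaum-Saunders median is its scale parameter.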

Relevance:

40.00%

Publisher:

Abstract:

This research reports liquid-liquid equilibrium data for the system lard (swine fat) + cis-9-octadecenoic acid (oleic acid) + ethanol + water at 318.2 K, together with their correlation by the nonrandom two-liquid (NRTL) and universal quasichemical (UNIQUAC) activity-coefficient models, which yielded global deviations of 0.41 % and 0.53 %, respectively. Additional equilibrium experiments were performed to obtain cholesterol partition (or distribution) coefficients and thereby assess the suitability of ethanol plus water for reducing the cholesterol content of lard. The partition experiments covered free fatty acid (commercial oleic acid) concentrations from (0 to 20) mass % and water contents in the solvent from (0 to 18) mass %. The percentage of free fatty acids initially present in the lard had only a slight effect on the distribution of cholesterol between the phases. Furthermore, the distribution coefficients decreased as water was added to the ethanol; that is, adding water diminished the solvent's capability to remove cholesterol.
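For a binary mixture the NRTL activity-coefficient expressions used in such correlations take a closed form. The sketch below uses illustrative parameter values, not the tau and alpha values fitted for the lard + oleic acid + ethanol + water system:

```python
import math

def nrtl_binary(x1, tau12, tau21, alpha=0.3):
    """Binary NRTL activity coefficients (gamma1, gamma2).

    tau12, tau21 are dimensionless interaction parameters; alpha is
    the non-randomness factor. All values here are illustrative."""
    x2 = 1.0 - x1
    G12 = math.exp(-alpha * tau12)
    G21 = math.exp(-alpha * tau21)
    ln_g1 = x2 ** 2 * (tau21 * (G21 / (x1 + x2 * G21)) ** 2
                       + tau12 * G12 / (x2 + x1 * G12) ** 2)
    ln_g2 = x1 ** 2 * (tau12 * (G12 / (x2 + x1 * G12)) ** 2
                       + tau21 * G21 / (x1 + x2 * G21) ** 2)
    return math.exp(ln_g1), math.exp(ln_g2)

g1, g2 = nrtl_binary(x1=0.3, tau12=1.2, tau21=0.8)
print(g1, g2)
```

A standard consistency check: at x1 = 1 the model returns gamma1 = 1 (pure-component limit), and at infinite dilution ln gamma1 reduces to tau21 + tau12*G12.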

Relevance:

40.00%

Publisher:

Abstract:

The Ecosystem Approach to Fisheries represents the most recent research line in the international context, directing attention both to the community as a whole and to the identification and protection of all the "critical habitats" in which marine resources complete their life cycles. Using data from trawl surveys performed in the Northern and Central Adriatic from 1996 to 2010, this study provides the first attempt to appraise the status of the whole demersal community. It considered not only fishery target species but also by-catch and discard species, using a suite of biological indicators at both the population and multi-species level to give a global picture of the status of the demersal system. The study underlined the recent decline of species of great importance to the Adriatic fishery; an adverse impact on catches of these species is expected in the coming years, since minimum numbers of recruits were also recorded recently. Both excessive exploitation and environmental factors affected the availability of resources. Moreover, the distribution and nursery areas of the most important resources were pinpointed by means of geostatistical methods. The geospatial analysis also confirmed the presence of relevant recruitment areas in the Northern and Central Adriatic for several commercial species, as reported in the literature. The morphological and oceanographic features, the substantial river inflows, and the mosaic pattern of biocenoses with different food availability together determined the location of the observed nursery areas.

Relevance:

40.00%

Publisher:

Abstract:

Data Distribution Management (DDM) is a core part of the High Level Architecture (HLA) standard; its goal is to optimize the resources that simulation environments use to exchange data. It must filter and match the information generated during a simulation so that each federate (a simulation entity) receives only the information it needs. This matching must be done quickly and accurately in order to obtain good performance and avoid transmitting irrelevant data, which would otherwise quickly saturate network resources. The main topic of this thesis is the implementation of a super partes DDM testbed that evaluates the quality of DDM approaches of all kinds: it supports both region-based and grid-based approaches, and it can accommodate other, as yet unknown, methods as well. It ranks approaches by three factors: execution time, memory usage, and distance from the optimal solution. A prearranged set of instances is already available, and instances can also be created from user-provided parameters. The thesis is structured as follows. We begin by introducing what DDM and HLA are and what they do in detail. The first chapter then describes the state of the art, providing an overview of the best-known resolution approaches and pseudocode for the most interesting ones. The third chapter describes how the implemented testbed is structured. The fourth chapter presents and compares the results obtained from executing the four approaches we implemented. The result of the work described in this thesis can be downloaded from SourceForge at the following link: https://sourceforge.net/projects/ddmtestbed/. It is licensed under the GNU General Public License version 3.0 (GPLv3).
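The baseline any DDM testbed must include is the brute-force approach: test every update extent against every subscription extent. A one-dimensional sketch (real HLA regions are multidimensional, and these interval values are invented for illustration):

```python
def overlaps(a, b):
    """Do closed intervals a = (lo, hi) and b = (lo, hi) intersect?"""
    return a[0] <= b[1] and b[0] <= a[1]

def brute_force_match(updates, subscriptions):
    """Return every (update index, subscription index) pair that overlaps.
    O(n * m) comparisons: the yardstick smarter algorithms must beat."""
    return [(i, j)
            for i, u in enumerate(updates)
            for j, s in enumerate(subscriptions)
            if overlaps(u, s)]

updates = [(0, 4), (10, 12)]
subscriptions = [(3, 8), (11, 20)]
print(brute_force_match(updates, subscriptions))  # [(0, 0), (1, 1)]
```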

Relevance:

40.00%

Publisher:

Abstract:

Data Distribution Management (DDM) is a component of the High Level Architecture standard. Its task is to detect the overlaps between update and subscription extents efficiently. This thesis discusses why a framework is needed and why it was implemented. Testing algorithms under equal conditions for a fair comparison, libraries that ease the implementation of new algorithms, and automation of the build phase were the fundamental motivations for starting the framework. The driving one was the observation, made while surveying the scientific literature on DDM and its algorithms, that every article generated its own ad hoc data for testing; a goal of this framework is therefore to make it possible to compare algorithms on a consistent data set. It was decided to test the framework on the Cloud, to obtain a more reliable comparison between executions by different users. Two of the most widely used services were considered, Amazon AWS EC2 and Google App Engine; the advantages and disadvantages of each are presented, together with the reason Google App Engine was chosen. Four algorithms were developed: Brute Force, Binary Partition, Improved Sort, and Interval Tree Matching. Tests were run on execution time and peak memory usage. The results show that Interval Tree Matching and Improved Sort are the most efficient. All tests were performed on the sequential versions of the algorithms, so a further reduction in execution time may be possible for the Interval Tree Matching algorithm.
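A sort-based sweep gives the flavor of the more efficient algorithms compared above (a generic sweep-line sketch, not the thesis's exact Improved Sort or Interval Tree Matching code): sort all interval endpoints once, then sweep, maintaining the sets of currently open extents so each overlapping pair is reported exactly once.

```python
def sweep_match(updates, subscriptions):
    """Overlapping (update, subscription) index pairs via one endpoint sweep.
    Starts sort before ends at equal coordinates, so touching closed
    intervals still count as overlapping."""
    events = []  # (coordinate, 0=start/1=end, kind, index)
    for i, (lo, hi) in enumerate(updates):
        events += [(lo, 0, "U", i), (hi, 1, "U", i)]
    for j, (lo, hi) in enumerate(subscriptions):
        events += [(lo, 0, "S", j), (hi, 1, "S", j)]
    events.sort()
    open_u, open_s, pairs = set(), set(), set()
    for _, is_end, kind, idx in events:
        if is_end:
            (open_u if kind == "U" else open_s).discard(idx)
        elif kind == "U":
            pairs.update((idx, j) for j in open_s)  # pair with open subscriptions
            open_u.add(idx)
        else:
            pairs.update((i, idx) for i in open_u)  # pair with open updates
            open_s.add(idx)
    return sorted(pairs)

print(sweep_match([(0, 4), (10, 12)], [(3, 8), (11, 20)]))  # [(0, 0), (1, 1)]
```

Sorting costs O(n log n), though in the worst case the number of overlapping pairs, and hence the output itself, is still quadratic.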

Relevance:

40.00%

Publisher:

Abstract:

To characterize the zonal distribution of three-dimensional (3D) T1 mapping in the hip joint of asymptomatic adult volunteers.

Relevance:

40.00%

Publisher:

Abstract:

Constructing a 3D surface model from sparse-point data is a nontrivial task. Here, we report an accurate and robust approach for reconstructing a surface model of the proximal femur from sparse-point data and a dense-point distribution model (DPDM). The problem is formulated as a three-stage optimal estimation process. The first stage, affine registration, iteratively estimates a scale and a rigid transformation between the mean surface model of the DPDM and the sparse input points. The estimation results of the first stage are used to establish point correspondences for the second stage, statistical instantiation, which stably instantiates a surface model from the DPDM using a statistical approach. This surface model is then fed to the third stage, kernel-based deformation, which further refines the surface model. Outliers are handled by consistently employing the least trimmed squares (LTS) approach with a roughly estimated outlier rate in all three stages. If an optimal value of the outlier rate is preferred, we propose a hypothesis-testing procedure to estimate it automatically. We present our validation using four experiments: (1) a leave-one-out experiment; (2) an experiment evaluating the present approach for handling pathology; (3) an experiment evaluating the present approach for handling outliers; and (4) an experiment reconstructing surface models of seven dry cadaver femurs from clinically relevant data, both without and with added noise. Our validation results demonstrate the robust performance of the present approach in handling outliers, pathology, and noise. An average 95th-percentile error of 1.7-2.3 mm was found when the present approach was used to reconstruct surface models of the cadaver femurs from sparse-point data with noise added.
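The least trimmed squares idea applied in all three stages can be sketched in one dimension: fit, rank squared residuals, and refit using only the (1 - outlier rate) fraction with the smallest residuals. This toy location estimate stands in for the paper's registration residuals; the data values are invented:

```python
def lts_location(values, outlier_rate=0.2, iters=10):
    """1-D least trimmed squares location estimate.

    Each iteration keeps the (1 - outlier_rate) fraction of points with
    the smallest squared residuals and refits the mean on them."""
    keep = len(values) - int(outlier_rate * len(values))
    estimate = sum(values) / len(values)          # start from the plain mean
    for _ in range(iters):
        trimmed = sorted(values, key=lambda v: (v - estimate) ** 2)[:keep]
        estimate = sum(trimmed) / keep            # refit on the retained inliers
    return estimate

data = [1.0, 1.1, 0.9, 1.05, 0.95, 8.0, 9.5]     # two gross outliers
print(lts_location(data, outlier_rate=0.3))
```

With a 30% outlier rate the two gross outliers are trimmed away and the estimate settles near 1.0, whereas the plain mean of the same data is above 3; the paper's hypothesis-testing procedure addresses how to pick that rate when it is not known in advance.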