907 results for Large-scale system


Relevance:

100.00%

Publisher:

Abstract:

Concerns over global change and its effect on coral reef survivorship have highlighted the need for long-term datasets and proxy records to interpret environmental trends and inform policymakers. Citizen science programs have proven to be a valid method for collecting data, reducing financial and time costs for institutions. This study is based on the analysis of data collected by recreational divers; its main purpose is to evaluate changes in the state of coral reef biodiversity in the Red Sea over a long period and to validate the volunteer-based monitoring method. Volunteer recreational divers completed a questionnaire after each dive, recording the presence of 72 animal taxa and negative reef conditions. Comparisons were made between records from volunteers and independent records from a marine biologist who performed the same dive at the same time. A total of 500 volunteers were tested in 78 validation trials. The resulting values of accuracy, reliability and similarity appear comparable to those achieved by volunteer divers surveying fixed transects in other projects, or in community-based terrestrial monitoring. In total, 9,301 recreational divers participated in the monitoring program, completing 23,059 survey questionnaires over a 5-year period. The volunteer-sightings-based index showed significant differences between the geographical areas. The area of Hurghada is characterized by a medium-low biodiversity index, its reefs heavily damaged by uncontrolled anthropogenic exploitation; coral reefs in the Ras Mohammed National Park at Sharm el Sheikh, conversely, showed a high biodiversity index. The detected pattern appears to be correlated with the conservation measures adopted. In our experience and that of other research institutes, citizen science can complement conventional methods and significantly reduce costs and time. By involving recreational divers we were able to build a large dataset covering a wide geographic area. The main limitation remains the difficulty of obtaining a homogeneous spatial sampling distribution.
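As a concrete illustration of the validation step, the sketch below compares one volunteer's presence/absence checklist against the biologist's reference record from the same dive and computes simple agreement scores. The metric definitions and the example taxa are illustrative assumptions, not the project's actual protocol or taxon list.

```python
# A minimal sketch of one validation trial, assuming presence/absence
# checklists over a fixed taxon list. The metrics and example taxa are
# illustrative, not the project's actual protocol.

def agreement_metrics(volunteer, expert, taxa):
    """Fraction of taxa scored identically, plus Jaccard similarity."""
    matches = sum((t in volunteer) == (t in expert) for t in taxa)
    both = len(volunteer & expert)      # taxa both observers recorded
    either = len(volunteer | expert)    # taxa at least one recorded
    accuracy = matches / len(taxa)
    similarity = both / either if either else 1.0
    return accuracy, similarity

taxa = ["butterflyfish", "grouper", "moray eel", "napoleon wrasse",
        "sea turtle", "lionfish"]
volunteer = {"butterflyfish", "grouper", "lionfish"}   # diver's record
expert = {"butterflyfish", "grouper", "moray eel"}     # biologist's record
print(agreement_metrics(volunteer, expert, taxa))      # (0.666..., 0.5)
```

Aggregating such per-dive scores over the 78 trials is what would allow accuracy and similarity to be compared with transect-based projects.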

Relevance:

100.00%

Publisher:

Abstract:

The mass estimation of galaxy clusters is a crucial problem for modern cosmology and can be approached with several different techniques. In this work we discuss a new method to measure the mass of galaxy clusters, connecting the gravitational potential of the cluster with the kinematical properties of its surroundings. We explore the dynamics of the structures located in the region outside the virialized cluster: we identify groups of galaxies, such as sheets or filaments, in the cluster's outer region and model how the cluster's gravitational potential perturbs the motion of these structures away from the Hubble flow. The identification is done in redshift space, where we look for overdensities with a filamentary shape. We then adopt a mean radial velocity profile that simulations indicate is a nearly universal trend, and fit it to the radial infall velocities of the overdensities found. The method has been tested on several cluster-size haloes from cosmological N-body simulations, giving results in very good agreement with the true virial masses of the haloes and the orientations of the sheets. We then applied the method to the Coma cluster and, in this case too, found good agreement with previous estimates. A mass discrepancy can be noticed between sheets with different alignments with respect to the center of the cluster. This difference can be used to constrain the shape of the cluster and to demonstrate that spherical symmetry is not always a valid assumption: if the cluster is not spherical, sheets oriented along different axes feel slightly different gravitational potentials, and so yield different masses in the analysis described above. This estimate, too, has been tested on cosmological simulations and then applied to Coma, showing the actual non-sphericity of this cluster.
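A toy version of the profile-fitting step might look like the sketch below. The power-law form of the infall profile, its slope, the assumed virial radius, and the synthetic data are all placeholder assumptions standing in for the simulation-calibrated profile used in the work.

```python
import numpy as np
from scipy.optimize import curve_fit

G = 4.30e-6  # gravitational constant in kpc * (km/s)^2 / Msun

def infall_profile(r, v_v, r_v=2000.0, slope=0.4):
    """Illustrative power-law radial infall velocity, in km/s.

    r and r_v in kpc. The functional form and slope are assumptions
    for this sketch; the actual profile is calibrated on simulations.
    """
    return -v_v * (r / r_v) ** (-slope)

# Synthetic radial velocities of galaxies in one sheet outside the
# virial radius (distances in kpc, velocities in km/s).
rng = np.random.default_rng(1)
r_obs = np.linspace(3000.0, 8000.0, 12)
v_obs = infall_profile(r_obs, v_v=1200.0) + rng.normal(0, 60, r_obs.size)

# Fit the characteristic velocity scale of the profile to the sheet.
(v_fit,), _ = curve_fit(lambda r, v: infall_profile(r, v),
                        r_obs, v_obs, p0=[1000.0])

# A crude mass estimate, up to an O(1) calibration factor: M ~ v_v^2 r_v / G.
mass = v_fit**2 * 2000.0 / G
print(f"v_v = {v_fit:.0f} km/s  ->  M ~ {mass:.2e} Msun")
```

Fitting the same profile separately to sheets with different orientations is what would expose the mass discrepancy, and hence the non-sphericity, described above.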

Relevance:

100.00%

Publisher:

Abstract:

Computing the weighted geometric mean of large sparse matrices is an operation that rapidly becomes intractable as the size of the matrices involved grows. However, if we are not interested in computing the matrix function itself, but only its product with a vector, the problem becomes simpler, and there is a chance of solving it even when the matrix mean itself would be impossible to compute. Our interest is motivated by the fact that this calculation has practical applications, related to the preconditioning of some operators arising in the domain decomposition of elliptic problems. In this thesis, we explore how such a computation can be performed efficiently. First, we exploit the properties of the weighted geometric mean and find several equivalent ways to express it through real powers of a matrix. We then focus our attention on matrix powers and examine how well-known techniques can be adapted to the solution of the problem at hand. In particular, we consider two broad families of approaches for the computation of f(A) v, namely quadrature formulae and Krylov subspace methods, and generalize them to the pencil case f(A\B) v. Finally, we provide an extensive experimental evaluation of the proposed algorithms and assess how convergence speed and execution time are influenced by characteristics of the input matrices. Our results suggest that a few such characteristics have a bearing on performance and that, although there is no single best choice in general, knowing the conditioning and the sparsity of the arguments beforehand can considerably help in choosing the best strategy to tackle the problem.
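For the Krylov family, a minimal sketch of the idea is an Arnoldi (polynomial Krylov) approximation of f(A) v, shown below for a real matrix power. The function name, breakdown tolerance, and step count are my own choices for illustration; the pencil case f(A\B) v would additionally require a solve with A at each step, which this toy version omits.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def arnoldi_fAv(A, v, f, m=30):
    """Approximate f(A) @ v with m steps of the Arnoldi process.

    Projects A onto the Krylov subspace span{v, Av, ..., A^(m-1) v},
    evaluates f on the small m-by-m projected matrix, and lifts back:
    f(A) v  ~  beta * V_m f(H_m) e_1.
    """
    n = v.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    k = m
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # happy breakdown: invariant subspace
            k = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    Hk = H[:k, :k]
    y = f(Hk)[:, 0]                          # f(H_k) e_1
    return beta * (V[:, :k] @ y)

# Example: A^(1/2) v for a well-conditioned SPD tridiagonal matrix,
# checked against a dense computation.
n = 200
A = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
v = np.ones(n)
approx = arnoldi_fAv(A, v, lambda M: fractional_matrix_power(M, 0.5), m=25)
exact = fractional_matrix_power(A, 0.5) @ v
print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```

The appeal of this family is that A enters only through matrix-vector products, so sparsity is exploited directly; how fast the small projected problem converges depends on the conditioning of the arguments, which is exactly the kind of dependence the experimental evaluation measures.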