877 results for Cosmology - Large Scale Structure - Massive Neutrino - Bispectrum
Structure and dynamics of the Shapley Supercluster - Velocity catalogue, general morphology and mass
Abstract:
We present results of our wide-field redshift survey of galaxies in a 285 square degree region of the Shapley Supercluster (SSC), based on a set of 10 529 velocity measurements (including 1201 new ones) on 8632 galaxies obtained from various telescopes and from the literature. Our data reveal that the main plane of the SSC (v ≈ 14 500 km s⁻¹) extends further than previous estimates, filling the whole extent of our survey region of 12° by 30° on the sky (30 × 75 h⁻¹ Mpc). There is also a connecting structure associated with the slightly nearer Abell 3571 cluster complex (v ≈ 12 000 km s⁻¹). These galaxies seem to link two previously identified sheets of galaxies and establish a connection with a third one at v = 15 000 km s⁻¹ near RA = 13ʰ. They also tend to fill the gap of galaxies between the foreground Hydra-Centaurus region and the more distant SSC. In the velocity range of the Shapley Supercluster (9000 km s⁻¹ < cz < 18 000 km s⁻¹), we found redshift-space overdensities of galaxies with b_J < 17.5 of ≃ 5.4 over the 225 square degree central region and ≃ 3.8 in a 192 square degree region excluding rich clusters. Over the large region of our survey, we find that the intercluster galaxies make up 48 per cent of the observed galaxies in the SSC region and, accounting for the different completeness, may contribute nearly twice as much mass as the cluster galaxies. In this paper, we discuss the completeness of the velocity catalogue, the morphology of the supercluster, the global overdensity, and some properties of the individual galaxy clusters in the supercluster.
Abstract:
We studied superclusters of galaxies in a volume-limited sample extracted from the Sloan Digital Sky Survey Data Release 7 and from mock catalogues based on a semi-analytical model of galaxy evolution in the Millennium Simulation. A density field method was applied to a sample of galaxies brighter than M_r = −21 + 5 log h₁₀₀ to identify superclusters, taking into account selection and boundary effects. In order to evaluate the influence of the threshold density, we chose two thresholds: the first maximizes the number of objects (D1), and the second constrains the maximum supercluster size to ∼120 h⁻¹ Mpc (D2). We performed a morphological analysis, using Minkowski functionals, based on a parameter that increases monotonically from filaments to pancakes. An anticorrelation was found between supercluster richness (as well as total luminosity and size) and the morphological parameter, indicating that filamentary structures tend to be richer, larger and more luminous than pancakes in both observed and mock catalogues. We also used the mock samples to compare supercluster morphologies identified in position and velocity space, concluding that our morphological classification is not biased by peculiar velocities. Monte Carlo simulations designed to investigate the reliability of our results with respect to random fluctuations show that these results are robust. Our analysis indicates that filaments and pancakes present different luminosity and size distributions.
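The abstract does not spell out its morphological parameter; a common construction in this literature builds "shapefinders" from the Minkowski functionals and combines them into planarity and filamentarity. A minimal sketch, assuming the standard Sahni-Sathyaprakash-Shandarin definitions (our illustration, not necessarily the exact parameter used in the paper):

```python
import math

def shapefinders(V, S, C):
    """Shapefinders from the first three Minkowski functionals of a body:
    V = volume, S = surface area, C = integrated mean curvature.
    Returns (planarity, filamentarity); both vanish for a sphere."""
    T = 3.0 * V / S          # thickness
    B = S / C                # breadth
    L = C / (4.0 * math.pi)  # length
    planarity = (B - T) / (B + T)
    filamentarity = (L - B) / (L + B)
    return planarity, filamentarity

# Sanity check: a sphere of radius r has V = 4*pi*r^3/3, S = 4*pi*r^2,
# C = 4*pi*r, so T = B = L = r and both shapefinders are zero.
r = 2.0
P, F = shapefinders(4 * math.pi * r**3 / 3, 4 * math.pi * r**2, 4 * math.pi * r)
```

A pancake-like structure has large planarity and small filamentarity; a filament the reverse, which is the monotonic filament-to-pancake behaviour the abstract describes.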
Abstract:
Functionally relevant large-scale brain dynamics operates within the framework imposed by anatomical connectivity and by time delays due to finite transmission speeds. To gain insight into the reliability and comparability of large-scale brain network simulations, we investigate the effects of variations in the anatomical connectivity. Two different sets of detailed global connectivity structures are explored: the first extracted from the CoCoMac database and rescaled to the spatial extent of the human brain, the second derived from white-matter tractography applied to diffusion spectrum imaging (DSI) for a human subject. We use the combination of graph-theoretical measures of the connection matrices and numerical simulations to explicate the importance of both connectivity strength and delays in shaping dynamic behaviour. Our results demonstrate that the brain dynamics derived from the CoCoMac database are more complex and biologically more realistic than those based on the DSI database. We propose that the reason for this difference is the absence of directed weights in the DSI connectivity matrix.
Abstract:
Evidence is presented of widespread changes in structure and species composition between the 1980s and 2003–2004 from surveys of 249 British broadleaved woodlands. Structural components examined include canopy cover, vertical vegetation profiles, field-layer cover and deadwood abundance. Woods were located in 13 geographical localities and the patterns of change were examined for each locality as well as across all woods. Changes were not uniform across the localities; overall, there were significant decreases in canopy cover and increases in sub-canopy (2–10 m) cover. Changes in 0.5–2 m vegetation cover showed strong geographic patterns, increasing in western localities but declining or showing no change in eastern localities. There were significant increases in canopy ash Fraxinus excelsior and decreases in oak Quercus robur/petraea. Shrub-layer ash and honeysuckle Lonicera periclymenum increased, while birch Betula spp., hawthorn Crataegus monogyna and hazel Corylus avellana declined. Within the field layer, both bracken Pteridium aquilinum and herbs increased. Overall, deadwood generally increased. Changes were consistent with reductions in active woodland management and changes in grazing and browsing pressure. These findings have important implications for sustainable active management of British broadleaved woodlands to meet silvicultural and biodiversity objectives.
Abstract:
We compare the characteristics of synthetic European droughts generated by the HiGEM¹ coupled climate model run with present-day atmospheric composition with observed drought events extracted from the CRU TS3 data set. The results demonstrate consistency in both the rate of drought occurrence and the spatiotemporal structure of the events. Estimates of the probability density functions for event area, duration and severity are shown to be similar with confidence > 90%. Encouragingly, HiGEM is shown to replicate the extreme tails of the observed distributions and thus the most damaging European drought events. The soil moisture state is shown to play an important role in drought development. Once a large-scale drought has been initiated, it is found to be 50% more likely to continue if the local soil moisture is below the 40th percentile. In response to increased concentrations of atmospheric CO₂, the modelled droughts are found to increase in duration, area and severity. The drought response can be largely attributed to temperature-driven changes in relative humidity.
¹ HiGEM is based on the latest climate configuration of the Met Office Hadley Centre Unified Model (HadGEM1), with the horizontal resolution increased to 1.25 × 0.83 degrees in longitude and latitude in the atmosphere and 1/3 × 1/3 degrees in the ocean.
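The "50% more likely to continue" statement is a conditional continuation probability. A minimal sketch of how such an estimate can be formed from event time series (hypothetical helper name and toy boolean data, not the CRU or HiGEM series):

```python
import numpy as np

def continuation_prob(drought, condition):
    """P(drought at t+1 | drought at t AND condition at t), estimated
    from boolean time series.  Illustrative helper, not from the paper."""
    active = drought[:-1] & condition[:-1]
    if active.sum() == 0:
        return float("nan")
    return (drought[1:] & active).mean() / active.mean()

# Toy series: 1 = month in drought, 1 = soil moisture below 40th percentile.
drought = np.array([0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0], dtype=bool)
dry     = np.array([0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0], dtype=bool)

p_dry = continuation_prob(drought, dry)                  # conditioned on dry soil
p_all = continuation_prob(drought, np.ones_like(dry))    # unconditional
```

In the paper's finding, the conditioned probability exceeds the unconditional one by roughly 50%; in this toy data it is simply larger.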
Abstract:
Redshift-space distortions (RSD) are an apparent anisotropy in the distribution of galaxies due to their peculiar motions. These features are imprinted in the correlation function of galaxies, which describes how these structures are distributed around each other. RSD can be represented by a distortion parameter $\beta$, which is strictly related to the growth of cosmic structures. For this reason, measurements of RSD can be exploited to constrain cosmological parameters, such as the neutrino mass. Neutrinos are neutral subatomic particles that come in three flavours: the electron, the muon and the tau neutrino. Their mass differences can be measured in oscillation experiments. Information on the absolute scale of the neutrino mass can come from cosmology, since neutrinos leave a characteristic imprint on the large-scale structure of the Universe. The aim of this thesis is to provide constraints on the accuracy with which the neutrino mass can be estimated when exploiting measurements of RSD. In particular, we describe how the error on the neutrino mass estimate depends on three fundamental parameters of a galaxy redshift survey: the density of the catalogue, the bias of the sample considered and the volume observed. To do this we make use of the BASICC simulation, from which we extract a series of dark matter halo catalogues characterized by different values of bias, density and volume. These mock data are analysed via a Markov Chain Monte Carlo procedure, in order to estimate the neutrino mass fraction, using the software package CosmoMC, which has been suitably modified. In this way we are able to extract a fitting formula describing our measurements, which can be used to forecast the precision reachable with this kind of observation in future surveys such as Euclid.
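The distortion parameter $\beta$ mentioned above is conventionally $\beta = f/b$, where $b$ is the galaxy bias and the growth rate $f$ is well approximated by $\Omega_m^{\gamma}$ with $\gamma \approx 0.55$ in $\Lambda$CDM. A minimal sketch under that standard approximation (our illustration, not code from the thesis):

```python
def beta(omega_m, bias, gamma=0.55):
    """RSD distortion parameter beta = f / b, with the linear growth rate
    approximated as f ~ Omega_m**gamma (gamma ~ 0.55 for LambdaCDM)."""
    return omega_m**gamma / bias

# Example: Omega_m = 0.3 and a moderately biased tracer, b = 1.5.
b_rsd = beta(0.3, 1.5)
```

More biased samples dilute the apparent anisotropy, which is why the bias of the catalogue enters the thesis's error forecasts directly.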
Abstract:
The function of a protein is generally determined by its three-dimensional (3D) structure. Thus, it would be useful to know the 3D structure of the thousands of protein sequences that are emerging from the many genome projects. To this end, fold assignment, comparative protein structure modeling, and model evaluation were completely automated. As an illustration, the method was applied to the proteins in the Saccharomyces cerevisiae (baker's yeast) genome. It resulted in all-atom 3D models for substantial segments of 1,071 (17%) of the yeast proteins, only 40 of which have had their 3D structure determined experimentally. Of the 1,071 modeled yeast proteins, 236 were clearly related to a protein of known structure for the first time; 41 of these had not previously been characterized at all.
Abstract:
Vela X-1 is the prototype of the class of wind-fed accreting pulsars in high-mass X-ray binaries hosting a supergiant donor. We have analysed in a systematic way 10 years of INTEGRAL data on Vela X-1 (22–50 keV) and found that, outside the X-ray eclipse, the source undergoes several luminosity drops in which the hard X-ray luminosity goes below ∼3 × 10³⁵ erg s⁻¹, becoming undetected by INTEGRAL. These drops in the X-ray flux are usually referred to as 'off-states' in the literature. We have investigated the distribution of these off-states along the ∼8.9 d orbit of Vela X-1, finding that their orbital occurrence displays an asymmetric distribution, with a higher probability of observing an off-state near the pre-eclipse than during the post-eclipse. This asymmetry can be explained by scattering of hard X-rays in a region of ionized wind, able to reduce the source's hard X-ray brightness preferentially near eclipse ingress. We associate this large-scale structure of ionized wind with the photoionization wake produced by the interaction of the supergiant wind with the X-ray emission from the neutron star. We emphasize that this observational result could be obtained thanks to the accumulation of a decade of INTEGRAL data, with observations covering the whole orbit several times, allowing us to detect an asymmetric pattern in the orbital distribution of off-states in Vela X-1.
Abstract:
Cosmic voids are vast, underdense regions emerging between the elements of the cosmic web and dominating the large-scale structure of the Universe. Void number counts and density profiles have been demonstrated to provide powerful cosmological probes. Indeed, thanks to their low-density nature and their very large sizes, voids represent natural laboratories to test alternative dark energy scenarios, modifications of gravity and the presence of massive neutrinos. Despite the increasing use of cosmic voids in cosmology, a commonly accepted definition for these objects has not yet been reached. For this reason, different void finding algorithms have been proposed over the years. Void finder algorithms based on density or geometrical criteria are affected by intrinsic uncertainties. In recent years, new solutions have been explored to face these issues. The most interesting is based on the idea of identifying void positions through the dynamics of the mass tracers, without performing any direct reconstruction of the density field. The goal of this thesis is to provide a performant void finder algorithm based on dynamical criteria. The Back-in-time void finder (BitVF) we present uses tracers as test particles, and their orbits are reconstructed from their actual clustered configuration back to the homogeneous and isotropic distribution expected in the early epochs of the Universe. Once the displacement field is reconstructed, the density field is computed as its divergence, and void centres are identified as local minima of this field. In this thesis work we applied the developed void finding algorithm to simulations. From the resulting void samples we computed different void statistics, comparing the results to those obtained with VIDE, the most popular void finder. BitVF proved able to produce more reliable void samples than VIDE. The BitVF algorithm will be a fundamental tool for precision cosmology, especially with upcoming galaxy surveys.
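The divergence step can be sketched schematically: to first (Zel'dovich) order the density contrast is minus the divergence of the displacement field, and void centres sit at its local minima. A toy illustration on a synthetic outflow field (our sketch, not the actual BitVF implementation):

```python
import numpy as np
from scipy.ndimage import minimum_filter

def void_centres(psi_x, psi_y, psi_z, spacing=1.0):
    """Density contrast delta ~ -div(psi) from a displacement field sampled
    on a regular grid, plus void centres as local minima of delta."""
    div = (np.gradient(psi_x, spacing, axis=0)
           + np.gradient(psi_y, spacing, axis=1)
           + np.gradient(psi_z, spacing, axis=2))
    delta = -div
    # a grid point is a void centre if it is the minimum of its 3x3x3 patch
    minima = delta == minimum_filter(delta, size=3)
    return np.argwhere(minima), delta

# Toy displacement field: Gaussian-damped outflow from the grid centre,
# which should place the deepest underdensity at the centre (4, 4, 4).
coords = np.arange(9) - 4.0
x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
damp = np.exp(-(x**2 + y**2 + z**2) / 8.0)
centres, delta = void_centres(x * damp, y * damp, z * damp)
```

The real algorithm reconstructs the displacements from the clustered tracer positions themselves; here the field is prescribed analytically only to show the divergence-and-minima step.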
Abstract:
The power loss reduction in distribution systems (DSs) is a nonlinear and multiobjective problem. Service restoration in DSs is computationally even harder, since it additionally requires a solution in real time. Both DS problems are computationally complex. For large-scale networks, the usual problem formulation has thousands of constraint equations. The node-depth encoding (NDE) enables a modeling of DS problems that eliminates several constraint equations from the usual formulation, making the problem solution simpler. On the other hand, a multiobjective evolutionary algorithm (EA) based on subpopulation tables adequately models several objectives and constraints, enabling a better exploration of the search space. The combination of the multiobjective EA with the NDE (MEAN) results in the proposed approach for solving DS problems for large-scale networks. Simulation results show that MEAN is able to find adequate restoration plans for a real DS with 3860 buses and 632 switches in a running time of 0.68 s. Moreover, MEAN exhibits a running time that is sublinear in the system size. Tests with networks ranging from 632 to 5166 switches indicate that MEAN can find network configurations corresponding to a power loss reduction of 27.64% for very large networks while requiring relatively low running time.
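The node-depth encoding is only named above; its core idea is to store each tree of the network forest as a DFS preorder list of (node, depth) pairs, so that any subtree occupies a contiguous slice and can be located or transferred without re-checking radiality constraints. A minimal sketch of that representation (our illustration with hypothetical helper names, not the authors' code):

```python
def node_depth_encoding(adj, root):
    """Encode a tree as a list of (node, depth) pairs in DFS preorder --
    the node-depth encoding (NDE) idea."""
    out, stack, seen = [], [(root, 0)], {root}
    while stack:
        node, depth = stack.pop()
        out.append((node, depth))
        for nxt in reversed(adj.get(node, [])):
            if nxt not in seen:
                seen.add(nxt)
                stack.append((nxt, depth + 1))
    return out

def subtree_slice(nde, i):
    """Half-open index range [i, j) of the subtree rooted at position i:
    every following pair with strictly greater depth belongs to it."""
    d = nde[i][1]
    j = i + 1
    while j < len(nde) and nde[j][1] > d:
        j += 1
    return i, j

# A small radial feeder: node 0 is the substation.
adj = {0: [1, 4], 1: [2, 3], 4: [5]}
nde = node_depth_encoding(adj, 0)
```

Because a subtree is one contiguous slice, the EA's restoration operators can move a de-energized branch to another feeder by splicing slices, which is what removes many explicit constraint equations from the formulation.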
Abstract:
The complex interactions among endangered ecosystems, landowners' interests, and different models of land tenure and use constitute an important series of challenges for those seeking to maintain and restore biodiversity and augment the flow of ecosystem services. Over the past 10 years, we have developed a data-based approach to address these challenges and to achieve medium- and large-scale ecological restoration of riparian areas on private lands in the state of São Paulo, southeastern Brazil. Given the varying motivations for ecological restoration, the location of riparian areas within landholdings, and the environmental zoning of different riparian areas, best-practice restoration methods were developed for each situation. A total of 32 ongoing projects, covering 527,982 ha, were evaluated in large sugarcane farms and small mixed farms, and six different restoration techniques have been developed to help upscale the effort. Small mixed farms had higher portions of land requiring protection as riparian areas (13.3%), and lower forest cover of riparian areas (18.3%), than large sugarcane farms (10.0% and 36.9%, respectively). In both types of farms, forest fragments required some degree of restoration. Historical anthropogenic degradation has compromised forest ecosystem structure and functioning, despite the high diversity of native tree and shrub species. Notably, land use patterns in riparian areas differed markedly. Large sugarcane farms had higher portions of riparian areas occupied by highly mechanized agriculture, abandoned fields, and anthropogenic wet fields created by siltation in water courses. In contrast, in small mixed crop farms, low- or non-mechanized agriculture and pasturelands were predominant.
Despite these differences, plantations of native tree species covering the entire area were by far the main restoration method needed in both large sugarcane farms (76.0%) and small mixed farms (92.4%), in view of the low resilience of target sites, reduced forest cover, and high fragmentation, all of which limit the potential for autogenic restoration. We propose that plantations should be carried out with a high diversity of native species in order to create biologically viable restored forests, and to assist long-term biodiversity persistence at the landscape scale. Finally, we propose strategies to integrate the political, socio-economic and methodological aspects needed to upscale restoration efforts in tropical forest regions throughout Latin America and elsewhere.
Abstract:
This paper describes the communication stack of the REMPLI system: a structure using power lines and IP-based networks for communication, data acquisition and control of energy distribution and consumption. It is furthermore prepared to use alternative communication media such as GSM or analog modem connections. The REMPLI system provides communication services for existing applications, namely automated meter reading, energy billing and domotic applications. The communication stack, consisting of the physical, network, transport and application layers, is described, as well as the communication services provided by the system. We show how the peculiarities of power-line communication influence the design of the communication stack, introducing requirements to use the limited bandwidth efficiently, optimize traffic and implement fair use of the communication medium among the many communication partners.
Abstract:
The benefits of long-term monitoring have drawn considerable attention in healthcare. Since the acquired data provide an important source of information to clinicians and researchers, long-term monitoring studies have become frequent. However, long-term monitoring can result in massive datasets, which makes the analysis of the acquired biosignals a challenge. In this case, visualization, which is a key point in signal analysis, presents several limitations, and the handling of annotations, on which some machine learning algorithms depend, turns out to be a complex task. In order to overcome these problems, a novel web-based application for fast and user-friendly biosignal visualization and annotation was developed. This was possible through the study and implementation of a visualization model. The main process of this model, the visualization process, comprised the formulation of the domain problem, the abstraction design, the development of a multilevel visualization, and the study and choice of the visualization techniques that best communicate the information carried by the data. In a second process, the visual encoding variables were the study target. Finally, improved interaction and exploration techniques were implemented, among which the annotation handling stands out. Three case studies are presented and discussed, and a usability study supports the reliability of the implemented work.
Abstract:
A large-scale inventory of trees > 10 cm DBH was conducted in the upland "terra firme" rain forest of the Distrito Agropecuário da SUFRAMA (Manaus Free Zone Authority Agricultural District), approximately 65 km north of the city of Manaus (AM), Brasil. The general appearance and structure of the forest is described together with local topography and soil texture. The preliminary results of the inventory provide a minimum estimate of 698 tree species in 53 families in the 40 km radius sampled, including 17 undescribed species. The most numerically abundant families, Lecythidaceae, Leguminosae, Sapotaceae and Burseraceae, are also among the most species-rich families. One aspect of this diverse assemblage is the proliferation of species within certain genera, including 26 genera in 17 families with 6 or more species or morphospecies. Most species have very low abundances of less than 1 tree per hectare. While more abundant species do exist at densities ranging up to a mean of 12 trees per ha, many have clumped distributions, leading to great variation in local species abundance. The degree of similarity between hectare samples, based on the Coefficient of Community similarity index, varies widely over different sample hectares for five ecologically different families. Soil texture apparently plays a significant role in determining species composition in the different one-hectare plots examined, while results for other variables were less consistent. Greater differences in similarity indices are found for comparisons with a one-hectare sample within the same formation approximately 40 km to the south. It is concluded that homogeneity of tree community composition within this single large and diverse yet continuous upland forest formation cannot be assumed.
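The Coefficient of Community used above is Sørensen's similarity index, CC = 2c / (a + b), where c is the number of species shared by two plots and a, b are their species counts. A minimal sketch (genus names purely illustrative, not data from the inventory):

```python
def coefficient_of_community(species_a, species_b):
    """Sorensen's Coefficient of Community between two plots:
    CC = 2c / (a + b), with c the number of shared species."""
    a, b = set(species_a), set(species_b)
    shared = len(a & b)
    return 2.0 * shared / (len(a) + len(b))

# Two hypothetical one-hectare plots sharing two of their genera.
cc = coefficient_of_community(["Eschweilera", "Protium", "Pouteria"],
                              ["Eschweilera", "Pouteria", "Inga", "Licania"])
```

The index runs from 0 (no species in common) to 1 (identical composition), which is why low values between nearby hectares argue against assuming a homogeneous tree community.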
Abstract:
The information necessary to distinguish a local inhomogeneous mass density field from its spatial average on a compact domain of the universe can be measured by relative information entropy. The Kullback-Leibler (KL) formula arises very naturally in this context; however, it provides a very complicated way to compute the mutual information between spatially separated but causally connected regions of the universe in a realistic, inhomogeneous model. To circumvent this issue, by considering a parametric extension of the KL measure, we develop a simple model to describe the mutual information which is entangled via the gravitational field equations. We show that the Tsallis relative entropy can be a good approximation in the case of small inhomogeneities, and for measuring the independent relative information inside the domain we propose the Rényi relative entropy formula.
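For discrete distributions the three measures compared above take simple closed forms, and both parametric families recover the KL divergence in the limit of their parameter going to 1. A minimal sketch with toy discrete distributions (the paper itself works with continuous density fields):

```python
import math

def kl(p, q):
    """Kullback-Leibler relative entropy D(p||q) for discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def renyi(p, q, alpha):
    """Renyi relative entropy D_alpha(p||q); tends to KL as alpha -> 1."""
    s = sum(pi**alpha * qi**(1.0 - alpha) for pi, qi in zip(p, q) if pi > 0)
    return math.log(s) / (alpha - 1.0)

def tsallis(p, q, qpar):
    """Tsallis relative entropy; also reduces to KL as qpar -> 1."""
    s = sum(pi**qpar * qi**(1.0 - qpar) for pi, qi in zip(p, q) if pi > 0)
    return (s - 1.0) / (qpar - 1.0)

# Toy distributions standing in for a local density field and its average.
p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
```

For small inhomogeneities (p close to q) all three measures are small and mutually close, which is the regime in which the paper argues the Tsallis form is a good approximation.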