965 results for FINAL DATA RELEASE
Abstract:
We present a new catalogue of galaxy triplets derived from the Sloan Digital Sky Survey (SDSS) Data Release 7. The identification of systems was performed considering galaxies brighter than Mr = -20.5 and imposing constraints on the projected distances and radial velocity differences of neighbouring galaxies, as well as on isolation. To improve the identification of triplets, we employed a data pixelization scheme, which allows us to handle large amounts of data, as in the SDSS photometric survey. Using spectroscopic and photometric data in the redshift range 0.01 ≤ z ≤ 0.40, we obtain 5901 triplet candidates. We have used a mock catalogue to analyse the completeness and contamination of our methods. The results show a high level of completeness (≈80 per cent) and low contamination (≈5 per cent). By using photometric and spectroscopic data, we have also addressed the effects of fibre collisions in the spectroscopic sample. We have defined an isolation criterion that considers the distance from the triplet's brightest galaxy to the closest neighbouring cluster, to describe the global environment, as well as the galaxies within a fixed aperture around the triplet's brightest galaxy, to measure the local environment. The final catalogue comprises 1092 isolated triplets of galaxies in the redshift range 0.01 ≤ z ≤ 0.40. Our results show that photometric redshifts provide very useful information, allowing us to complete the sample of nearby systems whose detection is affected by fibre collisions, as well as to extend the detection of triplets to larger distances, where spectroscopic redshifts are not available.
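As a rough illustration of the kind of neighbour-based selection described in this abstract, the minimal Python sketch below flags triplet candidates from a galaxy catalogue. The projected-separation and velocity-difference thresholds, and the precomputed angular-scale input, are illustrative assumptions, not the values or isolation criteria adopted in the paper.

import numpy as np

# Minimal sketch of a neighbour-based triplet search; thresholds are
# illustrative placeholders, not the values adopted in the paper.
MAX_RP_KPC = 200.0   # assumed projected-separation limit [kpc]
MAX_DV_KMS = 700.0   # assumed radial-velocity-difference limit [km/s]
C_KMS = 299792.458   # speed of light [km/s]

def find_triplets(ra_deg, dec_deg, z, kpc_per_arcsec):
    """Return index triples of galaxies with exactly two close neighbours.

    ra_deg, dec_deg, z are 1-D arrays; kpc_per_arcsec gives the projected
    scale at each galaxy's redshift (e.g. from an assumed cosmology).
    """
    triplets = set()
    for i in range(len(z)):
        # small-angle separation on the sky, converted to projected kpc
        dra = (ra_deg - ra_deg[i]) * np.cos(np.radians(dec_deg[i]))
        ddec = dec_deg - dec_deg[i]
        rp = np.hypot(dra, ddec) * 3600.0 * kpc_per_arcsec[i]
        # line-of-sight velocity difference relative to galaxy i
        dv = C_KMS * np.abs(z - z[i]) / (1.0 + z[i])
        neigh = np.flatnonzero((rp < MAX_RP_KPC) & (dv < MAX_DV_KMS))
        neigh = neigh[neigh != i]
        if len(neigh) == 2:   # exactly two qualifying neighbours -> candidate
            triplets.add(tuple(sorted((int(i), int(neigh[0]), int(neigh[1])))))
    return sorted(triplets)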
Abstract:
We present a catalogue of galaxy photometric redshifts and k-corrections for the Sloan Digital Sky Survey Data Release 7 (SDSS-DR7), available on the World Wide Web. The photometric redshifts were estimated with an artificial neural network using the five ugriz bands, concentration indices and Petrosian radii in the g and r bands. We have explored our redshift estimates with different training sets, concluding that the best choice for improving redshift accuracy comprises the main galaxy sample (MGS), the luminous red galaxies and the galaxies of active galactic nuclei covering the redshift range 0 < z < 0.3. For the MGS, the photometric redshift estimates agree with the spectroscopic values within rms = 0.0227. The distribution of photometric redshifts derived in the range 0 < z(phot) < 0.6 agrees well with the model predictions. K-corrections were derived by calibrating the k-correct_v4.2 code results for the MGS with the reference-frame (z = 0.1) (g - r) colours. We adopt a linear dependence of k-corrections on redshift and (g - r) colours that provides suitable distributions of luminosity and colours for galaxies up to redshift z(phot) = 0.6, comparable to the results in the literature. Thus, our k-correction estimation procedure is a powerful, low-computational-time algorithm capable of reproducing suitable results that can be used for testing galaxy properties at intermediate redshifts using the large SDSS data base.
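The "linear dependence of k-corrections on redshift and (g - r) colours" mentioned above can be pictured, as a hedged sketch, as a colour-dependent slope acting on the redshift; the coefficients a and b below are generic calibration constants fitted against the k-correct_v4.2 output, not the values published with the catalogue:

K_r(z, g-r) \approx \left[\, a + b\,(g-r)_{z=0.1} \,\right] z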
Abstract:
Context. The ESO public survey VISTA Variables in the Via Lactea (VVV) started in 2010. VVV targets 562 sq. deg in the Galactic bulge and an adjacent plane region and is expected to run for about five years. Aims. We describe the progress of the survey observations in the first observing season, the observing strategy, and the quality of the data obtained. Methods. The observations are carried out on the 4-m VISTA telescope in the ZYJHKs filters. In addition to the multi-band imaging, the variability monitoring campaign in the Ks filter has started. Data reduction is carried out using the pipeline at the Cambridge Astronomical Survey Unit. The photometric and astrometric calibration is performed via the numerous 2MASS sources observed in each pointing. Results. The first data release contains the aperture photometry and astrometric catalogues for 348 individual pointings in the ZYJHKs filters taken in the 2010 observing season. The typical image quality is ~0.9-1.0 arcsec. The stringent photometric and image quality requirements of the survey are satisfied in 100% of the JHKs images in the disk area and 90% of the JHKs images in the bulge area. The completeness in the Z and Y images is 84% in the disk and 40% in the bulge. The first season catalogues contain 1.28 x 10^8 stellar sources in the bulge and 1.68 x 10^8 in the disk area detected in at least one of the photometric bands. The combined, multi-band catalogues contain more than 1.63 x 10^8 stellar sources. About 10% of these are double detections because of overlapping adjacent pointings. These overlapping multiple detections are used to characterise the quality of the data. The images in the JHKs bands typically extend ~4 mag deeper than 2MASS. The magnitude limit and photometric quality depend strongly on crowding in the inner Galactic regions. The astrometry for Ks = 15-18 mag has an rms of ~35-175 mas. Conclusions. The VVV Survey data products offer a unique dataset to map the stellar populations in the Galactic bulge and the adjacent plane and provide an exciting new tool for the study of the structure, content, and star-formation history of our Galaxy, as well as for investigations of the newly discovered star clusters, star-forming regions in the disk, high proper motion stars, asteroids, planetary nebulae, and other interesting objects.
Abstract:
We analyse a sample of 71 triplets of luminous galaxies derived from the work of O'Mill et al. We compare the properties of triplets and their members with those of control samples of compact groups, the 10 brightest members of rich clusters and galaxies in pairs. The triplets are restricted to have members with spectroscopic redshifts in the range 0.01 ≤ z ≤ 0.14 and absolute r-band luminosities brighter than Mr = −20.5. For these member galaxies, we analyse the stellar mass content, the star formation rates, the Dn(4000) parameter and the (Mg − Mr) colour index. Since galaxies in triplets may finally merge into a single system, we analyse different global properties of these systems. We calculate the probability that the properties of galaxies in triplets are strongly correlated. We also study the total star formation activity and global colours, and define the triplet compactness as a measure of the percentage of the system's total area that is filled by the light of the member galaxies. We concentrate on the comparison of our results with those of compact groups to assess whether triplets are a natural extension of these compact systems. Our analysis suggests that triplet galaxy members behave similarly to compact group members and galaxies in rich clusters. We also find that systems comprising three blue, star-forming, young stellar population galaxies (blue triplets) are most probably real systems and not a chance configuration of interloping galaxies. The same holds for triplets composed of three red, non-star-forming galaxies, showing the correlation of galaxy properties in these systems. From the analysis of the triplet as a whole, we conclude that, at a given total stellar mass content, triplets show a total star formation activity and global colours similar to those of compact groups. However, blue triplets show a high total star formation activity with a lower stellar mass content. From an analysis of the compactness parameter of the systems we find that light is even more concentrated in triplets than in compact groups. We propose that triplets composed of three luminous galaxies should not be considered as analogous to galaxy pairs with a third extra member, but rather as a natural extension of compact groups.
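One hedged way to write down the compactness parameter described above (the percentage of the system's total area filled by the members' light) is the ratio below, with r_i an assumed light-enclosing radius of each member and R_sys the radius of the smallest circle containing the triplet; the specific choice of radii is an assumption, not necessarily the paper's exact definition:

C \approx 100 \times \frac{\sum_{i=1}^{3} \pi r_i^{2}}{\pi R_{\rm sys}^{2}} \ \text{per cent}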
Abstract:
We used the H I data from the LAB Survey to map the ring-shaped gap in H I density that lies slightly outside the solar circle. Adopting R0 = 7.5 kpc, we find an average gap radius of 8.3 kpc and an average gap width of 0.8 kpc. The characteristics of the H I gap correspond closely to the expected ones, as predicted by theory and by numerical simulations of the gas flow near the corotation resonance.
Abstract:
We study the star/galaxy classification efficiency of 13 different decision tree algorithms applied to photometric objects in the Sloan Digital Sky Survey Data Release Seven (SDSS-DR7). Each algorithm is defined by a set of parameters which, when varied, produce different final classification trees. We extensively explore the parameter space of each algorithm, using the set of 884,126 SDSS objects with spectroscopic data as the training set. The efficiency of star-galaxy separation is measured using the completeness function. We find that the Functional Tree (FT) algorithm yields the best results as measured by the mean completeness in two magnitude intervals: 14 ≤ r ≤ 21 (85.2%) and r ≥ 19 (82.1%). We compare the performance of the tree generated with the optimal FT configuration to the classifications provided by the SDSS parametric classifier, 2DPHOT, and Ball et al. We find that our FT classifier is comparable to or better in completeness over the full magnitude range 15 ≤ r ≤ 21, with much lower contamination than all but the Ball et al. classifier. At the faintest magnitudes (r > 19), our classifier is the only one that maintains high completeness (> 80%) while simultaneously achieving low contamination (~2.5%). We also examine the SDSS parametric classifier (psfMag - modelMag) to see if the dividing line between stars and galaxies can be adjusted to improve the classifier. We find that currently stars in close pairs are often misclassified as galaxies, and suggest a new cut to improve the classifier. Finally, we apply our FT classifier to separate stars from galaxies in the full set of 69,545,326 SDSS photometric objects in the magnitude range 14 ≤ r ≤ 21.
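As a hedged sketch of the two approaches discussed in this abstract, the snippet below trains a generic scikit-learn decision tree (standing in for the Functional Tree algorithm, which is not part of scikit-learn) and applies a simple psfMag - modelMag separator; the 0.145 mag threshold and the feature choices are illustrative assumptions, not values taken from the paper.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_star_galaxy_tree(features, spec_label, max_depth=8):
    """Train a generic decision tree on spectroscopically labelled objects.

    features: 2-D array of photometric quantities (e.g. colours,
    psfMag - modelMag, concentration); spec_label: 0 = star, 1 = galaxy.
    A plain CART tree stands in here for the Functional Tree (FT) algorithm.
    """
    tree = DecisionTreeClassifier(max_depth=max_depth, random_state=0)
    tree.fit(features, spec_label)
    return tree

def parametric_star_galaxy(psf_mag, model_mag, cut=0.145):
    """SDSS-style parametric separator: label as 'galaxy' when
    psfMag - modelMag exceeds the cut. The 0.145 mag value is an
    illustrative, commonly quoted choice, not the paper's suggested cut."""
    diff = np.asarray(psf_mag) - np.asarray(model_mag)
    return np.where(diff > cut, "galaxy", "star")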
Abstract:
The European Space Agency's Gaia mission will create the largest and most precise three-dimensional chart of our galaxy (the Milky Way), by providing unprecedented position, parallax, proper motion, and radial velocity measurements for about one billion stars. The resulting catalogue will be made available to the scientific community and will be analyzed in many different ways, including the production of a variety of statistics. The latter will often entail the generation of multidimensional histograms and hypercubes as part of the precomputed statistics for each data release, or for scientific analysis involving either the final data products or the raw data coming from the satellite instruments. In this paper we present and analyze a generic framework that allows the hypercube generation to be easily done within a MapReduce infrastructure, providing all the advantages of the new Big Data analysis paradigm but without dealing with any specific interface to the lower-level distributed system implementation (Hadoop). Furthermore, we show how executing the framework for different data storage model configurations (i.e. row- or column-oriented) and compression techniques can considerably improve the response time of this type of workload for the currently available simulated data of the mission. In addition, we put forward the advantages and shortcomings of deploying the framework on a public cloud provider, benchmark it against other popular solutions available (which are not always the best for such ad hoc applications), and describe some user experiences with the framework, which was employed for a number of dedicated astronomical data analysis techniques workshops.
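The snippet below is a minimal, Hadoop-free sketch of the map/reduce pattern for hypercube (multidimensional histogram) generation described above; the column meanings and bin widths are illustrative assumptions, not part of the Gaia framework's actual interface.

from collections import Counter
from functools import reduce

def to_cell(row, widths=(0.5, 0.1)):
    """Map one catalogue row, e.g. (magnitude, parallax), to a hypercube cell."""
    mag, parallax = row
    return (int(mag // widths[0]), int(parallax // widths[1]))

def map_partition(rows):
    """'Map' phase: count cell occurrences within one data partition."""
    return Counter(to_cell(r) for r in rows)

def reduce_partitions(partials):
    """'Reduce' phase: merge the per-partition counts into the final cube."""
    return reduce(lambda a, b: a + b, partials, Counter())

# Example: two partitions processed independently, then merged.
cube = reduce_partitions([map_partition([(12.3, 0.42), (12.7, 0.48)]),
                          map_partition([(12.4, 0.41)])])
print(cube)   # Counter({(24, 4): 2, (25, 4): 1})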
Abstract:
Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Spatial data representation and compression have become focus issues in computer graphics and image processing applications. Quadtrees, hierarchical data structures based on the principle of recursive decomposition of space, offer a compact and efficient representation of an image. For a given image, the choice of the quadtree root node plays an important role in its quadtree representation and final data compression. The goal of this thesis is to present a heuristic algorithm for finding a root node of a region quadtree that reduces the number of leaf nodes compared with the standard quadtree decomposition. The empirical results indicate that the proposed algorithm improves both the quadtree representation and the data compression compared with the traditional method.
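The minimal Python sketch below illustrates region-quadtree decomposition of a binary image and how shifting the image relative to the quadtree root changes the leaf count; the brute-force offset search is an illustrative placeholder, not the heuristic proposed in the thesis.

import numpy as np

def count_leaves(block):
    """Recursively split a square binary block into quadrants until each
    block is uniform; return the number of leaf nodes of the region quadtree."""
    if block.min() == block.max():
        return 1
    h = block.shape[0] // 2
    return (count_leaves(block[:h, :h]) + count_leaves(block[:h, h:]) +
            count_leaves(block[h:, :h]) + count_leaves(block[h:, h:]))

def best_root_offset(image, max_shift=4):
    """Illustrative brute-force stand-in for a root-choosing heuristic: try
    small shifts of the image inside a power-of-two canvas and keep the
    offset that yields the fewest leaves."""
    size = 1 << int(np.ceil(np.log2(max(image.shape) + max_shift)))
    best = None
    for dy in range(max_shift):
        for dx in range(max_shift):
            canvas = np.zeros((size, size), dtype=image.dtype)
            canvas[dy:dy + image.shape[0], dx:dx + image.shape[1]] = image
            leaves = count_leaves(canvas)
            if best is None or leaves < best[0]:
                best = (leaves, (dy, dx))
    return best   # (leaf count, (row offset, column offset))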
Abstract:
This study focuses on aspects of public administration concerning the implementation of the public policy of complementary blood collection by the itinerant and scheduled PPCCIPS services, carried out either off the local unit or through mobile blood collection units, which are managed by the State Institute of Hematology Arthur de Siqueira Cavalcanti (HEMORIO). The case study method was used in this public health institutional field to better understand the responsibilities and management related to the collection, serology, fractionation, storage and distribution of the blood supply to almost all public hospitals and clinics, in addition to agreements with the unified health system of the State of Rio de Janeiro. Bibliographic references, documentary sources and field data obtained through interviews and systematic observation of HEMORIO public servants in their workplaces were treated with the content analysis method. The results of this research revealed the complexity of those services and the outstanding needs in infrastructure, equipment, logistics and personnel, which are critical for increasing the public collection of blood in the State of Rio de Janeiro, endorsing the suggestions for the implementation of PPCCIPS at HEMORIO. The main finding of this research concerns the immanent ethical commitment of the public service personnel, from senior staff to low-ranking members, perceived through a brief philosophical overview of the personnel's attitudes. An important strategic aspect is the need for excellent media communications and education programs to foster community involvement in the whole process. Final reflections point out that personnel attitude is vital both for the quality of the expected care in the technical activities and for the quality of the final products released to the local (fluminense) public, namely the irreplaceable human blood and its derivatives. Despite the author's effort in this dissertation, there is much more to be studied on this crucial theme.
Abstract:
The debate about the determinants of behaviour has persisted since Antiquity and is usually structured dichotomously. The current trend in understanding behavioural determination points towards interactionism, analysing the genetic, biological and environmental influences on the final product. Several empirical studies have been conducted to identify which factors account for the emission of a specific behaviour. Given the impossibility of studying the determinants of human behaviour in full, a specific pattern was chosen: homosexual behaviour. From antiquity to the present, the determinants of homosexual behaviour have been the subject of debate. Moreover, this is a topic related to a significant number of individuals in the population, and the scientific findings in the area have important social implications. The present work aimed to analyse the existing empirical evidence on the determination of homosexual behaviour through three general stages: (1) the historical evolution of the debate about the determination of behaviour, highlighting the main methodologies employed along this trajectory; (2) the presentation and discussion of the main lines of research on the determination of homosexual behaviour, emphasising a critical analysis of the data obtained; (3) the discussion of the implications of the research presented and of possible empirical directions. A broad bibliographic survey was carried out, with emphasis on empirical works addressing the determinants of homosexual behaviour. Six main lines of research were identified, categorized as: hormonal measures, hormonal effects, genetics, brain functioning, animal models and environmental effects. The methodology and results of each study presented were analysed. From this analysis, it was possible to discuss the political influences on scientific research and the ethical implications of disseminating the results, and to organize the existing data into a proposal for understanding the phenomenon. This work is expected to contribute to a description of the overall panorama of the study of the determinants of homosexual behaviour, as well as to a critical stance towards the methodologies used and the conclusions conveyed.
Volcanic forcing for climate modeling: a new microphysics-based data set covering years 1600–present
Abstract:
As the understanding and representation of the impacts of volcanic eruptions on climate have improved in recent decades, uncertainties in the stratospheric aerosol forcing from large eruptions are now linked not only to visible optical depth estimates on a global scale but also to details of the size, latitude and altitude distributions of the stratospheric aerosols. Based on our understanding of these uncertainties, we propose a new model-based approach to generating a volcanic forcing for general circulation model (GCM) and chemistry–climate model (CCM) simulations. This new volcanic forcing, covering the 1600–present period, uses an aerosol microphysical model to provide a realistic, physically consistent treatment of the stratospheric sulfate aerosols. Twenty-six eruptions were modeled individually using the latest available ice-core aerosol mass estimates and historical data on the latitude and date of the eruptions. The evolution of the aerosol spatial and size distribution after the sulfur dioxide discharge is hence characterized for each volcanic eruption. Large variations are seen in the hemispheric partitioning and size distributions in relation to the location/date of eruptions and the injected SO2 masses. Results for recent eruptions show reasonable agreement with observations. By providing these new estimates of the spatial distributions of shortwave and long-wave radiative perturbations, this volcanic forcing may help to better constrain climate model responses to volcanic eruptions in the 1600–present period. The final data set consists of 3-D values (with constant longitude) of spectrally resolved extinction coefficients, single scattering albedos and asymmetry factors calculated for different wavelength bands upon request. Surface area densities for heterogeneous chemistry are also provided.
Abstract:
A unique macroseismic data set for the strongest earthquakes that have occurred since 1940 in the Vrancea region is constructed through a thorough review of all available sources. Inconsistencies and errors in the reported data and in their use are analyzed as well. The final data set, free from inconsistencies, including those at the political borders, contains 9822 observations for the strong intermediate-depth earthquakes: 1940, Mw=7.7; 1977, Mw=7.4; 1986, Mw=7.1; 1990, May 30, Mw=6.9; 1990, May 31, Mw=6.4; and 2004, Mw=6.0. This data set is available electronically as supplementary data to the present paper. From the discrete macroseismic data the continuous macroseismic field is generated using the methodology developed by Molchan et al. (2002) that, along with the unconventional smoothing method Modified Polynomial Filtering (MPF), uses the Diffused Boundary (DB) method, which visualizes the uncertainty in the isoseismal boundaries. The comparison of DBs with previously published isoseismal maps provides a good criterion for evaluating the reliability of the earlier maps. The produced isoseismals can be used not only for the formal comparison between observed and theoretical isoseismals, but also for the retrieval of source properties and the assessment of local responses (Molchan et al., 2011).