182 results for Dark objects method
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
Aims. In this work, we describe the pipeline for the fast supervised classification of light curves observed by the CoRoT exoplanet CCDs. We present the classification results obtained for the first four measured fields, which represent one year of in-orbit operation. Methods. The basis of the adopted supervised classification methodology has been described in detail in a previous paper, as has its application to the OGLE database. Here, we present the modifications of the algorithms and of the training set to optimize the performance when applied to the CoRoT data. Results. Classification results are presented for the observed fields IRa01, SRc01, LRc01, and LRa01 of the CoRoT mission. Statistics on the number of variables and the number of objects per class are given, and typical light curves of high-probability candidates are shown. We also report on new stellar variability types discovered in the CoRoT data. The full classification results are publicly available.
Abstract:
The abundance and distribution of collapsed objects such as galaxy clusters will become an important tool to investigate the nature of dark energy and dark matter. Number counts of very massive objects are sensitive not only to the equation of state of dark energy, which parametrizes the smooth component of its pressure, but also to the sound speed of dark energy, which determines the amount of pressure in inhomogeneous and collapsed structures. Since the evolution of these structures must be followed well into the nonlinear regime, and a fully relativistic framework for this regime does not exist yet, we compare two approximate schemes: the widely used spherical collapse model and the pseudo-Newtonian approach. We show that both approximation schemes yield identical equations for the density contrast when the pressure perturbation of dark energy is parametrized in terms of an effective sound speed. We also compare these approximate approaches to general relativity in the linearized regime, which lends some support to the approximations.
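The linearized comparison invoked above can be illustrated with the standard equation for the linear growth of the matter density contrast, D'' + (3/a + dlnE/da) D' = (3 Ωm0)/(2 a⁵ E²) D. The sketch below is illustrative only (pressureless matter on a smooth dark-energy background; all parameter values are assumptions, not the paper's):

```python
# Linear growth factor D(a) of the matter density contrast, integrated
# with RK4 from matter-dominated initial conditions D ~ a.
import math

def E(a, om0):
    """Dimensionless Hubble rate for a flat LCDM-like background."""
    return math.sqrt(om0 / a**3 + (1.0 - om0))

def dlnE_da(a, om0, h=1e-6):
    """Numerical derivative of ln E(a)."""
    return (math.log(E(a + h, om0)) - math.log(E(a - h, om0))) / (2 * h)

def growth(om0, a0=1e-3, a1=1.0, n=20000):
    """Integrate D'' + (3/a + dlnE/da) D' - 1.5*om0/(a^5 E^2) D = 0."""
    da = (a1 - a0) / n
    a, D, Dp = a0, a0, 1.0  # growing-mode initial conditions: D = a, D' = 1

    def deriv(a, D, Dp):
        return (Dp,
                -(3.0 / a + dlnE_da(a, om0)) * Dp
                + 1.5 * om0 / (a**5 * E(a, om0)**2) * D)

    for _ in range(n):
        k1 = deriv(a, D, Dp)
        k2 = deriv(a + da/2, D + da/2*k1[0], Dp + da/2*k1[1])
        k3 = deriv(a + da/2, D + da/2*k2[0], Dp + da/2*k2[1])
        k4 = deriv(a + da, D + da*k3[0], Dp + da*k3[1])
        D += da/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        Dp += da/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        a += da
    return D

# Einstein-de Sitter check (om0 = 1): the exact solution is D(a) = a.
D_eds = growth(1.0)
```

In an Einstein-de Sitter universe the integrator should recover D(1) ≈ 1, while a dark-energy background suppresses late-time growth.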
Abstract:
A matrix method is presented for simulating acoustic levitators. A typical acoustic levitator consists of an ultrasonic transducer and a reflector. The matrix method is used to determine the potential for acoustic radiation force that acts on a small sphere in the standing wave field produced by the levitator. The method is based on the Rayleigh integral and it takes into account the multiple reflections that occur between the transducer and the reflector. The potential for acoustic radiation force obtained by the matrix method is validated by comparing the matrix method results with those obtained by the finite element method when using an axisymmetric model of a single-axis acoustic levitator. After validation, the method is applied in the simulation of a noncontact manipulation system consisting of two 37.9-kHz Langevin-type transducers and a plane reflector. The manipulation system allows control of the horizontal position of a small levitated sphere from -6 mm to 6 mm, which is done by changing the phase difference between the two transducers. The horizontal position of the sphere predicted by the matrix method agrees with the horizontal positions measured experimentally with a charge-coupled device camera. The main advantage of the matrix method is that it allows simulation of non-symmetric acoustic levitators without requiring much computational effort.
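The Rayleigh integral at the heart of the matrix method can be sketched numerically. Below, a discretized version of the integral for the on-axis pressure of a baffled circular piston is checked against the known closed-form solution; the 37.9 kHz drive comes from the abstract, but the piston geometry and medium values are illustrative assumptions:

```python
# On-axis pressure of a circular piston via a discretized Rayleigh
# integral, compared with the closed-form on-axis result
# |p| = 2*rho*c*u*|sin(k*(sqrt(z^2+a^2)-z)/2)|.
import numpy as np

rho, c, u = 1.2, 343.0, 1.0   # air density (kg/m^3), sound speed (m/s), piston velocity (m/s)
f = 37.9e3                    # drive frequency from the abstract (Hz)
k = 2 * np.pi * f / c         # wavenumber
a, z = 0.01, 0.05             # piston radius and axial distance (m), illustrative

# On the axis the Rayleigh integral reduces to a radial integral:
# p = i*rho*c*k*u * \int_0^a exp(i*k*R)/R * r dr,  with R = sqrt(z^2 + r^2)
n = 5000
r = np.linspace(0.0, a, n + 1)
R = np.sqrt(z**2 + r**2)
integrand = np.exp(1j * k * R) / R * r
integral = np.sum((integrand[1:] + integrand[:-1]) / 2) * (a / n)  # trapezoid rule
p_num = abs(1j * rho * c * k * u * integral)

p_exact = 2 * rho * c * u * abs(np.sin(k * (np.sqrt(z**2 + a**2) - z) / 2))
```

The same point-source summation, applied to an arbitrary mesh of transducer and reflector elements, is what makes the matrix formulation handle non-symmetric geometries.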
Abstract:
We studied superclusters of galaxies in a volume-limited sample extracted from the Sloan Digital Sky Survey Data Release 7 and from mock catalogues based on a semi-analytical model of galaxy evolution in the Millennium Simulation. A density field method was applied to a sample of galaxies brighter than M_r = -21 + 5 log h_100 to identify superclusters, taking into account selection and boundary effects. In order to evaluate the influence of the threshold density, we chose two thresholds: the first maximizes the number of objects (D1) and the second constrains the maximum supercluster size to ~120 h^-1 Mpc (D2). We performed a morphological analysis, using Minkowski Functionals, based on a morphological parameter that increases monotonically from filaments to pancakes. An anticorrelation was found between supercluster richness (and total luminosity or size) and the morphological parameter, indicating that filamentary structures tend to be richer, larger and more luminous than pancakes in both observed and mock catalogues. We also used the mock samples to compare supercluster morphologies identified in position and velocity spaces, concluding that our morphological classification is not biased by peculiar velocities. Monte Carlo simulations designed to investigate the reliability of our results with respect to random fluctuations show that these results are robust. Our analysis indicates that filaments and pancakes present different luminosity and size distributions.
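Morphological parameters of this kind are typically built from the Minkowski functionals via the Sahni-Sathyaprakash shapefinders; the exact parameter used in the paper may differ, so the sketch below is only the standard construction:

```python
# Shapefinders (thickness T, breadth B, length L) and the derived
# planarity/filamentarity from the Minkowski functionals of a body:
# volume V, surface area S, and integrated mean curvature C.
import math

def shapefinders(V, S, C):
    T = 3.0 * V / S           # thickness
    B = S / C                 # breadth
    L = C / (4.0 * math.pi)   # length
    P = (B - T) / (B + T)     # planarity: ~1 for pancakes
    F = (L - B) / (L + B)     # filamentarity: ~1 for filaments
    return T, B, L, P, F

# Sanity check with a sphere of radius R: V = 4/3 pi R^3, S = 4 pi R^2,
# C = 4 pi R, so T = B = L = R and planarity = filamentarity = 0.
R = 2.0
T, B, L, P, F = shapefinders(4/3*math.pi*R**3, 4*math.pi*R**2, 4*math.pi*R)
```

A parameter that grows from filaments (F large) to pancakes (P large) can then be formed from P and F, which is the monotonic behaviour the abstract describes.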
Abstract:
Cosmic shear requires high precision measurement of galaxy shapes in the presence of the observational point spread function (PSF) that smears out the image. The PSF must therefore be known for each galaxy to a high accuracy. However, for several reasons, the PSF is usually wavelength dependent; therefore, the differences between the spectral energy distribution of the observed objects introduce further complexity. In this paper, we investigate the effect of the wavelength dependence of the PSF, focusing on instruments in which the PSF size is dominated by the diffraction limit of the telescope and which use broad-band filters for shape measurement. We first calculate biases on cosmological parameter estimation from cosmic shear when the stellar PSF is used uncorrected. Using realistic galaxy and star spectral energy distributions and populations and a simple three-component circular PSF, we find that the colour dependence must be taken into account for the next generation of telescopes. We then consider two different methods for removing the effect: (i) the use of stars of the same colour as the galaxies and (ii) estimation of the galaxy spectral energy distribution using multiple colours and using a telescope model for the PSF. We find that both of these methods correct the effect to levels below the tolerances required for per cent level measurements of dark energy parameters. Comparison of the two methods favours the template-fitting method because its efficiency is less dependent on galaxy redshift than the broad-band colour method and takes full advantage of deeper photometry.
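For a diffraction-limited instrument the PSF size scales linearly with wavelength, so the effective PSF of a broad-band observation is an SED-weighted average over the filter. The toy sketch below (top-hat throughput, made-up power-law SEDs, not the paper's three-component PSF model) shows why red and blue objects see different PSFs:

```python
# SED-weighted effective diffraction-limited PSF size in a broad band.
import numpy as np

def effective_psf_fwhm(lam, sed, throughput, D=1.2):
    """Effective FWHM (radians) using FWHM(lambda) ~ 1.03*lambda/D for a
    circular aperture of diameter D; lam in metres."""
    w = sed * throughput
    lam_eff = np.sum(w * lam) / np.sum(w)
    return 1.03 * lam_eff / D, lam_eff

lam = np.linspace(550e-9, 900e-9, 500)   # broad-band filter (illustrative)
thr = np.ones_like(lam)                  # top-hat throughput (assumption)
blue = lam**-2.0                         # blue, star-like SED (illustrative)
red = lam**2.0                           # red, galaxy-like SED (illustrative)

fwhm_b, leff_b = effective_psf_fwhm(lam, blue, thr)
fwhm_r, leff_r = effective_psf_fwhm(lam, red, thr)
# The redder object has the larger effective PSF in the same band, so a
# stellar PSF applied uncorrected to galaxies of different colour is biased.
```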
Abstract:
In this paper, we construct a dynamic portrait of the inner asteroid belt. We use information about the distribution of test particles, which were initially placed on a perfectly rectangular grid of initial conditions, after 4.2 Myr of gravitational interactions with the Sun and five planets, from Mars to Neptune. Using the spectral analysis method introduced by Michtchenko et al., the asteroidal behaviour is illustrated in detail on dynamical, averaged and frequency maps. On the averaged and frequency maps, we superpose information on the proper elements and proper frequencies of real objects, extracted from the AstDyS database constructed by Milani and Knezevic. A comparison of the maps with the distribution of real objects allows us to detect possible dynamical mechanisms acting in the domain under study; these mechanisms are related to mean-motion and secular resonances. We note that the two- and three-body mean-motion resonances and the secular resonances (strong linear and weaker non-linear) play an important role in the diffusive transportation of the objects. Their long-lasting action, overlaid with the Yarkovsky effect, may explain many observed features of the density, size and taxonomic distributions of the asteroids.
Abstract:
The most significant radiation field nonuniformity is the well-known Heel effect. This nonuniform beam effect has a negative influence on the results of computer-aided diagnosis of mammograms, which is frequently used for early cancer detection. This paper presents a method to correct all pixels in the mammography image according to the excess or lack of radiation to which they have been exposed as a result of this effect. The current simulation method calculates the intensities at all points of the image plane. In the simulated image, the percentage of radiation received by each point takes the center of the field as reference. In the digitized mammography, the percentages of the optical density of all the pixels of the analyzed image are also calculated. The Heel effect causes a Gaussian distribution around the anode-cathode axis and a logarithmic distribution parallel to this axis. These characteristic distributions are used to determine the center of the radiation field as well as the cathode-anode axis, allowing for the automatic determination of the correlation between these two sets of data. The measurements obtained with our proposed method differ from those of commercial equipment by 2.49 mm on average in the direction perpendicular to the anode-cathode axis and by 2.02 mm parallel to it. The method eliminates around 94% of the Heel effect in the radiological image, so that the objects reflect their x-ray absorption. The method was evaluated with experimental data taken from known objects, but it could also be applied to clinical and digital images.
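The correction described above amounts to simulating the relative beam intensity at every pixel and rescaling the image to the value at the field center. A schematic sketch (Gaussian profile across the anode-cathode axis, logarithmic along it; all coefficients are invented, not the paper's fitted values):

```python
# Schematic Heel-effect correction: divide the image by a simulated
# relative-intensity map normalized to 1 at the field center.
import numpy as np

def heel_profile(ny, nx, sigma=0.6, b=0.25):
    """Relative intensity map; y runs along the anode-cathode axis
    (0 = anode side), x across it. sigma and b are made-up coefficients."""
    y = np.linspace(0.0, 1.0, ny)[:, None]
    x = np.linspace(-1.0, 1.0, nx)[None, :]
    profile = np.exp(-x**2 / (2 * sigma**2)) * (1.0 + b * np.log1p(y))
    return profile / profile[ny // 2, nx // 2]   # normalize at field center

def correct_heel(image, profile):
    """Rescale each pixel by its simulated excess/deficit of radiation."""
    return image / profile

flat = 100.0 * np.ones((64, 64))   # a uniform test object
prof = heel_profile(64, 64)
raw = flat * prof                  # the object as imaged with the Heel effect
fixed = correct_heel(raw, prof)    # correction recovers the uniform object
```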
Abstract:
Models of dynamical dark energy unavoidably possess fluctuations in the energy density and pressure of that new component. In this paper we estimate the impact of dark energy fluctuations on the number of galaxy clusters in the Universe using a generalization of the spherical collapse model and the Press-Schechter formalism. The observations we consider are several hypothetical Sunyaev-Zel'dovich and weak lensing (shear maps) cluster surveys, with limiting masses similar to ongoing (SPT, DES) as well as future (LSST, Euclid) surveys. Our statistical analysis is performed in a 7-dimensional cosmological parameter space using the Fisher matrix method. We find that, in some scenarios, the impact of these fluctuations is large enough that their effect could already be detected by existing instruments such as the South Pole Telescope, when priors from other standard cosmological probes are included. We also show how dark energy fluctuations can be a nuisance for constraining cosmological parameters with cluster counts, and point to a degeneracy between the parameter that describes dark energy pressure on small scales (the effective sound speed) and the parameters describing its equation of state.
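A Fisher forecast for cluster counts treats the counts in each bin as Poisson-distributed, giving F_ij = Σ_bins (∂N/∂θ_i)(∂N/∂θ_j)/N. A toy two-parameter sketch (an exponential count model invented for illustration, nothing like the paper's 7-dimensional parameter space):

```python
# Toy Poisson Fisher forecast for binned cluster counts.
import numpy as np

def counts(A, alpha, z):
    """Toy cluster-count model per redshift bin (illustrative)."""
    return A * np.exp(-alpha * z)

def fisher(A, alpha, z, eps=1e-6):
    """F_ij = sum_bins dN/dtheta_i * dN/dtheta_j / N, derivatives by
    central finite differences."""
    theta = np.array([A, alpha])
    N = counts(A, alpha, z)
    derivs = []
    for i in range(2):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += eps
        tm[i] -= eps
        derivs.append((counts(*tp, z) - counts(*tm, z)) / (2 * eps))
    return np.array([[np.sum(derivs[i] * derivs[j] / N) for j in range(2)]
                     for i in range(2)])

z = np.linspace(0.1, 1.0, 10)        # redshift bin centres (illustrative)
F = fisher(1000.0, 2.0, z)
cov = np.linalg.inv(F)
sigma_marg = np.sqrt(np.diag(cov))   # marginalized 1-sigma errors
sigma_cond = 1.0 / np.sqrt(np.diag(F))  # errors with the other parameter fixed
```

The gap between `sigma_cond` and `sigma_marg` is precisely the signature of a parameter degeneracy like the sound-speed/equation-of-state one noted in the abstract.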
Abstract:
The count intercept is a robust method for the numerical analysis of fabrics (Launeau and Robin, 1996). It counts the number of intersections between a set of parallel scan lines and a mineral phase, which must be identified on a digital image. However, the method is only sensitive to boundaries and therefore supposes that the user has some knowledge about their significance. The aim of this paper is to show that a proper grey-level detection of boundaries along scan lines is sufficient to calculate the two-dimensional anisotropy of grain or crystal distributions without any particular image processing. Populations of grains and crystals usually display elliptical anisotropies in rocks. When confirmed by the intercept analysis, a combination of at least 3 mean-length intercept roses, taken on 3 roughly perpendicular sections, allows the calculation of 3-dimensional ellipsoids and the determination of their standard deviation in direction and intensity in 3 dimensions as well. The feasibility of this quick method is demonstrated by numerous examples: on theoretical objects deformed by active and passive deformation, on BSE images of synthetic magma flow, on drawings and direct analyses of thin-section pictures of sandstones, and on digital images of granites taken and measured directly in the field.
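The core of the count-intercept idea, counting boundary crossings along parallel scan lines, can be sketched on a binary image; taking the counts at several directions then builds the anisotropy rose (only the horizontal and vertical directions are shown here):

```python
# Boundary-crossing counts of a binary phase image along scan lines.
import numpy as np

def intercepts(phase, axis):
    """Total number of phase-boundary crossings along scan lines.
    axis=1 scans along rows, axis=0 scans along columns."""
    d = np.abs(np.diff(phase.astype(int), axis=axis))
    return int(d.sum())

# A strongly anisotropic "fabric": horizontal stripes of the mineral phase.
img = np.zeros((20, 20), dtype=bool)
img[::4, :] = True

n_rows = intercepts(img, axis=1)   # scanning parallel to the stripes
n_cols = intercepts(img, axis=0)   # scanning across the stripes
anisotropy = n_cols / max(n_rows, 1)
# Many more crossings across the stripes than along them, as expected.
```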
Abstract:
Despite the advances in bonding materials, many clinicians today still prefer to place bands on molar teeth. Molar bonding procedures need improvement to be widely accepted clinically. OBJECTIVE: The purpose of this study was to evaluate the shear bond strength when an additional adhesive layer was applied on the occlusal tooth/tube interface to provide reinforcement to molar tubes. MATERIAL AND METHODS: Sixty third molars were selected and allocated to 3 groups: group 1 received a conventional direct bond followed by the application of an additional layer of adhesive on the occlusal tooth/tube interface, group 2 received a conventional direct bond, and group 3 received a conventional direct bond and an additional cure time of 10 s. The specimens were debonded in a universal testing machine. The results were analyzed statistically by ANOVA and Tukey's test (α=0.05). RESULTS: Group 1 had a significantly higher (p<0.05) shear bond strength than groups 2 and 3. No difference was detected between groups 2 and 3 (p>0.05). CONCLUSIONS: The present in vitro findings indicate that the application of an additional layer of adhesive on the tooth/tube interface increased the shear bond strength of the bonded molar tubes.
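The statistical test used here, a one-way ANOVA across the three groups, fits in a few lines; the bond-strength values below are invented for illustration, not the study's measurements:

```python
# One-way ANOVA F statistic, pure Python.
def one_way_anova(groups):
    """Return (F, df_between, df_within) for a list of samples."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_b, df_w = k - 1, n - k
    F = (ss_between / df_b) / (ss_within / df_w)
    return F, df_b, df_w

# Hypothetical shear-bond-strength values (MPa) for three groups:
g1, g2, g3 = [1.0, 2.0, 3.0], [2.0, 3.0, 4.0], [3.0, 4.0, 5.0]
F, df_b, df_w = one_way_anova([g1, g2, g3])
```

The F value would then be compared against the F(df_b, df_w) critical value at α=0.05, with Tukey's test applied post hoc to locate pairwise differences.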
Abstract:
The aims of this study were to demonstrate the synthesis of an experimental glass ionomer cement (GIC) by the non-hydrolytic sol-gel method and to evaluate its biocompatibility in comparison to a conventional glass ionomer cement (Vidrion R). Four polyethylene tubes containing the tested cements were implanted in the dorsal region of 15 rats, as follows: GI - experimental GIC and GII - conventional GIC. The external tube walls were considered the control group (CG). The rats were sacrificed 7, 21 and 42 days after implant placement for histopathological analysis. A four-point (I-IV) scoring system was used to grade the inflammatory reaction. Regarding the synthesis of the experimental GIC, thermogravimetric and x-ray diffraction analyses demonstrated vitreous material formation at 110 °C by the sol-gel method. In the biocompatibility test, the results showed a moderate chronic inflammatory reaction for GI (III), severe for GII (IV) and mild for CG (II) at 7 days. After 21 days, GI presented a mild reaction (II); GII, moderate (III); and CG, mild (II). At 42 days, GI showed a mild/absent inflammatory reaction (II to I), similar to GII (II to I). CG presented absence of chronic inflammatory reaction (I). It was concluded that the experimental GIC presented a mild/absent tissue reaction after 42 days, being biocompatible when tested in the connective tissue of rats.
Abstract:
This article describes and discusses a method to determine the root curvature radius by using cone-beam computed tomography (CBCT). The severity of root canal curvature is essential for selecting the instrument and instrumentation technique. The diagnosis and planning of root canal treatment have traditionally been based on periapical radiography. However, the higher accuracy of CBCT images in identifying anatomic and pathologic alterations, compared to panoramic and periapical radiographs, has been shown to reduce the incidence of false-negative results. In high-resolution images, the root curvature radius can be measured via the circumcenter. Based on 3 mathematical points determined with the working tools of the Planimp® software, it is possible to calculate the root curvature radius in both apical and coronal directions. The CBCT-aided method for determination of the root curvature radius presented in this article is easy to perform and reproducible, and it allows a more reliable and predictable endodontic planning, which reflects directly in a more efficacious preparation of curved root canals.
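The circumcenter calculation behind the method reduces to elementary geometry: three points marked on the canal image determine a unique circle, whose radius is the curvature radius. A minimal sketch (the coordinates are illustrative, not Planimp® output):

```python
# Radius of the circle through three points, R = abc / (4 * Area).
import math

def curvature_radius(p1, p2, p3):
    """Circumradius of the triangle p1-p2-p3 (2D points)."""
    a = math.dist(p2, p3)
    b = math.dist(p1, p3)
    c = math.dist(p1, p2)
    # Twice the triangle area, via the cross product of two edges:
    area2 = abs((p2[0] - p1[0]) * (p3[1] - p1[1])
                - (p3[0] - p1[0]) * (p2[1] - p1[1]))
    if area2 == 0:
        raise ValueError("collinear points: straight canal, infinite radius")
    return a * b * c / (2 * area2)   # abc / (4 * Area)

# Three points on a circle of radius 5 centred at the origin:
R = curvature_radius((5.0, 0.0), (0.0, 5.0), (-5.0, 0.0))
```

A small radius flags a severely curved canal, which is exactly the quantity that drives instrument selection.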
Abstract:
OBJECTIVE: To assess microleakage in conservative class V cavities prepared with aluminum-oxide air abrasion or a turbine and restored with self-etching or etch-and-rinse adhesive systems. MATERIAL AND METHODS: Forty premolars were randomly assigned to 4 groups (I and II: air abrasion; III and IV: turbine) and class V cavities were prepared on the buccal surfaces. The conditioning approaches were: groups I/III - 37% phosphoric acid; groups II/IV - self-priming etchant (Tyrian-SPE). Cavities were restored with One Step Plus/Filtek Z250. After finishing, the specimens were thermocycled, immersed in 50% silver nitrate, and serially sectioned. Microleakage at the occlusal and cervical interfaces was measured in mm and calculated with software. Data were subjected to ANOVA and Tukey's test (α=0.05). RESULTS: The marginal seal provided by air abrasion was similar to that of the high-speed handpiece, except for group I. There was a significant difference between enamel and dentin/cementum margins for groups I and II (air abrasion). The etch-and-rinse adhesive system promoted a better marginal seal. At enamel and dentin/cementum margins, the highest microleakage values were found in cavities treated with the self-etching adhesive system. At dentin/cementum margins, high-speed handpiece preparations associated with the etch-and-rinse system provided the least dye penetration. CONCLUSION: The marginal seal of cavities prepared with aluminum-oxide air abrasion was different from that of conventionally prepared cavities, and the etch-and-rinse system promoted a better marginal seal at both enamel and dentin margins.
Abstract:
The aim of this study was to assess the Knoop hardness of three high-viscosity glass ionomer cements: G1 - Ketac Molar; G2 - Ketac Molar Easymix (3M ESPE); and G3 - Magic Glass ART (Vigodent). As a parallel goal, three different methods for insertion of Ketac Molar Easymix were tested: G4 - conventional spatula; G5 - commercial syringe (Centrix); and G6 - low-cost syringe. Ten specimens of each group were prepared and the Knoop hardness was determined 5 times on each specimen with an HM-124 hardness machine (25 g/30 s dwell time) after 24 h, 1 week and 2 weeks. During the entire test period, the specimens were stored in liquid paraffin at 37 °C. Significant differences were found between G3 and G1/G2 (two-way ANOVA and Tukey's post hoc test; p<0.01). There was no significant difference among the insertion methods. The glass ionomer cement Magic Glass ART showed the lowest hardness, while the insertion technique had no significant influence on hardness.
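Knoop numbers are computed from the applied load and the long diagonal of the indentation, HK = 14.229 F/d² with F in kgf and d in mm. A small sketch using the study's 25 gf load (the diagonal length is an invented example value):

```python
# Knoop hardness from indentation load and long diagonal.
def knoop_hardness(load_kgf, diagonal_mm):
    """HK = 14.229 * F / d^2, with F in kgf and d (long diagonal) in mm."""
    return 14.229 * load_kgf / diagonal_mm ** 2

# 25 gf = 0.025 kgf load (as in the study), hypothetical 80 um diagonal:
hk = knoop_hardness(0.025, 0.080)
```

A harder cement leaves a shorter diagonal under the same load, hence a larger HK.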
Abstract:
A simple and low-cost method to determine volatile contaminants in post-consumer recycled PET flakes was developed and validated using Headspace Dynamic Concentration and Gas Chromatography-Flame Ionization Detection (HDC-GC-FID). The analytical parameters evaluated by using surrogates include: correlation coefficient, detection limit, quantification limit, accuracy, intra-assay precision, and inter-assay precision. In order to compare the efficiency of the proposed method with that of recognized automated techniques, post-consumer PET packaging samples collected in Brazil were used. GC-MS was used to confirm the identity of the substances detected in the PET packaging. Some of the identified contaminants were present in the post-consumer material at estimated concentrations higher than 220 ng g-1. The findings of this work corroborate data available in the scientific literature, pointing to the suitability of the proposed analytical method.
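Detection and quantification limits of this kind are commonly estimated from the calibration line as LOD = 3.3 s/m and LOQ = 10 s/m, with m the slope and s the residual standard deviation (the ICH-style approach; the calibration data below are invented):

```python
# LOD/LOQ from a least-squares calibration line.
def calibration_limits(conc, signal):
    """Return (slope, LOD, LOQ) with LOD = 3.3*s/m and LOQ = 10*s/m,
    where s is the residual standard deviation of the fit."""
    n = len(conc)
    mx = sum(conc) / n
    my = sum(signal) / n
    sxx = sum((x - mx) ** 2 for x in conc)
    slope = sum((x - mx) * (y - my) for x, y in zip(conc, signal)) / sxx
    intercept = my - slope * mx
    resid = [y - (slope * x + intercept) for x, y in zip(conc, signal)]
    s = (sum(r * r for r in resid) / (n - 2)) ** 0.5
    return slope, 3.3 * s / slope, 10 * s / slope

# Hypothetical surrogate calibration: concentration (ng/g) vs. FID response.
conc = [50, 100, 200, 400, 800]
sig = [102, 201, 405, 798, 1601]
slope, lod, loq = calibration_limits(conc, sig)
```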