14 results for "Target field method"
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
We studied superclusters of galaxies in a volume-limited sample extracted from the Sloan Digital Sky Survey Data Release 7 and from mock catalogues based on a semi-analytical model of galaxy evolution in the Millennium Simulation. A density field method was applied to a sample of galaxies brighter than $M_r = -21 + 5\log h_{100}$ to identify superclusters, taking into account selection and boundary effects. To evaluate the influence of the threshold density, we chose two thresholds: the first maximizes the number of objects (D1) and the second constrains the maximum supercluster size to $\sim 120\,h^{-1}$ Mpc (D2). We performed a morphological analysis, using Minkowski functionals, based on a parameter that increases monotonically from filaments to pancakes. An anticorrelation was found between supercluster richness (and total luminosity or size) and the morphological parameter, indicating that filamentary structures tend to be richer, larger and more luminous than pancakes in both the observed and mock catalogues. We also used the mock samples to compare supercluster morphologies identified in position and velocity space, concluding that our morphological classification is not biased by peculiar velocities. Monte Carlo simulations designed to investigate the reliability of our results with respect to random fluctuations show that these results are robust. Our analysis indicates that filaments and pancakes have different luminosity and size distributions.
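For reference, supercluster morphology of this kind is usually quantified through the shapefinders built from the four Minkowski functionals; the abstract does not spell out its exact parameter, so the following standard construction (Sahni, Sathyaprakash and Shandarin 1998) is illustrative only. With $V$ the enclosed volume, $S$ the surface area and $C$ the integrated mean curvature of an isodensity surface, one defines the thickness, breadth and length

$$ T = \frac{3V}{S}, \qquad B = \frac{S}{C}, \qquad L = \frac{C}{4\pi}, $$

and from these the planarity and filamentarity

$$ P = \frac{B - T}{B + T}, \qquad F = \frac{L - B}{L + B}, $$

which separate pancake-like structures (large $P$) from filamentary ones (large $F$); a single parameter running monotonically between the two regimes can then be built from a ratio or difference of $P$ and $F$.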
Abstract:
A finite difference technique, based on a projection method, is developed for solving the dynamic three-dimensional Ericksen-Leslie equations for nematic liquid crystals subject to a strong magnetic field. The governing equations are derived using primitive variables and are solved using the ideas behind the GENSMAC methodology (Tome and McKee [32]; Tome et al. [34]). The resulting numerical technique is validated by comparing the numerical solution against an analytic solution for steady three-dimensional flow between two parallel plates subject to a strong magnetic field. The validated code is then employed to solve channel flow, for which there is no analytic solution.
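As background, a projection method of the kind referenced above splits each time step into a provisional momentum update followed by a pressure Poisson solve that restores incompressibility. Below is a minimal 2D sketch of the projection step alone, assuming a periodic grid and a simple Jacobi solver; it illustrates the general idea, not the paper's GENSMAC scheme for the full three-dimensional Ericksen-Leslie system.

```python
# Minimal sketch of one Chorin-type projection step on a periodic 2D grid.
# Illustrative only; grid layout, solver and iteration count are assumptions.
import numpy as np

def project(u, v, dx, dt, n_iter=200):
    """Make the float fields (u, v) divergence-free: solve lap(p) = div/dt
    with Jacobi iterations, then subtract the pressure gradient."""
    div = ((np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) +
           (np.roll(v, -1, axis=0) - np.roll(v, 1, axis=0))) / (2 * dx)
    p = np.zeros_like(u)
    for _ in range(n_iter):  # Jacobi relaxation of the Poisson equation
        p = 0.25 * (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
                    np.roll(p, 1, 1) + np.roll(p, -1, 1) - dx**2 * div / dt)
    u = u - dt * (np.roll(p, -1, axis=1) - np.roll(p, 1, axis=1)) / (2 * dx)
    v = v - dt * (np.roll(p, -1, axis=0) - np.roll(p, 1, axis=0)) / (2 * dx)
    return u, v
```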
Abstract:
We discuss the generalized eigenvalue problem for computing energies and matrix elements in lattice gauge theory, including effective theories such as HQET. We analyze how the extracted effective energies and matrix elements converge as the time separations are made large. This suggests a particularly efficient application of the method, for which we can prove that corrections vanish asymptotically as $\exp(-(E_{N+1} - E_n)\,t)$. The gap $E_{N+1} - E_n$ can be made large by increasing the number $N$ of interpolating fields in the correlation matrix. We also show how excited-state matrix elements can be extracted such that contaminations from all other states disappear exponentially in time. As a demonstration, we present numerical results for the extraction of ground-state and excited B-meson masses and decay constants in the static approximation and to order $1/m_b$ in HQET.
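Concretely, the method solves $C(t)\,v_n = \lambda_n(t, t_0)\,C(t_0)\,v_n$ for a matrix $C(t)$ of correlators between the $N$ interpolating fields, and effective energies follow from the logarithmic decay of the eigenvalues. A minimal sketch, assuming symmetric correlator matrices with $C(t_0)$ positive definite (function and variable names are ours):

```python
# Sketch of the generalized eigenvalue method (GEVP) for a correlator
# matrix C(t): solve C(t) v = lambda(t, t0) C(t0) v and form effective
# energies from the eigenvalue decay. Illustrative only.
import numpy as np
from scipy.linalg import eigh

def effective_energies(C, t0, a=1.0):
    """C: array (T, N, N) of correlator matrices; t0: reference time.
    Returns {t: E_eff(t)} with E_eff = log(lambda(t)/lambda(t+1)) / a."""
    T = C.shape[0]
    energies = {}
    for t in range(t0 + 1, T - 1):
        lam_t = np.sort(eigh(C[t], C[t0], eigvals_only=True))[::-1]
        lam_t1 = np.sort(eigh(C[t + 1], C[t0], eigvals_only=True))[::-1]
        energies[t] = np.log(lam_t / lam_t1) / a  # in units of the spacing a
    return energies
```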
Abstract:
The objective of this study was to test a device developed to improve the functionality, accuracy and precision of the original technique for sweating-rate measurements proposed by Schleger and Turner [Schleger AV, Turner HG (1965) Aust J Agric Res 16:92-106]. A device was built for this purpose and tested against the original Schleger and Turner technique. Testing was performed by measuring sweating rates in an experiment involving six Mertolenga heifers subjected to four different thermal levels in a climatic chamber. The device exhibited no functional problems, and the results obtained with it were more consistent than with the Schleger and Turner technique. There was no difference in the reproducibility of the two techniques (same accuracy), but measurements performed with the new device had lower repeatability values, corresponding to lower variability and, consequently, higher precision. With this device there is no need for physical contact between the operator and the animal to keep the filter-paper discs in position. This has important advantages: the animals stay quieter, and several animals can be evaluated simultaneously. This is a major advantage because it allows more measurements to be taken in a given period of time, increasing the precision of the observations and reducing the error associated with temporal gaps (e.g., changes in solar angle during field studies). The new device has higher functional versatility for measurements in large-scale studies (many animals) under field conditions. The results obtained in this study suggest that the technique using the device presented here could represent an advantageous alternative to the original technique described by Schleger and Turner.
Abstract:
Sea surface gradients derived from the Geosat and ERS-1 satellite altimetry geodetic missions were integrated with marine gravity data from the National Geophysical Data Center and Brazilian national surveys. Using the least squares collocation method, models of free-air gravity anomaly and geoid height were calculated for the coast of Brazil with a resolution of 2′ × 2′. The integration of satellite and shipborne data showed better statistical results in regions near the coast than satellite data alone, suggesting an improvement over the state-of-the-art global gravity models. Furthermore, these results were obtained with considerably less input information than was used by those reference models. The least squares collocation produced very little high-frequency noise in the predicted gravity anomalies, which may be considered essential for improving the high-resolution representation of the gravity field in regions of ocean-continent transition.
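For context, the basic least squares collocation estimator predicts the signal at unobserved points as $\hat{s} = C_{st}\,(C_{tt} + D)^{-1}\,t$, where $t$ is the observation vector, $C_{tt}$ and $C_{st}$ are covariances derived from a covariance function of the gravity field, and $D$ is the observation noise covariance. A minimal numpy sketch (names are ours):

```python
# Sketch of the least squares collocation (LSC) prediction step. Building
# C_st and C_tt from a covariance function is problem-specific and omitted.
import numpy as np

def lsc_predict(C_st, C_tt, D, obs):
    """s_hat = C_st (C_tt + D)^{-1} obs.
    C_st: (m, n) signal-observation covariances, C_tt: (n, n) observation
    covariances, D: (n, n) noise covariance, obs: (n,) observations."""
    return C_st @ np.linalg.solve(C_tt + D, obs)
```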
Abstract:
Definition of the long-term variation of the geomagnetic virtual dipole moment requires more reliable paleointensity results. Here, we applied a multisample protocol to the 130.5 Ma Ponta Grossa basaltic dikes (southern Brazil), which carry a very stable dual-polarity magnetic component. The magnetic stability of the samples was checked using thermomagnetic curves and by monitoring the evolution of the magnetic susceptibility through the paleointensity experiments. Twelve sites containing the least alterable samples were chosen for the paleointensity measurements. Although these rocks failed stepwise double-heating experiments, they yielded coherent results in the multisample method for all sites but one. The coherent sites show low to moderate field intensities between 5.7 ± 0.2 and 26.4 ± 0.7 μT (average 13.4 ± 1.9 μT). Virtual dipole moments for these sites range from (1.3 ± 0.04) to (6.0 ± 0.2) × 10²² A m² (average (2.9 ± 0.5) × 10²² A m²). Our results agree with the tendency for low dipole moments during the Early Cretaceous, immediately prior to the Cretaceous Normal Superchron (CNS). The available paleointensity database shows strong variability of the field between 80 and 160 Ma. There seems to be no firm evidence for a Mesozoic Dipole Low, but a long-term tendency does emerge from the data, with the highest dipole moments occurring in the middle of the CNS.
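The abstract does not detail the protocol, but in a widely used multispecimen variant (Dekkers and Böhnel, 2006) each specimen is reheated once in a different laboratory field applied parallel to its NRM, and the fractional remanence change is regressed against the applied field; the zero crossing of the fit estimates the ancient field. A minimal sketch under that assumption:

```python
# Sketch of a multispecimen paleointensity estimate; the protocol details
# (single reheating step, field parallel to NRM) are assumptions based on
# the Dekkers-Boehnel method, not taken from the abstract.
import numpy as np

def multispecimen_paleointensity(h_lab, q):
    """h_lab: applied lab fields (microtesla), one per specimen.
    q: fractional remanence change (m_lab - m_nrm) / m_nrm per specimen.
    Returns the field at which the linear fit crosses q = 0."""
    slope, intercept = np.polyfit(h_lab, q, 1)
    return -intercept / slope
```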
Abstract:
We introduce a flexible technique for interactive exploration of vector field data through classification derived from user-specified feature templates. Our method is founded on the observation that, while similar features within the vector field may be spatially disparate, they share similar neighborhood characteristics. Users generate feature-based visualizations by interactively highlighting well-accepted and domain-specific representative feature points. Feature exploration begins with the computation of attributes that describe the neighborhood of each sample within the input vector field. Compiling these attributes yields a representation of the vector field samples in an attribute space. We project the attribute points onto the canonical 2D plane to enable interactive exploration of the vector field using a painting interface. The projection encodes the similarities between vector field points in the distances computed between their associated attribute points. The proposed method runs at interactive rates for an enhanced user experience and is completely flexible, as showcased by the simultaneous identification of diverse feature types.
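A minimal sketch of that pipeline: describe each sample by simple neighborhood statistics, then project the attribute points onto the plane, here with PCA. The specific attributes and the projection are our assumptions; the paper's feature set is richer.

```python
# Sketch of attribute-space construction and 2D projection for a vector
# field. Attribute choices (mean magnitude, magnitude variance, mean
# direction) and the PCA projection are illustrative assumptions.
import numpy as np

def neighborhood_attributes(field, r=1):
    """field: (H, W, 2) vector field. Returns an (H*W, 3) attribute matrix."""
    mag = np.linalg.norm(field, axis=-1)
    H, W = mag.shape
    attrs = []
    for i in range(H):
        for j in range(W):
            sl = (slice(max(i - r, 0), i + r + 1),
                  slice(max(j - r, 0), j + r + 1))
            patch, vecs = mag[sl], field[sl].reshape(-1, 2)
            attrs.append([patch.mean(), patch.var(),
                          np.arctan2(vecs[:, 1].mean(), vecs[:, 0].mean())])
    return np.asarray(attrs)

def project_2d(attrs):
    """PCA projection of attribute points onto their two principal axes."""
    X = attrs - attrs.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:2].T
```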
Abstract:
The most significant radiation field nonuniformity is the well-known Heel effect. This nonuniform beam effect has a negative influence on the results of computer-aided diagnosis of mammograms, which is frequently used for early cancer detection. This paper presents a method to correct all pixels in the mammography image according to the excess or lack of radiation to which they have been exposed as a result of this effect. The simulation method calculates the intensities at all points of the image plane; in the simulated image, the percentage of radiation received at each point is referenced to the center of the field. In the digitized mammography, the percentages of the optical density of all the pixels of the analyzed image are also calculated. The Heel effect produces a Gaussian distribution around the anode-cathode axis and a logarithmic distribution parallel to this axis. These characteristic distributions are used to determine the center of the radiation field as well as the anode-cathode axis, allowing the correlation between the two data sets to be established automatically. The measurements obtained with our proposed method differ on average by 2.49 mm in the direction perpendicular to the anode-cathode axis and by 2.02 mm parallel to it from those of commercial equipment. The method eliminates around 94% of the Heel effect in the radiological image, so that objects reflect their X-ray absorption. The method was evaluated with experimental data from known objects, but the evaluation could also be performed with clinical and digital images.
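A simplified sketch of this kind of correction, assuming the Gaussian/logarithmic beam model stated above with already-fitted parameters (sigma, k and the field center are hypothetical inputs, not values from the paper):

```python
# Sketch of a Heel-effect flat-field correction: build a model of the beam
# profile (Gaussian around the anode-cathode axis, logarithmic parallel to
# it), normalize it at the field center, and divide it out of the image.
import numpy as np

def heel_correction(img, sigma, k, center):
    """img: 2D image; rows run across the anode-cathode axis, columns along
    it; center: integer pixel indices of the radiation field center."""
    H, W = img.shape
    y = np.arange(H) - center[0]                # offset across the axis
    x = np.arange(W) - center[1]                # offset along the axis
    across = np.exp(-y**2 / (2 * sigma**2))     # Gaussian component
    along = 1.0 + k * np.log1p(np.abs(x))       # logarithmic component
    profile = np.outer(across, along)
    profile /= profile[center[0], center[1]]    # 100% at the field center
    return img / profile
```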
Abstract:
A statistical data analysis methodology was developed to evaluate the field emission properties of many samples of copper oxide nanostructured field emitters. The analysis was largely done in terms of Seppen-Katamuki (SK) charts, field strength and emission current. Physical and mathematical models were derived to describe the effect of small electric field perturbations in the Fowler-Nordheim (F-N) equation, and thereby to explain the trends of the data represented in the SK charts. The field enhancement factor and the emission area parameters proved to be very sensitive to variations in the electric field for most of the samples. We found that the anode-cathode distance is critical in the field emission characterization of samples having a non-rigid nanostructure.
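For reference, the Fowler-Nordheim equation in one common form reads

$$ I = a\,A_{\mathrm{eff}}\,\frac{(\beta E)^{2}}{\phi}\,\exp\!\left(-\frac{b\,\phi^{3/2}}{\beta E}\right), $$

with field enhancement factor $\beta$, work function $\phi$, effective emission area $A_{\mathrm{eff}}$ and constants $a$, $b$. An SK chart plots the intercept against the slope of the resulting $\ln(I/E^{2})$ versus $1/E$ line over many current-voltage sweeps, which is why the extracted $\beta$ and $A_{\mathrm{eff}}$ respond so visibly to small field perturbations.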
Abstract:
In this communication, we report on the formation of calcium hexahydroxodizincate dihydrate, CaZn₂(OH)₆·2H₂O (CZO), powders under microwave-hydrothermal (MH) conditions. These powders were analyzed by X-ray diffraction (XRD), field-emission gun scanning electron microscopy (FEG-SEM), ultraviolet-visible (UV-vis) absorption spectroscopy and photoluminescence (PL) measurements. XRD patterns confirmed that the pure CZO phase was obtained after MH processing at 130 °C for 2 h. FEG-SEM micrographs indicated that the morphological modifications as well as the growth of CZO microparticles are governed by Ostwald-ripening and coalescence mechanisms. UV-vis spectra showed that this material has an indirect optical band gap. The pure CZO powders exhibited a yellow PL emission when excited at 350 nm at room temperature.
Abstract:
Several protease inhibitors have reached the world market in the last fifteen years, dramatically improving the quality of life and life expectancy of millions of HIV-infected patients. In spite of the tremendous research efforts in this area, resistant HIV-1 variants are constantly decreasing the ability of the drugs to efficiently inhibit the enzyme. As a consequence, inhibitors with novel frameworks are necessary to circumvent resistance to chemotherapy. In the present work, we have created 3D QSAR models for a series of 82 HIV-1 protease inhibitors employing the comparative molecular field analysis (CoMFA) method. Significant correlation coefficients were obtained (q² = 0.82 and r² = 0.97), indicating the internal consistency of the best model, which was then used to evaluate an external test set containing 17 compounds. The predicted values were in good agreement with the experimental results, showing the robustness of the model and its substantial predictive power for untested compounds. The final QSAR model and the information gathered from the CoMFA contour maps should be useful for the design of novel anti-HIV agents with improved potency.
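For context, the cross-validated $q^2$ quoted above is conventionally the leave-one-out statistic

$$ q^{2} = 1 - \frac{\sum_i \big(y_i - \hat{y}_i^{\mathrm{LOO}}\big)^{2}}{\sum_i (y_i - \bar{y})^{2}} $$

(one minus PRESS over the total sum of squares), so $q^2 = 0.82$ together with $r^2 = 0.97$ indicates a model that is both internally predictive and well fitted.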
Abstract:
The count intercept is a robust method for the numerical analysis of fabrics (Launeau and Robin, 1996). It counts the number of intersections between a set of parallel scan lines and a mineral phase, which must be identified on a digital image. However, the method is only sensitive to boundaries and therefore supposes that the user has some knowledge of their significance. The aim of this paper is to show that proper grey-level detection of boundaries along scan lines is sufficient to calculate the two-dimensional anisotropy of grain or crystal distributions without any particular image processing. Populations of grains and crystals usually display elliptical anisotropies in rocks. When confirmed by the intercept analysis, a combination of at least 3 mean-length intercept roses, taken on 3 roughly perpendicular sections, allows the calculation of three-dimensional ellipsoids and the determination of the standard deviation of their direction and intensity in 3 dimensions as well. The feasibility of this quick method is demonstrated by numerous examples: theoretical objects deformed by active and passive deformation, BSE images of synthetic magma flow, drawings or direct analyses of thin-section pictures of sandstones, and digital images of granites taken and measured directly in the field.
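A minimal sketch of the counting step on a binary phase image: rotate the image, count boundary crossings along each scan line, and collect the mean counts into a rose of directions. The grey-level boundary detection and the 3D ellipsoid fitting described in the paper are omitted.

```python
# Sketch of the count-intercept rose for a 2D phase mask. Simplified; the
# published method works on grey-level boundaries and fits 3D ellipsoids
# from three roses taken on roughly perpendicular sections.
import numpy as np
from scipy.ndimage import rotate

def intercept_rose(phase, angles_deg):
    """phase: 2D boolean mask of the mineral phase.
    Returns the mean number of boundary crossings per scan line per angle."""
    counts = []
    for a in angles_deg:
        img = rotate(phase.astype(float), a, reshape=True, order=0) > 0.5
        crossings = np.abs(np.diff(img.astype(int), axis=1)).sum(axis=1)
        counts.append(crossings.mean())
    return np.asarray(counts)

# Example: rose = intercept_rose(mask, np.arange(0, 180, 10))
```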
Abstract:
We present an efficient numerical methodology for the 3D computation of incompressible multi-phase flows described by conservative phase-field models. We focus here on the case of density-matched fluids with different viscosities (Model H). The numerical method employs adaptive mesh refinement (AMR) in concert with an efficient semi-implicit time discretization strategy and a linear, multi-level multigrid solver to relax high-order stability constraints and to capture the flow's disparate scales at optimal cost. Only five linear solves are needed per time step. Moreover, the entire adaptive methodology is constructed from scratch to allow a systematic investigation of the key aspects of AMR in a conservative phase-field setting. We validate the method and demonstrate its capabilities and efficacy with important examples of drop deformation, Kelvin-Helmholtz instability, and flow-induced drop coalescence.
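For context, a typical conservative phase-field formulation of Model H transports the order parameter $\phi$ with a Cahn-Hilliard equation in divergence form,

$$ \frac{\partial \phi}{\partial t} + \mathbf{u}\cdot\nabla\phi = \nabla\cdot\big(M\,\nabla\mu\big), \qquad \mu = f'(\phi) - \epsilon^{2}\nabla^{2}\phi, $$

coupled to the incompressible Navier-Stokes equations through a capillary stress. The divergence form is what makes the model conservative, and the fourth-order operator is the source of the severe explicit time-step restrictions that a semi-implicit discretization relaxes.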
Abstract:
A myriad of methods are available for the virtual screening of small organic compound databases. In this study we successfully applied a quantitative model of consensus measurements, combining 3D similarity searches (ROCS and EON), Hologram Quantitative Structure-Activity Relationships (HQSAR) and docking (FRED, FlexX, Glide and AutoDock Vina), to retrieve cruzain inhibitors from assembled databases. All methods were assessed individually and then combined in Ligand-Based Virtual Screening (LBVS) and Target-Based Virtual Screening (TBVS) consensus scoring, using Receiver Operating Characteristic (ROC) curves to evaluate their performance. Three consensus strategies were used: scaled-rank-by-number, rank-by-rank and rank-by-vote. The scaled-rank-by-number strategy proved the most successful, its steep ROC curve indicating higher enrichment power in the early retrieval of active compounds from the database. The ligand-based methods provided a robust and predictive HQSAR model that showed superior discrimination between active and inactive compounds, better than the ROCS and EON procedures. Overall, the integration of fast computational techniques based on ligand and target structures resulted in more efficient retrieval of cruzain inhibitors with desired pharmacological profiles, which may be useful for advancing the discovery of new trypanocidal agents.
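A minimal sketch of the three consensus strategies named above, for a score matrix with one row per compound and one column per method; the vote threshold is our assumption.

```python
# Sketch of scaled-rank-by-number, rank-by-rank and rank-by-vote consensus
# scoring. scores: (n_compounds, n_methods), higher score = better.
import numpy as np

def scaled_rank_by_number(scores):
    """Mean of min-max scaled scores across methods (higher = better)."""
    lo, hi = scores.min(axis=0), scores.max(axis=0)
    return ((scores - lo) / (hi - lo)).mean(axis=1)

def rank_by_rank(scores):
    """Mean rank across methods (lower = better)."""
    ranks = np.argsort(np.argsort(-scores, axis=0), axis=0) + 1
    return ranks.mean(axis=1)

def rank_by_vote(scores, top_frac=0.02):
    """Votes a compound receives for landing in each method's top fraction
    (top_frac is a hypothetical cutoff)."""
    n_top = max(1, int(top_frac * scores.shape[0]))
    ranks = np.argsort(np.argsort(-scores, axis=0), axis=0)
    return (ranks < n_top).sum(axis=1)
```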