103 results for deduced optical model parameters
Abstract:
Data from 58 strong-lensing events surveyed by the Sloan Lens ACS Survey are used to estimate the projected galaxy mass inside their Einstein radii by two independent methods: stellar dynamics and strong gravitational lensing. We perform a joint analysis of these two estimates within models with up to three degrees of freedom with respect to the lens density profile, stellar velocity anisotropy, and line-of-sight (LOS) external convergence, which incorporates the effect of the large-scale structure on strong lensing. A Bayesian analysis is employed to estimate the model parameters, evaluate their significance, and compare models. We find that the data favor Jaffe's light profile over Hernquist's, but any particular choice between these two does not change the qualitative conclusions with respect to the features of the system that we investigate. The density profile is compatible with an isothermal one, being slightly steeper and having an uncertainty in the logarithmic slope of the order of 5% in models that take into account prior ignorance of the anisotropy and external convergence. We identify a considerable degeneracy between the density profile slope and the anisotropy parameter, which substantially increases the uncertainties in the estimates of these parameters, but we find no evidence in favor of an anisotropic velocity distribution on average for the whole sample. An LOS external convergence following a prior probability distribution given by cosmology has a small effect on the estimation of the lens density profile, but can increase the dispersion of its value by nearly 40%.
Abstract:
Based on our previous work, we investigate here the effects on the wind and magnetospheric structures of weak-lined T Tauri stars due to a misalignment between the rotation axis of the star and its magnetic dipole moment vector. In such a configuration, the system loses the axisymmetry present in the aligned case, requiring a fully three-dimensional (3D) approach. We perform 3D numerical magnetohydrodynamic simulations of stellar winds and study the effects caused by different model parameters, namely the misalignment angle θ_t, the stellar rotation period, the plasma-β, and the heating index. Our simulations take into account the interplay between the wind and the stellar magnetic field during the time evolution. The system reaches a periodic behavior with the same rotational period as the star. We show that the magnetic field lines present an oscillatory pattern. Furthermore, we find that increasing θ_t increases the wind velocity, especially in the case of a strong magnetic field and relatively rapid stellar rotation. Our 3D, time-dependent wind models allow us to study the interaction of a magnetized wind with a magnetized extrasolar planet. Such an interaction gives rise to reconnection, generating electrons that propagate along the planet's magnetic field lines and produce electron cyclotron radiation at radio wavelengths. The power released in the interaction depends on the planet's magnetic field intensity, its orbital radius, and on the local characteristics of the stellar wind. We find that a close-in Jupiter-like planet orbiting at 0.05 AU presents a radio power ~5 orders of magnitude larger than that observed from Jupiter, which suggests that the stellar wind from a young star has the potential to generate strong planetary radio emission that could be detected in the near future with LOFAR. This radio power varies according to the rotation phase of the star. For three selected simulations, we find a variation of the radio power by a factor of 1.3-3.7, depending on θ_t. Moreover, we extend the investigation done in Vidotto et al. and analyze whether winds from misaligned stellar magnetospheres could cause a significant effect on planetary migration. Compared to the aligned case, we show that the timescale τ_w for an appreciable radial motion of the planet is shorter for larger misalignment angles. While for the aligned case τ_w ≃ 100 Myr, for a stellar magnetosphere tilted by θ_t = 30°, τ_w ranges from ~40 to 70 Myr for a planet located at a radius of 0.05 AU. Further reduction of τ_w might occur for even larger misalignment angles and/or different wind parameters.
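As a rough illustration of how such a radio power estimate scales with the planetary field, the orbital distance, and the local wind conditions, the sketch below follows the commonly used pressure-balance / intercepted-power approach; the functions and all numerical values are illustrative assumptions, not the paper's prescription or results.

```python
# Hedged, order-of-magnitude sketch: stellar-wind power intercepted by a
# close-in planet, with the magnetospheric radius from pressure balance.
# All numbers are illustrative assumptions, NOT taken from the paper.
import numpy as np

MU0 = 4e-7 * np.pi          # vacuum permeability [H/m]
R_JUP = 7.15e7              # Jupiter radius [m]

def magnetosphere_radius(B_p, rho_w, v_w, R_p=R_JUP):
    """Pressure balance for a dipole: B_p(R_M)^2 / (2 mu0) = rho_w v_w^2."""
    return R_p * (B_p**2 / (2.0 * MU0 * rho_w * v_w**2))**(1.0 / 6.0)

def intercepted_kinetic_power(rho_w, v_w, R_M):
    """Wind kinetic energy flux through the magnetospheric cross-section."""
    return 0.5 * rho_w * v_w**3 * np.pi * R_M**2

# Illustrative local wind conditions at ~0.05 AU (assumed, not from the paper)
rho_w = 1e-17       # wind density [kg/m^3]
v_w   = 3e5         # wind speed [m/s]
B_p   = 1e-4        # planetary surface dipole field, ~1 G, in tesla

R_M = magnetosphere_radius(B_p, rho_w, v_w)
P_k = intercepted_kinetic_power(rho_w, v_w, R_M)
# Fixed radiometric efficiency (~1e-3, as often assumed in radio Bode-law work)
print(f"R_M ~ {R_M / R_JUP:.1f} R_Jup, radio power ~ {1e-3 * P_k:.2e} W")
```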
Abstract:
Evidence of jet precession in many galactic and extragalactic sources has been reported in the literature. Much of this evidence is based on studies of the kinematics of the jet knots, which depend on the correct identification of the components to determine their respective proper motions and position angles on the plane of the sky. Identification problems related to fitting procedures, as well as observations poorly sampled in time, may affect the follow-up of the components in time and consequently contribute to a misinterpretation of the data. In order to deal with these limitations, we introduce a powerful statistical tool to analyse jet precession: the cross-entropy method for continuous multi-extremal optimization. Based only on the raw data of the jet components (right ascension and declination offsets from the core), the cross-entropy method searches for the precession model parameters that best represent the data. In this work we present a large number of tests to validate this technique, using synthetic precessing jets built from a given set of precession parameters. With the aim of recovering these parameters, we applied the cross-entropy method to our precession model, exhaustively varying the quantities associated with the method. Our results show that even in the most challenging tests, the cross-entropy method was able to find the correct parameters within a 1 per cent level. Even for a non-precessing jet, our optimization method could successfully point out the lack of precession.
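For readers unfamiliar with the technique, the following is a minimal sketch of the cross-entropy method for continuous optimization applied to a generic multi-extremal test function; the paper's actual precession model and cost function are not reproduced here.

```python
# Minimal sketch of the cross-entropy (CE) method for continuous multi-extremal
# optimization. The objective is a generic test function, not the jet
# precession model of the paper.
import numpy as np

def rastrigin(x):
    # Multi-extremal test function; global minimum 0 at x = 0.
    return 10 * x.shape[-1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1)

def cross_entropy_min(cost, dim, n_samples=200, elite_frac=0.1,
                      n_iter=100, smoothing=0.7, seed=0):
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.full(dim, 3.0)   # initial sampling distribution
    n_elite = max(2, int(elite_frac * n_samples))
    for _ in range(n_iter):
        samples = rng.normal(mu, sigma, size=(n_samples, dim))
        elite = samples[np.argsort(cost(samples))[:n_elite]]   # best candidates
        # Smoothed update of the sampling distribution toward the elite set
        mu = smoothing * elite.mean(axis=0) + (1 - smoothing) * mu
        sigma = smoothing * elite.std(axis=0) + (1 - smoothing) * sigma
        if np.all(sigma < 1e-8):
            break
    return mu

best = cross_entropy_min(rastrigin, dim=4)
print("estimated optimum:", best, "cost:", rastrigin(best))
```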
Abstract:
For the first time, we introduce a class of transformed symmetric models that extends the Box-Cox models to more general symmetric models. The new class includes all symmetric continuous distributions with a possible non-linear structure for the mean and enables the fitting of a wide range of models to several data types. The proposed methods offer more flexible alternatives to Box-Cox and other existing procedures. We derive a very simple iterative process for fitting these models by maximum likelihood, whereas a direct unconditional maximization would be more difficult. We give simple formulae to estimate the parameter that indexes the transformation of the response variable and the moments of the original dependent variable, which generalize previously published results. We discuss inference on the model parameters. The usefulness of the new class of models is illustrated in an application to a real data set.
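To make the role of the transformation parameter concrete, here is a small sketch of the classical Box-Cox special case (normal errors), profiling the log-likelihood over lambda with SciPy; the paper's wider class of transformed symmetric models and its iterative fitting scheme are not reproduced.

```python
# Sketch of the classical Box-Cox special case (normal errors): profile the
# log-likelihood over the transformation parameter lambda.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
y = rng.lognormal(mean=1.0, sigma=0.5, size=200)     # positive, skewed toy data

lambdas = np.linspace(-1.0, 1.5, 251)
llf = [stats.boxcox_llf(lmb, y) for lmb in lambdas]  # profile log-likelihood
lam_grid = lambdas[int(np.argmax(llf))]

y_bc, lam_mle = stats.boxcox(y)                      # SciPy's internal MLE
print(f"grid estimate: {lam_grid:.3f},  scipy MLE: {lam_mle:.3f}")
```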
Abstract:
Elastic scattering angular distributions of ¹⁶O + ¹²C in the center-of-mass energy range from 8.55 MeV to 56.57 MeV have been analyzed considering the effect of the exchange of an alpha particle between projectile and target leading to the same nuclei as in the entrance channel (elastic transfer). An alpha-particle spectroscopic factor for the ground state of ¹⁶O was determined. (C) 2011 Elsevier B.V. All rights reserved.
Abstract:
We have measured the elastic scattering cross sections for the ⁸Li + ⁹Be and ⁸Li + ⁵¹V systems at 19.6 MeV and 18.5 MeV, respectively. We have also extracted total reaction cross sections from the elastic scattering analysis for several light weakly bound systems using the optical model with Woods-Saxon and double-folding-type potentials. Different reduction methods for the total reaction cross sections have been applied to analyze and compare all the systems simultaneously.
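For reference, the Woods-Saxon form factor mentioned above has a standard analytic form; the short sketch below evaluates it for illustrative depths and geometry parameters, which are assumptions and not the values fitted in the paper.

```python
# Woods-Saxon form factor commonly used for the real and imaginary parts of
# the optical potential. Depths and geometry below are illustrative only,
# not the fitted parameters of the paper.
import numpy as np

def woods_saxon(r, V0, r0, a, A_p, A_t):
    """V(r) = -V0 / (1 + exp((r - R)/a)),  R = r0 * (A_p**(1/3) + A_t**(1/3))."""
    R = r0 * (A_p**(1.0 / 3.0) + A_t**(1.0 / 3.0))
    return -V0 / (1.0 + np.exp((r - R) / a))

r = np.linspace(0.1, 15.0, 300)            # radial mesh [fm]
# Assumed values for an 8Li + 9Be-like system (purely illustrative)
U = woods_saxon(r, V0=100.0, r0=1.25, a=0.65, A_p=8, A_t=9)   # real part [MeV]
W = woods_saxon(r, V0=25.0,  r0=1.25, a=0.65, A_p=8, A_t=9)   # imaginary part [MeV]
print(U[:3], W[:3])
```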
Abstract:
In this paper we present a novel approach for multispectral image contextual classification by combining iterative combinatorial optimization algorithms. The pixel-wise decision rule is defined using a Bayesian approach that combines two MRF models: a Gaussian Markov Random Field (GMRF) for the observations (likelihood) and a Potts model for the a priori knowledge, to regularize the solution in the presence of noisy data. Hence, the classification problem is stated within a Maximum a Posteriori (MAP) framework. In order to approximate the MAP solution we apply several combinatorial optimization methods using multiple simultaneous initializations, making the solution less sensitive to the initial conditions and reducing both computational cost and time in comparison to Simulated Annealing, which is often unfeasible in many real image processing applications. The Markov Random Field model parameters are estimated by the Maximum Pseudo-Likelihood (MPL) approach, avoiding manual adjustments in the choice of the regularization parameters. Asymptotic evaluations assess the accuracy of the proposed parameter estimation procedure. To test and evaluate the proposed classification method, we adopt metrics for quantitative performance assessment (Cohen's Kappa coefficient), allowing a robust and accurate statistical analysis. The obtained results clearly show that combining sub-optimal contextual algorithms significantly improves the classification performance, indicating the effectiveness of the proposed methodology. (C) 2010 Elsevier B.V. All rights reserved.
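As a concrete (and much simplified) illustration of the MAP rule described above, the sketch below runs Iterated Conditional Modes with a per-class Gaussian likelihood and a Potts prior on a synthetic two-class image; the paper's combination of several sub-optimal algorithms and its MPL parameter estimation are not reproduced, and the beta value used here is an arbitrary assumption.

```python
# Minimal sketch: Iterated Conditional Modes (ICM) for approximate MAP
# labeling with a per-class Gaussian likelihood and a Potts prior.
import numpy as np

def icm(image, means, sigmas, beta=1.0, n_iter=10):
    """image: (H, W) grayscale; means/sigmas: per-class Gaussian parameters."""
    k = len(means)
    # Per-pixel, per-class Gaussian log-likelihood (constant terms dropped)
    ll = -0.5 * ((image[..., None] - means) / sigmas) ** 2 - np.log(sigmas)
    labels = np.argmax(ll, axis=-1)        # initial maximum-likelihood labeling
    H, W = image.shape
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                # 4-neighborhood of pixel (i, j)
                nb = [labels[x, y] for x, y in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                      if 0 <= x < H and 0 <= y < W]
                # Potts prior: penalize disagreement with neighboring labels
                prior = np.array([beta * sum(n != c for n in nb) for c in range(k)])
                labels[i, j] = np.argmin(-ll[i, j] + prior)
    return labels

rng = np.random.default_rng(0)
truth = (np.arange(64)[:, None] // 32 + np.arange(64)[None, :] // 32) % 2
img = truth + rng.normal(0, 0.6, size=truth.shape)        # noisy 2-class image
seg = icm(img, means=np.array([0.0, 1.0]), sigmas=np.array([0.6, 0.6]), beta=1.5)
print("pixel agreement with ground truth:", (seg == truth).mean())
```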
Abstract:
The modeling and analysis of lifetime data is an important aspect of statistical work in a wide variety of scientific and technological fields. Good (1953) introduced a probability distribution which is commonly used in the analysis of lifetime data. For the first time, based on this distribution, we propose the so-called exponentiated generalized inverse Gaussian distribution, which extends the exponentiated standard gamma distribution (Nadarajah and Kotz, 2006). Various structural properties of the new distribution are derived, including expansions for its moments, moment generating function, moments of the order statistics, and so forth. We discuss maximum likelihood estimation of the model parameters. The usefulness of the new model is illustrated by means of a real data set. (c) 2010 Elsevier B.V. All rights reserved.
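A hedged sketch of how such a density can be built and fitted numerically is given below, using the standard exponentiated-G construction F(x) = G(x)**a on top of SciPy's generalized inverse Gaussian; the paper's exact parameterization and analytic results may differ from this illustration.

```python
# Sketch of an exponentiated generalized inverse Gaussian density via the
# standard exponentiated-G construction, f(x) = a*g(x)*G(x)**(a-1), using
# SciPy's GIG. The paper's parameterization may differ.
import numpy as np
from scipy import stats, optimize

def egig_logpdf(x, a, p, b):
    g = stats.geninvgauss(p, b)
    return np.log(a) + g.logpdf(x) + (a - 1.0) * np.log(g.cdf(x))

def fit_egig(x):
    """Maximum likelihood by direct numerical optimization (log-parameterized)."""
    def nll(theta):
        a, p_, b = np.exp(theta[0]), theta[1], np.exp(theta[2])
        return -np.sum(egig_logpdf(x, a, p_, b))
    res = optimize.minimize(nll, x0=[0.0, 1.0, 0.0], method="Nelder-Mead",
                            options={"maxiter": 200, "xatol": 1e-3, "fatol": 1e-3})
    return np.exp(res.x[0]), res.x[1], np.exp(res.x[2])

# Toy check on data simulated from a plain GIG (the a = 1 special case)
x = stats.geninvgauss(p=1.5, b=2.0).rvs(size=150, random_state=0)
print(fit_egig(x))
```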
Abstract:
The Laplace distribution is one of the earliest distributions in probability theory. For the first time, based on this distribution, we propose the so-called beta Laplace distribution, which extends the Laplace distribution. Various structural properties of the new distribution are derived, including expansions for its moments, moment generating function, moments of the order statistics, and so forth. We discuss maximum likelihood estimation of the model parameters and derive the observed information matrix. The usefulness of the new model is illustrated by means of a real data set. (C) 2011 Elsevier B.V. All rights reserved.
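Analogously, the sketch below builds a beta Laplace density through the standard beta-G construction on top of SciPy's Laplace distribution; the parameterization is an assumption and may differ from the paper's.

```python
# Sketch of a beta Laplace density via the standard beta-G construction,
# f(x) = g(x) * G(x)**(a-1) * (1 - G(x))**(b-1) / B(a, b), with a Laplace
# baseline. The paper's notation/parameterization may differ.
import numpy as np
from scipy import stats, special

def beta_laplace_pdf(x, a, b, loc=0.0, scale=1.0):
    g = stats.laplace(loc=loc, scale=scale)
    G = g.cdf(x)
    return g.pdf(x) * G**(a - 1.0) * (1.0 - G)**(b - 1.0) / special.beta(a, b)

def beta_laplace_cdf(x, a, b, loc=0.0, scale=1.0):
    # F(x) = I_{G(x)}(a, b), the regularized incomplete beta function
    return special.betainc(a, b, stats.laplace(loc=loc, scale=scale).cdf(x))

x = np.linspace(-6, 6, 7)
print(beta_laplace_pdf(x, a=2.0, b=3.0))
print(beta_laplace_cdf(x, a=2.0, b=3.0))
```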
Abstract:
This work is aimed at studying the adsorption mechanism of short-chain 20-mer pyrimidinic homo ss-DNA (oligodeoxyribonucleotide, ODN: polyC₂₀ and polyT₂₀) onto CNT by reflectometry. To analyze the experimental data, the effective-medium theory using the Bruggemann approximation represents a suitable optical model to account for the surface properties (roughness, thickness, and optical constants) and the size of the adsorbate. Systematic information about the involved interactions is obtained by changing the physicochemical properties of the system. Hydrophobic and electrostatic interactions are evaluated by comparing the adsorption on hydrophobic CNT and on hydrophilic silica and by modulating the ionic strength with and without Mg²⁺. The ODN adsorption process on CNT is driven by hydrophobic interactions only when the electrostatic repulsion is suppressed. The adsorption mode results in ODN molecules in a side-on orientation with the bases (nonpolar region) toward the surface. This unfavorable orientation is partially reversed by adding Mg²⁺. On the other hand, the adsorption on silica is dominated by the strong repulsive electrostatic interaction, which is screened at high ionic strength or mediated by Mg²⁺. The cation-mediated process induces the interaction of the phosphate backbone (polar region) with the surface, leaving the bases free for hybridization. Although the general adsorption behavior of the pyrimidine bases is the same, polyC₂₀ presents a higher affinity for the CNT surface due to its acid-base properties.
Abstract:
Depolymerization of cellulose in homogeneous acidic medium is analyzed on the basis of an autocatalytic model of hydrolysis with a positive feedback of acid production from the degraded biopolymer. The normalized number of scissions per cellulose chain, S(t)/n₀ = 1 - C(t)/C(0), follows a sigmoid behavior with reaction time t, and the cellulose concentration C(t) decreases exponentially with a linear and cubic time dependence, C(t) = C(0) exp(-at - bt³), where a and b are model parameters that are easily determined from data analysis.
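The kinetic form above can be fitted to concentration data by non-linear least squares; the sketch below does this on synthetic data with invented parameter values, purely for illustration.

```python
# Sketch: fitting the stated kinetic form C(t) = C0 * exp(-a*t - b*t**3) to
# (synthetic) concentration data. The values below are made up for
# illustration; they are not the paper's data.
import numpy as np
from scipy.optimize import curve_fit

def conc(t, C0, a, b):
    return C0 * np.exp(-a * t - b * t**3)

t = np.linspace(0, 10, 40)                       # reaction time (arbitrary units)
rng = np.random.default_rng(2)
C_obs = conc(t, 1.0, 0.05, 1e-3) * (1 + 0.02 * rng.normal(size=t.size))

popt, pcov = curve_fit(conc, t, C_obs, p0=[1.0, 0.1, 1e-4])
C0_fit, a_fit, b_fit = popt
print(f"C0={C0_fit:.3f}, a={a_fit:.4f}, b={b_fit:.2e}")
```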
Abstract:
The efficacy of photodynamic therapy (PDT) depends on a variety of parameters: concentration of the photosensitizer at the time of treatment, light wavelength, fluence, fluence rate, availability of oxygen within the illuminated volume, and light distribution in the tissue. Dosimetry in PDT requires adequate amounts of light, drug, and tissue oxygen to come together. Adequate dosimetry should be able to predict the extent of the tissue damage. The photosensitizer photobleaching rate depends on the availability of molecular oxygen in the tissue. Based on photosensitizer photobleaching models, high photobleaching should be associated with high production of singlet oxygen and therefore with stronger photodynamic action, resulting in a greater depth of necrosis. The purpose of this work is to show a possible correlation between the depth of necrosis and the in vivo photodegradation of the photosensitizer (in this case, Photogem®) during PDT. Such a correlation opens possibilities for the development of a real-time evaluation of the photodynamic action during PDT application. Experiments were performed in a range of fluences (0-450 J/cm²) at a constant fluence rate of 250 mW/cm², applying different illumination times (0-1800 s) to achieve the desired fluence. A quantity ψ was defined as the product of the fluorescence ratio (related to the photosensitizer degradation at the surface) and the observed depth of necrosis. The correlation between depth of necrosis and surface fluorescence signal is expressed in ψ and could allow, in principle, a noninvasive monitoring of PDT effects during treatment. A high degree of correlation is observed and a simple mathematical model to justify the results is presented.
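For clarity, a tiny sketch of the defined quantity and of the underlying correlation is given below; the numbers are invented placeholders, not the experimental measurements.

```python
# Tiny sketch of the quantity psi = (surface fluorescence ratio) x (depth of
# necrosis) and of the correlation between the two measured quantities.
# The arrays below are invented placeholders, not the experimental data.
import numpy as np

fluorescence_ratio = np.array([0.15, 0.30, 0.45, 0.60, 0.75])  # dimensionless
necrosis_depth_mm = np.array([0.8, 1.5, 2.1, 2.6, 3.2])        # mm

psi = fluorescence_ratio * necrosis_depth_mm
r = np.corrcoef(fluorescence_ratio, necrosis_depth_mm)[0, 1]   # Pearson r
print("psi:", np.round(psi, 3), " correlation:", round(r, 3))
```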
Abstract:
The reverse engineering problem addressed in the present research consists of estimating the thicknesses and the optical constants of two thin films deposited on a transparent substrate using only transmittance data through the whole stack. No functional dispersion relation assumptions are made on the complex refractive index. Instead, minimal physical constraints are employed, as in previous works of some of the authors where only one film was considered in the retrieval algorithm. To our knowledge, this is the first report on the retrieval of the optical constants and the thickness of multiple film structures using only transmittance data that does not make use of dispersion relations. The same methodology may be used if the available data correspond to normal reflectance. The software used in this work is freely available through the PUMA Project web page (http://www.ime.usp.br/~egbirgin/puma/). (C) 2008 Optical Society of America
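As background for how transmittance through a two-film stack can be modeled and inverted, the sketch below uses the coherent characteristic-matrix model at normal incidence, assumes known constant real indices, neglects the substrate backside reflection, and fits only the two thicknesses; this is a simplified illustration, not the PUMA pointwise-constrained retrieval of wavelength-dependent optical constants.

```python
# Simplified sketch, NOT the PUMA retrieval: assumes known, constant real
# refractive indices for the two films and fits only their thicknesses to a
# transmittance curve (coherent characteristic-matrix model, normal incidence,
# semi-infinite substrate).
import numpy as np
from scipy.optimize import least_squares

def stack_transmittance(wl, d, n_films, n_sub, n0=1.0):
    """Normal-incidence transmittance of thin films on a semi-infinite substrate."""
    T = np.empty_like(wl, dtype=float)
    for k, lam in enumerate(wl):
        M = np.eye(2, dtype=complex)
        for nj, dj in zip(n_films, d):
            delta = 2 * np.pi * nj * dj / lam            # phase thickness
            Mj = np.array([[np.cos(delta), 1j * np.sin(delta) / nj],
                           [1j * nj * np.sin(delta), np.cos(delta)]])
            M = M @ Mj
        B, C = M @ np.array([1.0, n_sub])
        T[k] = 4 * n0 * n_sub / abs(n0 * B + C) ** 2
    return T

wl = np.linspace(500e-9, 1500e-9, 200)                  # wavelengths [m]
n_films, n_sub = (2.1, 1.46), 1.52                      # assumed indices
d_true = (120e-9, 310e-9)                               # "unknown" thicknesses
T_meas = stack_transmittance(wl, d_true, n_films, n_sub)

fit = least_squares(lambda d: stack_transmittance(wl, d, n_films, n_sub) - T_meas,
                    x0=[100e-9, 250e-9], bounds=([10e-9, 10e-9], [1e-6, 1e-6]))
print("recovered thicknesses [nm]:", np.round(np.array(fit.x) * 1e9, 1))
```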
Abstract:
Inductively coupled plasma optical emission spectrometers (ICP OES) allow fast simultaneous measurements of several spectral lines for multiple elements. The combination of signal intensities of two or more emission lines for each element may bring advantages such as improved precision and the minimization of systematic errors caused by spectral interferences and matrix effects. In this work, signal intensities for several spectral lines were combined for the determination of Al, Cd, Co, Cr, Mn, Pb, and Zn in water. Afterwards, parameters for evaluation of the calibration model were calculated to select the combination of emission lines leading to the best accuracy (lowest values of PRESS, the predicted error sum of squares, and RMSEP, the root mean square error of prediction). Limits of detection (LOD) obtained using multiple lines were 7.1, 0.5, 4.4, 0.042, 3.3, 28 and 6.7 µg L⁻¹ (n = 10) for Al, Cd, Co, Cr, Mn, Pb and Zn, respectively, in the presence of concomitants. On the other hand, the LOD established for the most intense emission line were 16, 0.7, 8.4, 0.074, 23, 26 and 9.6 µg L⁻¹ (n = 10) for these same elements in the presence of concomitants. The accuracy of the developed procedure was demonstrated using a water certified reference material. The use of multiple lines improved the sensitivity, making feasible the determination of these analytes at the target values required by current environmental legislation for water samples, and it was also demonstrated that measurements on multiple lines can be employed as a tool to verify the accuracy of an analytical procedure in ICP OES. (C) 2009 Elsevier B.V. All rights reserved.
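To illustrate the line-selection idea (choosing the combination of emission lines that minimizes a prediction error such as RMSEP), here is a small sketch on synthetic calibration and validation data; the intensities, concentrations, and the summed-signal calibration used here are assumptions, not the authors' procedure or measurements.

```python
# Sketch: choose the combination of emission-line intensities that gives the
# lowest RMSEP for a univariate calibration. Intensities below are synthetic
# placeholders, not the authors' ICP OES measurements.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
conc_cal = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0])          # mg/L standards
conc_val = np.array([0.8, 1.5, 3.0, 7.0])                     # validation set
sens = np.array([120.0, 60.0, 25.0])                          # 3 lines, a.u./(mg/L)

def intensities(conc):
    # Linear response per line plus noise (synthetic)
    return conc[:, None] * sens + rng.normal(0, 8.0, size=(conc.size, sens.size))

I_cal, I_val = intensities(conc_cal), intensities(conc_val)

best = None
for r in range(1, sens.size + 1):
    for combo in combinations(range(sens.size), r):
        idx = list(combo)
        x_cal = I_cal[:, idx].sum(axis=1)                     # summed signal
        slope, intercept = np.polyfit(x_cal, conc_cal, 1)     # inverse calibration
        pred = slope * I_val[:, idx].sum(axis=1) + intercept
        rmsep = np.sqrt(np.mean((pred - conc_val) ** 2))
        if best is None or rmsep < best[0]:
            best = (rmsep, combo)
print(f"best combination: lines {best[1]}, RMSEP = {best[0]:.3f} mg/L")
```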
Abstract:
The chronic mild stress (CMS) model has been used as an animal model of depression that induces anhedonic behavior in rodents. The present study aimed to evaluate the behavioral and physiological effects of administration of the β-carboline harmine in rats exposed to the CMS procedure. To this aim, after 40 days of exposure to the CMS procedure, rats were treated with harmine (15 mg/kg/day) for 7 days. In this study, sweet food consumption, adrenal gland weight, adrenocorticotropic hormone (ACTH) levels, and hippocampal brain-derived neurotrophic factor (BDNF) protein levels were assessed. Our findings demonstrated that chronic stressful situations induced anhedonia, adrenal gland hypertrophy, and increased circulating ACTH and BDNF protein levels in rats. Interestingly, treatment with harmine reversed the anhedonia and the increase in adrenal gland weight, and normalized circulating ACTH levels and BDNF protein levels. Finally, these findings further support the hypothesis that harmine could be a new pharmacological tool for the treatment of depression. (C) 2009 Elsevier Inc. All rights reserved.