997 results for Problem Resolution
Abstract:
Context. It is debated whether the Milky Way bulge has characteristics more similar to those of a classical bulge than those of a pseudobulge. Detailed abundance studies of bulge stars are important when investigating the origin, history, and classification of the bulge. These studies provide constraints on the star-formation history, initial mass function, and differences between stellar populations. Not many similar studies have been completed because of the large distance and high and variable visual extinction along the line-of-sight towards the bulge. Therefore, near-IR investigations can provide superior results. Aims. We aim to investigate the origin of the bulge and study its chemical abundances, determined from near-IR spectra, for bulge giants that have already been investigated with optical spectra. The optical spectra also provide the stellar parameters that are very important to the present study. In particular, the important CNO elements are determined more accurately in the near-IR. Oxygen and other alpha elements are important for investigating the star-formation history. The C and N abundances are important for determining the evolutionary stage of the giants and the origin of C in the bulge. Methods. High-resolution, near-infrared spectra in the H band were recorded using the CRIRES spectrometer mounted on the Very Large Telescope. The CNO abundances are determined from the numerous molecular lines in the wavelength range observed. Abundances of the alpha elements Si, S, and Ti are also determined from the near-IR spectra. Results. The abundance ratios [O/Fe], [Si/Fe], and [S/Fe] are enhanced up to metallicities of at least [Fe/H] = -0.3, after which they decline. This suggests that the Milky Way bulge experienced a rapid and early burst of star formation similar to that of a classical bulge. However, a similarity between the bulge trend and the trend of the local thick disk seems to be present.
This similarity suggests that the bulge could have had a pseudobulge origin. The C and N abundances suggest that our giants are first-ascent red giants or clump stars, and that the measured oxygen abundances are those with which the stars were born. Our [C/Fe] trend does not show any increase with [Fe/H], which would be expected if W-R stars had contributed substantially to the C abundances. No "cosmic scatter" can be traced around our observed abundance trends: the measured scatter is as expected, given the observational uncertainties.
Abstract:
We develop an automated spectral synthesis technique for the estimation of metallicities ([Fe/H]) and carbon abundances ([C/Fe]) for metal-poor stars, including carbon-enhanced metal-poor stars, for which other methods may prove insufficient. This technique, autoMOOG, is designed to operate on relatively strong features visible in even low- to medium-resolution spectra, yielding results comparable to much more telescope-intensive high-resolution studies. We validate this method by comparison with 913 stars that have existing high-resolution and low- to medium-resolution spectra and that cover a wide range of stellar parameters. We find that at low metallicities ([Fe/H] ≲ -2.0), we successfully recover both the metallicity and carbon abundance, where possible, with an accuracy of ~0.20 dex. At higher metallicities, due to issues of continuum placement in the spectral normalization done prior to running autoMOOG, a general underestimate of the overall metallicity of a star is seen, although the carbon abundance is still successfully recovered. As a result, this method is only recommended for use on samples of stars of known, sufficiently low metallicity. For these low-metallicity stars, however, autoMOOG performs much more consistently and quickly than similar existing techniques, which should allow for analyses of large samples of metal-poor stars in the near future. Steps to improve and correct the continuum placement difficulties are being pursued.
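The core idea of matching observed features against a grid of synthetic templates can be illustrated with a toy chi-square fit. This sketch is not autoMOOG: the single-Gaussian line model, its depth-versus-metallicity scaling, and all parameter values are invented purely for illustration.

```python
import math

def synth(feh, wavelengths, center=4000.0, width=1.5):
    """Toy synthetic spectrum: one Gaussian absorption line whose depth
    grows with metallicity (purely illustrative, not a MOOG model)."""
    depth = min(0.9, 0.3 * 10 ** (feh + 2.0))   # deeper line at higher [Fe/H]
    return [1.0 - depth * math.exp(-0.5 * ((w - center) / width) ** 2)
            for w in wavelengths]

def fit_feh(observed, wavelengths, grid):
    """Grid-search chi-square fit of [Fe/H] against synthetic templates."""
    def chi2(feh):
        model = synth(feh, wavelengths)
        return sum((o - m) ** 2 for o, m in zip(observed, model))
    return min(grid, key=chi2)

wl = [3990.0 + 0.2 * k for k in range(101)]
obs = synth(-2.3, wl)                        # noiseless toy "observation"
grid = [-4.0 + 0.1 * k for k in range(31)]   # candidate [Fe/H] from -4.0 to -1.0
print(fit_feh(obs, wl, grid))                # recovers a value close to -2.3
```

In the real problem the fit is complicated by noise and by the continuum-placement issue the abstract describes; the toy's flat continuum sidesteps exactly that difficulty.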
Abstract:
Aims. An analytical solution for the discrepancy between observed core-like profiles and predicted cusp profiles in dark matter halos is studied. Methods. We calculate the distribution function for Navarro-Frenk-White halos and extract energy from the distribution, taking into account the effects of baryonic physics processes. Results. We show with a simple argument that we can reproduce the evolution of a cusp to a flat density profile by a decrease of the initial potential energy.
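The cusp-core discrepancy referred to above can be made concrete numerically: the Navarro-Frenk-White profile has an inner logarithmic density slope of -1 (a cusp), while observed core-like profiles flatten toward slope 0. A short self-contained check, with the normalizations rho0 and rs set to 1 for illustration:

```python
import math

def rho_nfw(r, rho0=1.0, rs=1.0):
    """Navarro-Frenk-White density profile: rho0 / ((r/rs) * (1 + r/rs)^2)."""
    x = r / rs
    return rho0 / (x * (1.0 + x) ** 2)

def log_slope(rho, r, h=1e-6):
    """Logarithmic slope d ln(rho) / d ln(r), by central differences."""
    return (math.log(rho(r * (1 + h))) - math.log(rho(r * (1 - h)))) / (2 * h)

print(log_slope(rho_nfw, 1e-4))  # ~ -1: the central cusp
print(log_slope(rho_nfw, 1e4))   # ~ -3: the outer fall-off
```

A cored halo would instead show a slope near 0 at small r, which is the flat density profile the abstract's energy-extraction argument aims to reproduce.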
Abstract:
Identifying hadronic molecular states and/or hadrons with multiquark components either with or without exotic quantum numbers is a long-standing challenge in hadronic physics. We suggest that studying the production of these hadrons in relativistic heavy ion collisions offers a promising resolution to this problem as yields of exotic hadrons are expected to be strongly affected by their structures. Using the coalescence model for hadron production, we find that, compared to the case of a nonexotic hadron with normal quark numbers, the yield of an exotic hadron is typically an order of magnitude smaller when it is a compact multiquark state and a factor of 2 or more larger when it is a loosely bound hadronic molecule. We further find that some of the newly proposed heavy exotic states could be produced and realistically measured in these experiments.
Abstract:
The energy spectrum of an electron confined in a quantum dot (QD) with a three-dimensional anisotropic parabolic potential in a tilted magnetic field was found analytically. The theory describes exactly the mixing of in-plane and out-of-plane motions of an electron caused by a tilted magnetic field, which could be seen, for example, in the level anticrossing. For charged QDs in a tilted magnetic field we predict three strong resonant lines in the far-infrared-absorption spectra.
Abstract:
High-resolution synchrotron x-ray diffraction measurements were performed on single crystalline and powder samples of BiMn2O5. A linear temperature dependence of the unit cell volume was found between T_N = 38 K and 100 K, suggesting that a low-energy lattice excitation may be responsible for the lattice expansion in this temperature range. Between T* ~ 65 K and T_N, all lattice parameters showed incipient magnetoelastic effects, due to short-range spin correlations. An anisotropic strain along the a direction was also observed below T*. Below T_N, a relatively large contraction of the a parameter following the square of the average sublattice magnetization of Mn was found, indicating that a second-order spin Hamiltonian accounts for the magnetic interactions along this direction. On the other hand, the more complex behaviors found for b and c suggest additional magnetic transitions below T_N and perhaps higher-order terms in the spin Hamiltonian. Polycrystalline samples grown by distinct routes and with nearly homogeneous crystal structure above T_N presented structural phase coexistence below T_N, indicating a close competition among distinct magnetostructural states in this compound.
Abstract:
An exciting unsolved problem in the study of high energy processes of early type stars concerns the physical mechanism for producing X-rays near the Be star gamma Cassiopeiae. By now we know that this source and several "gamma Cas analogs" exhibit an unusually hard thermal X-ray spectrum, compared both to normal massive stars and the non-thermal emission of known Be/X-ray binaries. Also, its light curve is variable on almost all conceivable timescales. In this study we reanalyze a high dispersion spectrum obtained by Chandra in 2001 and combine it with the analysis of a new (2004) spectrum and light curve obtained by XMM-Newton. We find that both spectra can be fit well with 3-4 optically thin, thermal components consisting of a hot component having a temperature kT_Q ~ 12-14 keV, perhaps one with a value of ~2.4 keV, and two with well defined values near 0.6 keV and 0.11 keV. We argue that these components arise in discrete (almost monothermal) plasmas. Moreover, they cannot be produced within an integral gas structure or by the cooling of a dominant hot process. Consistent with earlier findings, we also find that the Fe abundance arising from K-shell ions is significantly subsolar and less than the Fe abundance from L-shell ions. We also find novel properties not present in the earlier Chandra spectrum, including a dramatic decrease in the local photoelectric absorption of soft X-rays, a decrease in the strength of the Fe and possibly of the Si K fluorescence features, underpredicted lines in two ions each of Ne and N (suggesting abundances that are ~1.5-3x and ~4x solar, respectively), and broadening of the strong Ne X Ly-alpha and O VIII Ly-alpha lines.
In addition, we note certain traits in the gamma Cas spectrum that are different from those of the fairly well studied analog HD110432 - in this sense the stars have different "personalities." In particular, for gamma Cas the hot X-ray component remains nearly constant in temperature, while the photoelectric absorption of the X-ray plasmas can change dramatically. As found by previous investigators of gamma Cas, changes in flux, whether occurring slowly or in rapidly evolving flares, are only seldom accompanied by variations in hardness. Moreover, the light curve can show a "periodicity" that is due to the presence of flux minima that recur semiregularly over a few hours, and which can appear again at different epochs.
Abstract:
Efficient automatic protein classification is of central importance in genomic annotation. As an independent way to check the reliability of the classification, we propose a statistical approach to test if two sets of protein domain sequences coming from two families of the Pfam database are significantly different. We model protein sequences as realizations of Variable Length Markov Chains (VLMC) and we use the context trees as a signature of each protein family. Our approach is based on a Kolmogorov-Smirnov-type goodness-of-fit test proposed by Balding et al. [Limit theorems for sequences of random trees (2008), DOI: 10.1007/s11749-008-0092-z]. The test statistic is a supremum, over the space of trees, of a function of the two samples; its computation grows, in principle, exponentially fast with the maximal number of nodes of the potential trees. We show how to transform this problem into a max-flow problem over a related graph, which can be solved using the Ford-Fulkerson algorithm in time polynomial in that number. We apply the test to 10 randomly chosen protein domain families from the seed of the Pfam-A database (high quality, manually curated families). The test shows that the distributions of context trees coming from different families are significantly different. We emphasize that this is a novel mathematical approach to validate the automatic clustering of sequences in any context. We also study the performance of the test via simulations on Galton-Watson related processes.
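The reduction described above ends in a max-flow computation. Purely as an illustration of that building block (not the authors' code or their graph construction), here is a minimal BFS-based Ford-Fulkerson (Edmonds-Karp) implementation on a small integer-capacity network:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp (BFS-based Ford-Fulkerson) on a dict-of-dicts graph."""
    # build residual capacities, adding zero-capacity reverse edges
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow          # no augmenting path left: flow is maximal
        # find the bottleneck along the path, then augment
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

# a well-known small textbook network; its maximum s -> t flow is 23
graph = {
    's': {'a': 16, 'b': 13},
    'a': {'b': 10, 'c': 12},
    'b': {'a': 4, 'd': 14},
    'c': {'b': 9, 't': 20},
    'd': {'c': 7, 't': 4},
    't': {},
}
print(max_flow(graph, 's', 't'))  # 23
```

With BFS choosing shortest augmenting paths, the number of augmentations is O(VE), which is the polynomial behavior the abstract relies on.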
Abstract:
The width of a closed convex subset of n-dimensional Euclidean space is the distance between two parallel supporting hyperplanes. The Blaschke-Lebesgue problem consists of minimizing the volume in the class of convex sets of fixed constant width and is still open in dimension n >= 3. In this paper we describe a necessary condition that the minimizer of the Blaschke-Lebesgue problem must satisfy in dimension n = 3: we prove that the smooth components of the boundary of the minimizer have their smaller principal curvature constant and are therefore either spherical caps or pieces of tubes (canal surfaces).
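The definition of width in the first sentence is easy to compute numerically in the plane: project the body onto each direction and take the minimum spread. A small sketch (directions sampled on a grid, bodies given by boundary points, both choices ours for illustration):

```python
import math

def width(points, samples=3600):
    """Width of a planar convex body given by boundary points: the minimum
    over directions u of max<p,u> - min<p,u>, i.e. the smallest distance
    between two parallel supporting lines."""
    best = float('inf')
    for k in range(samples):
        t = math.pi * k / samples              # directions cover a half-turn
        u = (math.cos(t), math.sin(t))
        proj = [p[0] * u[0] + p[1] * u[1] for p in points]
        best = min(best, max(proj) - min(proj))
    return best

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
disc = [(math.cos(2 * math.pi * k / 720), math.sin(2 * math.pi * k / 720))
        for k in range(720)]
print(width(square))  # 1.0: the minimal spread is across opposite sides
print(width(disc))    # ~2.0: a disc of radius 1 has constant width 2
```

For the disc the spread is (nearly) the same in every direction, which is exactly the constant-width property of the sets over which the Blaschke-Lebesgue problem minimizes.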
Abstract:
The enzymatic kinetic resolution of tert-butyl 2-(1-hydroxyethyl)phenylcarbamate via lipase-catalyzed transesterification was studied. We investigated several reaction conditions, and the carbamate was resolved by Candida antarctica lipase B (CAL-B), leading to the optically pure (R)- and (S)-enantiomers. The enzymatic process showed excellent enantioselectivity (E > 200). (R)- and (S)-tert-butyl 2-(1-hydroxyethyl)phenylcarbamate were easily transformed into the corresponding (R)- and (S)-1-(2-aminophenyl)ethanols.
Abstract:
Large scale enzymatic resolution of racemic sulcatol 2 has proved useful for stereoselective biocatalysis. The reaction was fast and selective, using vinyl acetate as the acyl donor and lipase from Candida antarctica (CALB) as the catalyst. The large scale reaction (5.0 g, 39 mmol) afforded high optical purities for S-(+)-sulcatol 2 and R-(+)-sulcatyl acetate 3, i.e., ee > 99 per cent, and good yields (45 per cent) within a short time (40 min). Thermodynamic parameters for the chemoesterification of sulcatol 2 by vinyl acetate were evaluated. The enthalpy and Gibbs free energy values of this reaction were negative, indicating that the process is exothermic and spontaneous, in agreement with the enzymatic results.
Abstract:
The first problem of the Seleucid mathematical cuneiform tablet BM 34568 calculates the diagonal of a rectangle from its sides without resorting to the Pythagorean rule. For this reason, it has been a source of discussion among specialists ever since its first publication, but so far no consensus in relation to its mathematical meaning has been attained. This paper presents two new interpretations of the scribe's procedure, based on the assumption that he was able to reduce the problem to a standard Mesopotamian question about reciprocal numbers. These new interpretations are then linked to interpretations of the Old Babylonian tablet Plimpton 322 and to the presence of Pythagorean triples in the contexts of Old Babylonian and Hellenistic mathematics. (C) 2007 Elsevier Inc. All rights reserved.
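The link between reciprocal pairs and Pythagorean triples mentioned above rests on the identity ((x + 1/x)/2)^2 - ((x - 1/x)/2)^2 = 1: clearing denominators turns a rational reciprocal pair into an integer triple. A sketch of this standard reconstruction (one common reading of Plimpton 322, not the paper's specific argument) for x > 1:

```python
import math
from fractions import Fraction

def triple_from_reciprocal_pair(x):
    """From a reciprocal pair (x, 1/x) with x > 1: since
    ((x + 1/x)/2)^2 - ((x - 1/x)/2)^2 = 1, scaling by the common
    denominator yields an integer Pythagorean triple (a, b, c)."""
    a = (x - 1 / x) / 2
    c = (x + 1 / x) / 2
    d = a.denominator * c.denominator // math.gcd(a.denominator, c.denominator)
    return (int(a * d), d, int(c * d))

print(triple_from_reciprocal_pair(Fraction(9, 5)))   # (28, 45, 53)
print(triple_from_reciprocal_pair(Fraction(12, 5)))  # (119, 120, 169)
```

The pair (119, 169), with short side 120, is the one commonly associated with the first row of Plimpton 322.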
Abstract:
We consider a class of two-dimensional problems in classical linear elasticity for which material overlapping occurs in the absence of singularities. Of course, material overlapping is not physically realistic, and one possible way to prevent it uses a constrained minimization theory. In this theory, a minimization problem consists of minimizing the total potential energy of a linear elastic body subject to the constraint that the deformation field must be locally invertible. Here, we use an interior and an exterior penalty formulation of the minimization problem together with both a standard finite element method and classical nonlinear programming techniques to compute the minimizers. We compare both formulations by solving a plane problem numerically in the context of the constrained minimization theory. The problem has a closed-form solution, which is used to validate the numerical results. This solution is regular everywhere, including the boundary. In particular, we show numerical results which indicate that, for a fixed finite element mesh, the sequences of numerical solutions obtained with both the interior and the exterior penalty formulations converge to the same limit function as the penalization is enforced. This limit function yields an approximate deformation field to the plane problem that is locally invertible at all points in the domain. As the mesh is refined, this field converges to the exact solution of the plane problem.
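The interior/exterior penalty idea above can be caricatured in one dimension: minimize an energy f subject to a constraint g(x) >= 0, either by charging violations quadratically (exterior) or by adding a logarithmic barrier that blows up at the constraint boundary (interior). The objective, constraint, and brute-force minimizer below are toy stand-ins of ours, not the paper's finite element formulation:

```python
import math

def argmin(F, lo, hi, grid=40001):
    """Brute-force minimizer of F over a uniform grid on [lo, hi]."""
    h = (hi - lo) / (grid - 1)
    return min((lo + k * h for k in range(grid)), key=F)

f = lambda x: (x - 2.0) ** 2   # stand-in for the total potential energy
g = lambda x: 1.0 - x          # constraint g(x) >= 0 (stand-in for invertibility)

mu = 1e6    # exterior penalty weight: violations of g are charged quadratically
eps = 1e-6  # interior barrier weight: -eps*log(g) repels iterates from g = 0

exterior = argmin(lambda x: f(x) + mu * max(-g(x), 0.0) ** 2, -1.0, 3.0)
interior = argmin(lambda x: f(x) - eps * math.log(g(x)) if g(x) > 0 else float('inf'),
                  -1.0, 3.0)
print(exterior, interior)  # both approach the constrained minimizer x = 1
```

As the penalization is enforced (mu -> infinity, eps -> 0), both formulations converge to the same constrained minimizer, mirroring the convergence behavior reported in the abstract; note that the interior iterate stays strictly feasible while the exterior one may sit slightly on the infeasible side.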
Abstract:
This paper addresses the time-variant reliability analysis of structures with random resistance or random system parameters. It deals with the problem of a random load process crossing a random barrier level. The implications of approximating the arrival rate of the first overload by an ensemble-crossing rate are studied. The error involved in this so-called "ensemble-crossing rate" approximation is described in terms of load process and barrier distribution parameters, and in terms of the number of load cycles. Existing results are reviewed, and significant improvements involving load process bandwidth, mean-crossing frequency and time are presented. The paper shows that the ensemble-crossing rate approximation can be accurate enough for problems where the load process variance is large in comparison to the barrier variance, and especially when the number of load cycles is small. This includes important practical applications like random vibration due to impact loadings and earthquake loading. Two application examples are presented, one involving earthquake loading and one involving a frame structure subject to wind and snow loadings. (C) 2007 Elsevier Ltd. All rights reserved.
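A toy numerical illustration of the ensemble-crossing idea: with independent normal load cycles and a normal barrier (simplifying assumptions of ours, not necessarily the paper's load process model), the exact first-overload probability conditions on the barrier, whereas the approximation treats the per-cycle overload probability against the barrier *ensemble* as a Poisson rate. In the regime the abstract identifies (load variance large relative to barrier variance, few cycles), the two agree closely:

```python
import math

phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF

def exact_pf(n, mu_r, sig_r, mu_s, sig_s, grid=2001):
    """P(first overload within n cycles): condition on the random barrier R
    and integrate numerically over its normal density."""
    lo, hi = mu_r - 8 * sig_r, mu_r + 8 * sig_r
    h = (hi - lo) / (grid - 1)
    total = 0.0
    for k in range(grid):
        r = lo + k * h
        pdf = math.exp(-0.5 * ((r - mu_r) / sig_r) ** 2) / (sig_r * math.sqrt(2 * math.pi))
        surv = phi((r - mu_s) / sig_s) ** n          # no overload in n cycles
        total += pdf * (1.0 - surv) * h
    return total

def ensemble_pf(n, mu_r, sig_r, mu_s, sig_s):
    """Ensemble-crossing-rate approximation: use the per-cycle overload
    probability against the barrier ensemble as a Poisson arrival rate."""
    nu = 1.0 - phi((mu_r - mu_s) / math.sqrt(sig_r ** 2 + sig_s ** 2))
    return 1.0 - math.exp(-n * nu)

# load variance large relative to barrier variance, few cycles:
pe = exact_pf(10, mu_r=5.0, sig_r=0.1, mu_s=0.0, sig_s=2.0)
pa = ensemble_pf(10, mu_r=5.0, sig_r=0.1, mu_s=0.0, sig_s=2.0)
print(pe, pa)  # the two estimates agree to within about one percent here
```

Increasing sig_r relative to sig_s, or increasing n, widens the gap between the two estimates, which is qualitatively the error behavior the paper quantifies.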
Abstract:
Electrical impedance tomography (EIT) captures images of internal features of a body. Electrodes are attached to the boundary of the body, low intensity alternating currents are applied, and the resulting electric potentials are measured. Then, based on the measurements, an estimation algorithm obtains the three-dimensional internal admittivity distribution that corresponds to the image. One of the main goals of medical EIT is to achieve high resolution and an accurate result at low computational cost. However, when the finite element method (FEM) is employed and the corresponding mesh is refined to increase resolution and accuracy, the computational cost increases substantially, especially in the estimation of absolute admittivity distributions. Therefore, we consider in this work a fast iterative solver for the forward problem, which was previously reported in the context of structural optimization. We propose several improvements to this solver to increase its performance in the EIT context. The solver is based on the recycling of approximate invariant subspaces, and it is applied to reduce the EIT computation time for a constant and high resolution finite element mesh. In addition, we consider a powerful preconditioner and provide a detailed pseudocode for the improved iterative solver. The numerical results show the effectiveness of our approach: the proposed algorithm is faster than the preconditioned conjugate gradient (CG) algorithm. The results also show that even on a standard PC without parallelization, a high mesh resolution (more than 150,000 degrees of freedom) can be used for image estimation at a relatively low computational cost. (C) 2010 Elsevier B.V. All rights reserved.
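For context on the baseline the abstract compares against, a preconditioned conjugate gradient iteration for a symmetric positive definite system can be sketched as follows. This is a generic textbook PCG with a simple Jacobi (diagonal) preconditioner on a tiny dense system, not the subspace-recycling solver or the preconditioner proposed in the paper:

```python
def pcg(A, b, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradients for SPD A (dense, list of rows)
    with a Jacobi preconditioner M = diag(A)."""
    n = len(b)
    matvec = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    x = [0.0] * n
    r = b[:]                                   # residual b - A x0, with x0 = 0
    z = [r[i] / A[i][i] for i in range(n)]     # apply M^{-1} = diag(A)^{-1}
    p = z[:]
    rz = dot(r, z)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if dot(r, r) ** 0.5 < tol:
            break
        z = [r[i] / A[i][i] for i in range(n)]
        rz_new = dot(r, z)
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [6.0, 10.0, 8.0]
print(pcg(A, b))  # converges to the solution [1, 2, 3]
```

In EIT the forward problem yields one such SPD system per current pattern and per Newton step, which is why a solver that recycles spectral information across these closely related systems, as the paper proposes, can beat restarting PCG from scratch each time.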