7 results for gap, minproblem, algoritmi, esatti, lower, bound, posta

in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo


Relevance:

100.00%

Publisher:

Abstract:

We propose a new Skyrme-like model with fields taking values on the sphere S³ or, equivalently, on the group SU(2). The action of the model contains a quadratic kinetic term plus a quartic term which is the same as that of the Skyrme-Faddeev model. The novelty of the model is that it possesses a first-order Bogomolny-type equation whose solutions automatically satisfy the second-order Euler-Lagrange equations. It also possesses a lower bound on the static energy which is saturated by the Bogomolny solutions. This Bogomolny equation is equivalent to the so-called force-free equation used in plasma and solar physics, which possesses large classes of solutions. An old result due to Chandrasekhar prevents the existence of finite-energy solutions of the force-free equation on the entire three-dimensional space R³. We construct new exact finite-energy solutions of the Bogomolny equations for the case where the space is the three-sphere S³, using toroidal-like coordinates.
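The force-free equation referred to in the abstract has a standard form in the plasma-physics literature; the sketch below is a reconstruction in conventional notation (the symbol λ and the schematic bound are our choices, not taken verbatim from the paper):

```latex
% Force-free equation: the field is parallel to its own curl,
% with the proportionality function constant along field lines.
\nabla \times \vec{B} = \lambda \vec{B}, \qquad \vec{B} \cdot \nabla \lambda = 0.
% Bogomolny-type structure: the static energy admits a lower bound,
%   E \ge E_{\min},
% with equality precisely on solutions of the first-order equation,
% which then automatically solve the second-order Euler-Lagrange equations.
```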

Relevance:

100.00%

Publisher:

Abstract:

Background: Heavy-flavor production in p + p collisions is a good test of perturbative-quantum-chromodynamics (pQCD) calculations. Modification of heavy-flavor production in heavy-ion collisions relative to binary-collision scaling from p + p results, quantified with the nuclear-modification factor (R_AA), provides information on both cold- and hot-nuclear-matter effects. Midrapidity heavy-flavor R_AA measurements at the Relativistic Heavy Ion Collider have challenged parton-energy-loss models and resulted in upper limits on the viscosity-entropy ratio that are near the quantum lower bound. Such measurements have not been made in the forward-rapidity region. Purpose: Determine transverse-momentum (p_T) spectra and the corresponding R_AA for muons from heavy-flavor meson decay in p + p and Cu + Cu collisions at √s_NN = 200 GeV and y = 1.65. Method: Results are obtained using the semileptonic decay of heavy-flavor mesons into negative muons. The PHENIX muon-arm spectrometers measure the p_T spectra of inclusive muon candidates. Backgrounds, primarily due to light hadrons, are determined with a Monte Carlo calculation using a set of input hadron distributions tuned to match measured hadron distributions in the same detector, and are statistically subtracted. Results: The charm-production cross section in p + p collisions at √s = 200 GeV, integrated over p_T and in the rapidity range 1.4 < y < 1.9, is found to be dσ_cc̄/dy = 0.139 ± 0.029 (stat) +0.051/−0.058 (syst) mb. This result is consistent with a perturbative fixed-order-plus-next-to-leading-log calculation within scale uncertainties and is also consistent with expectations based on the corresponding midrapidity charm-production cross section measured by PHENIX. The R_AA for heavy-flavor muons in Cu + Cu collisions is measured in three centrality bins for 1 < p_T < 4 GeV/c. Suppression relative to binary-collision scaling (R_AA < 1) increases with centrality.
Conclusions: Within experimental and theoretical uncertainties, the measured charm yield in p + p collisions is consistent with state-of-the-art pQCD calculations. Suppression in central Cu + Cu collisions suggests the presence of significant cold-nuclear-matter effects and final-state energy loss.
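The nuclear-modification factor used throughout this abstract has a simple definition: the yield in heavy-ion collisions divided by the binary-collision-scaled yield in p + p. A minimal sketch, with purely illustrative numbers (not PHENIX data):

```python
# Nuclear-modification factor R_AA per pT bin:
#   R_AA = (1 / <N_coll>) * (dN_AA/dpT) / (dN_pp/dpT)
# R_AA < 1 indicates suppression relative to binary-collision scaling.

def r_aa(yield_aa, n_coll, yield_pp):
    """Compute R_AA bin by bin from heavy-ion and p+p yields."""
    return [ya / (n_coll * yp) for ya, yp in zip(yield_aa, yield_pp)]

# Toy spectra: suppression growing toward higher pT bins (hypothetical)
aa = [50.0, 20.0, 5.0]   # invariant yield in Cu+Cu
pp = [1.0, 0.5, 0.2]     # invariant yield in p+p
print(r_aa(aa, n_coll=100.0, yield_pp=pp))  # each value < 1: suppression
```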

Relevance:

100.00%

Publisher:

Abstract:

We report the detection of CoRoT-23b, a hot Jupiter transiting in front of its host star with a period of 3.6314 ± 0.0001 days. This planet was discovered thanks to photometric data secured with the CoRoT satellite, combined with spectroscopic radial-velocity (RV) measurements. A photometric search for possible background eclipsing binaries conducted at CFHT and OGS concluded with a very low risk of false positives. The usual techniques of combining RV and transit data simultaneously were used to derive stellar and planetary parameters. The planet has a mass of M_p = 2.8 ± 0.3 M_Jup, a radius of R_p = 1.05 ± 0.13 R_Jup, and a density of ≈ 3 g cm⁻³. RV data also clearly reveal a nonzero eccentricity of e = 0.16 ± 0.02. The planet orbits a mature G0 main-sequence star of V = 15.5 mag, with a mass M_star = 1.14 ± 0.08 M_⊙, a radius R_star = 1.61 ± 0.18 R_⊙, and quasi-solar abundances. The age of the system is evaluated to be 7 Gyr, not far from the transition to subgiant, in agreement with the rather large stellar radius. Both a significant orbital eccentricity and a fairly high density are uncommon features for a hot Jupiter. The high density is, however, consistent with a model of contraction of a planet of this mass, given the age of the system. On the other hand, at such an age, circularization is expected to be complete. In fact, we show that for this planetary mass and orbital distance, any initial eccentricity should not totally vanish after 7 Gyr, as long as the tidal quality factor Q_p is more than a few 10⁵, a value at the lower bound of the usually expected range. Even though CoRoT-23b features a density and an eccentricity that are atypical of a hot Jupiter, it is thus not an enigmatic object.
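The quoted bulk density of ≈ 3 g cm⁻³ follows directly from the mass and radius above via ρ = 3M / (4πR³). A quick consistency check, using standard Jupiter values for the unit conversions (our constants, not the paper's):

```python
import math

M_JUP = 1.898e27   # Jupiter mass in kg (standard value, assumed)
R_JUP = 7.1492e7   # Jupiter equatorial radius in m (standard value, assumed)

def mean_density(m_jup, r_jup):
    """Mean density in g/cm^3 of a planet with mass m_jup (Jupiter
    masses) and radius r_jup (Jupiter radii): rho = 3M / (4 pi R^3)."""
    m = m_jup * M_JUP
    r = r_jup * R_JUP
    rho_si = 3.0 * m / (4.0 * math.pi * r ** 3)  # kg/m^3
    return rho_si / 1000.0                       # convert to g/cm^3

# CoRoT-23b central values from the abstract
print(round(mean_density(2.8, 1.05), 1))  # close to the quoted ~3 g/cm^3
```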

Relevance:

100.00%

Publisher:

Abstract:

This work applies genetic algorithms (GA) and particle swarm optimization (PSO) to cash balance management using the Miller-Orr model, a stochastic model that does not define a single ideal cash balance but rather an oscillation range between a lower bound, an ideal balance (return point), and an upper bound. This paper therefore applies GA and PSO to minimize the total cost of cash maintenance by obtaining the lower-bound parameter of the Miller-Orr model, using assumptions presented in the literature. Computational experiments were used to develop and validate the models. The results indicated that both GA and PSO are applicable to determining the cash level from the lower limit, with the best results obtained by the PSO model, which had not previously been applied to this type of problem.
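The Miller-Orr control limits have closed forms once the lower bound is fixed, which is why the paper can treat the lower bound as the decision variable for GA/PSO. A minimal sketch of the textbook model (variable names and the toy inputs are our assumptions, not the paper's data):

```python
# Miller-Orr cash management model: given a lower bound L, a per-transaction
# cost b, the variance of daily cash flows sigma^2, and a daily interest
# rate i, the optimal "spread" is 3 * z where z = (3*b*sigma^2 / (4*i))^(1/3).
# The return point sits one third of the way up from L to the upper bound.

def miller_orr(lower, trans_cost, variance, daily_rate):
    """Return (return_point, upper_bound) for the Miller-Orr model."""
    z = (3.0 * trans_cost * variance / (4.0 * daily_rate)) ** (1.0 / 3.0)
    return_point = lower + z
    upper_bound = lower + 3.0 * z
    return return_point, upper_bound

# Toy parameters: L = 1000, b = 25 per transaction, sigma^2 = 40000, i = 0.03%/day
r, h = miller_orr(lower=1000.0, trans_cost=25.0, variance=40000.0, daily_rate=0.0003)
```

In the paper's setup, a metaheuristic such as PSO would search over `lower` to minimize the resulting total maintenance cost; the closed-form limits above are what each candidate solution is evaluated against.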

Relevance:

100.00%

Publisher:

Abstract:

The ubiquity of time series data across almost all human endeavors has produced a great interest in time series data mining in the last decade. While dozens of classification algorithms have been applied to time series, recent empirical evidence strongly suggests that simple nearest neighbor classification is exceptionally difficult to beat. The choice of distance measure used by the nearest neighbor algorithm is important, and depends on the invariances required by the domain. For example, motion capture data typically requires invariance to warping, and cardiology data requires invariance to the baseline (the mean value). Similarly, recent work suggests that for time series clustering, the choice of clustering algorithm is much less important than the choice of distance measure used.

In this work we make a somewhat surprising claim. There is an invariance that the community seems to have missed: complexity invariance. Intuitively, the problem is that in many domains the different classes may have different complexities, and pairs of complex objects, even those which subjectively may seem very similar to the human eye, tend to be further apart under current distance measures than pairs of simple objects. This fact introduces errors in nearest neighbor classification, where some complex objects may be incorrectly assigned to a simpler class. Similarly, for clustering this effect can introduce errors by “suggesting” to the clustering algorithm that subjectively similar but complex objects belong in a sparser and larger-diameter cluster than is truly warranted.

We introduce the first complexity-invariant distance measure for time series, and show that it generally produces significant improvements in classification and clustering accuracy. We further show that this improvement does not compromise efficiency, since we can lower bound the measure and use a modification of the triangular inequality, thus making use of most existing indexing and data mining algorithms.
We evaluate our ideas with the largest and most comprehensive set of time series mining experiments ever attempted in a single work, and show that complexity-invariant distance measures can produce improvements in classification and clustering in the vast majority of cases.
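The complexity-invariant idea can be sketched compactly: estimate each series' complexity from the squared successive differences, then scale the Euclidean distance by the ratio of the larger to the smaller complexity. The sketch below follows this general recipe; the zero-complexity guard is our simplification for flat series:

```python
import math

def complexity(ts):
    """Complexity estimate of a time series: the length of the series
    when 'stretched out', i.e. sqrt of the sum of squared successive
    differences. A flat line has complexity 0."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(ts, ts[1:])))

def cid(q, c):
    """Complexity-invariant distance: Euclidean distance scaled by the
    complexity ratio max(CE)/min(CE), so pairs with mismatched
    complexities are pushed further apart."""
    ed = math.sqrt(sum((a - b) ** 2 for a, b in zip(q, c)))
    ce_q, ce_c = complexity(q), complexity(c)
    if min(ce_q, ce_c) == 0.0:   # guard: at least one series is flat
        return ed
    return ed * max(ce_q, ce_c) / min(ce_q, ce_c)

# Equal complexities: CID reduces to plain Euclidean distance.
print(cid([0, 1, 0, 1, 0], [0, 1, 2, 1, 0]))
# Mismatched complexities: the correction factor inflates the distance.
print(cid([0, 1, 0, 1], [0, 0.1, 0, 0.1]))
```

Because the correction factor is always ≥ 1, CID never shrinks below the Euclidean distance, which is what makes the lower-bounding strategy mentioned in the abstract possible.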

Relevance:

30.00%

Publisher:

Abstract:

Objective: The aim of this study was to compare the correspondence between gap formation and apical microleakage in root canals filled with an epoxy resin-based sealer (AH Plus), combined or not with a resinous primer, or with a dimethacrylate-based root canal sealer (Epiphany). Material and Methods: Thirty-nine lower single-rooted human premolars were filled by the lateral condensation technique (LC) and immersed in a 50 wt% aqueous silver nitrate solution at 37°C (24 h). After longitudinal sectioning, epoxy resin replicas were made from the tooth specimens. Both the replicas and the specimens were prepared for scanning electron microscopy (SEM). The gaps were observed in the replicas. Apical microleakage was detected in the specimens by SEM/energy-dispersive spectroscopy (SEM/EDS). The data were analyzed statistically using an ordinal logistic regression model and analysis of correspondence (α=0.05). Results: Epiphany presented more regions containing gaps between dentin and sealer (p<0.05). There was correspondence between the presence of gaps and microleakage (p<0.05). Microleakage was similar among the root-filling materials (p>0.05). Conclusions: The resinous primer did not improve the sealing ability of the AH Plus sealer, and the presence of gaps had an effect on apical microleakage for all materials.

Relevance:

30.00%

Publisher:

Abstract:

The competition between confinement-potential fluctuations and band-gap renormalization (BGR) in GaAs/Al_xGa_(1-x)As quantum wells grown on [1 0 0] and [3 1 1]A GaAs substrates is evaluated. The results clearly demonstrate the coexistence of band-tail state filling related to potential fluctuations and band-gap renormalization caused by an increase in the density of photogenerated carriers during the photoluminescence (PL) experiments. Both phenomena have a strong influence on the temperature dependence of the PL-peak energy, E_PL(T). As the photon density increases, E_PL can shift to either higher or lower energies, depending on the sample temperature. The temperature at which the displacement changes from a blueshift to a redshift is governed by the magnitude of the potential fluctuations and by the variation of BGR with excitation density. A simple band-tail model with a Gaussian-like distribution of the density of states was used to describe the competition between band-tail filling and BGR effects on E_PL(T).