47 results for "restriction of parameter space"
Abstract:
Support vector machines (SVMs) were originally formulated for the solution of binary classification problems. In multiclass problems, a decomposition approach is often employed, in which the multiclass problem is divided into multiple binary subproblems, whose results are combined. Generally, the performance of SVM classifiers is affected by the selection of values for their parameters. This paper investigates the use of genetic algorithms (GAs) to tune the parameters of the binary SVMs in common multiclass decompositions. The developed GA may search for a set of parameter values common to all binary classifiers or for differentiated values for each binary classifier.
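A minimal sketch of the idea, not the authors' implementation: a simple genetic algorithm evolving one (C, gamma) pair per binary classifier of a one-vs-one SVM decomposition. The dataset, fitness definition and GA settings below are illustrative assumptions.

```python
# Sketch: GA tuning of per-pair (C, gamma) values for a one-vs-one SVM
# decomposition. Illustrative assumptions throughout (dataset, ranges, GA settings).
import itertools
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)
classes = np.unique(y)                                  # assumes integer labels 0..K-1
pairs = list(itertools.combinations(classes, 2))        # one binary subproblem per pair

def fitness(genome):
    """genome[k] = (log2 C, log2 gamma) for the k-th binary subproblem."""
    votes = np.zeros((len(X_va), len(classes)))
    for (a, b), (log_c, log_g) in zip(pairs, genome):
        mask = np.isin(y_tr, (a, b))
        clf = SVC(C=2.0 ** log_c, gamma=2.0 ** log_g, kernel="rbf")
        clf.fit(X_tr[mask], y_tr[mask])
        for i, p in enumerate(clf.predict(X_va)):
            votes[i, p] += 1                            # majority vote over binary outputs
    return np.mean(votes.argmax(axis=1) == y_va)

pop = rng.uniform(-5.0, 5.0, size=(20, len(pairs), 2))  # log2-scale genomes
for gen in range(15):
    scores = np.array([fitness(g) for g in pop])
    best = pop[np.argmax(scores)].copy()
    i, j = rng.integers(0, len(pop), size=(2, len(pop)))      # tournament selection
    winners = np.where((scores[i] >= scores[j])[:, None, None], pop[i], pop[j])
    pop = winners + rng.normal(0.0, 0.5, size=winners.shape)  # Gaussian mutation
    pop[0] = best                                             # elitism
print("best one-vs-one validation accuracy:", fitness(best))
```

Restricting the search to a single (C, gamma) shared by all binary classifiers corresponds to a genome with one pair broadcast to every subproblem.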
Abstract:
In a 2D parameter space, by using nine experimental time series of a Clitia's circuit, we characterized three codimension-1 chaotic fibers parallel to a period-3 window. To show the local preservation of the properties of the chaotic attractors in each fiber, we applied the close-returns technique and two distinct topological methods. With the first topological method we calculated the linking numbers in the sets of unstable periodic orbits, and with the second one we obtained the symbolic planes and the topological entropies by applying symbolic dynamics analysis.
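As a rough illustration of the symbolic-dynamics step only (the map, the binary partition and the sequence length below are assumptions, not the circuit data), the topological entropy of a symbolic sequence can be estimated from the growth rate of the number of distinct length-n words:

```python
# Sketch: estimate topological entropy h ~ (1/n) * log(number of distinct
# length-n words) of a symbolic sequence. The logistic-map symbols are only
# a stand-in for the experimental symbolic sequences of the circuit.
import numpy as np

def symbolic_sequence(n_iter=20000, r=3.9, x0=0.4):
    """Binary symbols from the logistic map (0: x < 0.5, 1: x >= 0.5)."""
    x, symbols = x0, []
    for _ in range(n_iter):
        x = r * x * (1.0 - x)
        symbols.append(1 if x >= 0.5 else 0)
    return symbols

def topological_entropy(symbols, n=10):
    """Growth-rate estimate over words of length n."""
    words = {tuple(symbols[i:i + n]) for i in range(len(symbols) - n + 1)}
    return np.log(len(words)) / n

seq = symbolic_sequence()
print("entropy estimate:", topological_entropy(seq))  # bounded by log(2) for binary symbols
```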
Abstract:
Models of dynamical dark energy unavoidably possess fluctuations in the energy density and pressure of that new component. In this paper we estimate the impact of dark energy fluctuations on the number of galaxy clusters in the Universe using a generalization of the spherical collapse model and the Press-Schechter formalism. The observations we consider are several hypothetical Sunyaev-Zel'dovich and weak lensing (shear maps) cluster surveys, with limiting masses similar to ongoing (SPT, DES) as well as future (LSST, Euclid) surveys. Our statistical analysis is performed in a 7-dimensional cosmological parameter space using the Fisher matrix method. We find that, in some scenarios, the impact of these fluctuations is large enough that their effect could already be detected by existing instruments such as the South Pole Telescope, when priors from other standard cosmological probes are included. We also show how dark energy fluctuations can be a nuisance for constraining cosmological parameters with cluster counts, and point to a degeneracy between the parameter that describes dark energy pressure on small scales (the effective sound speed) and the parameters describing its equation of state.
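A generic sketch of the Fisher-matrix forecasting step for binned cluster counts with Poisson statistics; the toy counts model and its two parameters are placeholders, not the paper's mass function, survey selection or 7-parameter space.

```python
# Sketch: Fisher matrix F_ij = sum_bins (dN_b/dtheta_i)(dN_b/dtheta_j) / N_b
# for Poisson-distributed bin counts, with derivatives by finite differences.
import numpy as np

def counts_model(theta, z_bins=np.linspace(0.1, 1.0, 10)):
    """Toy expected cluster counts per redshift bin; theta = (amplitude, tilt)."""
    amp, tilt = theta
    return amp * 1e3 * np.exp(-tilt * z_bins)

def fisher_matrix(theta, model, eps=1e-4):
    theta = np.asarray(theta, dtype=float)
    n = model(theta)
    derivs = []
    for k in range(len(theta)):
        step = np.zeros_like(theta)
        step[k] = eps
        derivs.append((model(theta + step) - model(theta - step)) / (2 * eps))
    derivs = np.array(derivs)
    return derivs @ np.diag(1.0 / n) @ derivs.T

F = fisher_matrix([1.0, 2.0], counts_model)
print("1-sigma marginalised errors:", np.sqrt(np.diag(np.linalg.inv(F))))
```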
Abstract:
We review some issues related to the implications of different missing-data mechanisms for statistical inference in contingency tables, and consider simulation studies to compare the results obtained under such models with those obtained when the units with missing data are disregarded. We confirm that, although analyses under the correct missing at random and missing completely at random models are in general more efficient even for small sample sizes, there are exceptions where they may not improve on the results obtained by ignoring the partially classified data. We show that under the missing not at random (MNAR) model, estimates on the boundary of the parameter space, as well as lack of identifiability of the parameters of saturated models, may be associated with undesirable asymptotic properties of maximum likelihood estimators and likelihood ratio tests; even in standard cases the bias of the estimators may be low only for very large samples. We also show that the probability of a boundary solution obtained under the correct MNAR model may be large even for large samples and that, consequently, we cannot always conclude that an MNAR model is misspecified simply because the estimate lies on the boundary of the parameter space.
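A small Monte Carlo sketch of the kind of comparison described, under illustrative assumptions (a 2x2 table with one fully observed variable and MCAR missingness in the other): the ML estimator that also uses the partially classified units is typically less variable than the complete-case estimator when the two variables are associated.

```python
# Sketch: compare a complete-case estimator of P(Y=1) with the ML estimator
# that also uses partially classified units, under MCAR. Cell probabilities,
# missingness rate and sample sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
p_xy = np.array([[0.3, 0.2],   # P(X=x, Y=y), rows X, cols Y
                 [0.1, 0.4]])
miss = 0.4                     # P(Y missing), independent of (X, Y): MCAR
n, reps = 500, 2000
cc_est, ml_est = [], []
for _ in range(reps):
    cells = rng.multinomial(n, p_xy.ravel()).reshape(2, 2)
    obs = rng.binomial(cells, 1 - miss)       # fully classified counts
    margin_x = (cells - obs).sum(axis=1)      # units with Y missing, X observed
    # complete-case estimate of P(Y=1)
    cc_est.append(obs[:, 1].sum() / obs.sum())
    # ML under MCAR: P(Y=1) = sum_x P(X=x) * P(Y=1 | X=x), P(X) from all units
    p_x = (obs.sum(axis=1) + margin_x) / n
    p_y1_given_x = obs[:, 1] / obs.sum(axis=1)
    ml_est.append(np.sum(p_x * p_y1_given_x))
print("complete-case variance:", np.var(cc_est))
print("ML (with margins) variance:", np.var(ml_est))
```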
Abstract:
Let Q^4(c) be a four-dimensional space form of constant curvature c. In this paper we show that the infimum of the absolute value of the Gauss-Kronecker curvature of a complete minimal hypersurface in Q^4(c), c <= 0, whose Ricci curvature is bounded from below, is equal to zero. Further, we study the connected minimal hypersurfaces M^3 of a space form Q^4(c) with constant Gauss-Kronecker curvature K. For the case c <= 0, we prove, by a local argument, that if K is constant, then K must be equal to zero. We also present a classification of complete minimal hypersurfaces of Q^4(c) with K constant.
Abstract:
Let G = Z/a x_mu (Z/b x TL_2(F_p)) and let X^n be an n-dimensional CW-complex with the homotopy type of the n-sphere. We determine the automorphism group Aut(G) and then compute the number of distinct homotopy types of spherical space forms with respect to free and cellular G-actions on all CW-complexes X^(2dn-1), where 2d is a period of G. Next, the group E(X^(2dn-1)/alpha) of homotopy self-equivalences of the spherical space forms X^(2dn-1)/alpha, associated with such G-actions alpha on X^(2dn-1), is studied. Similar results for the remaining finite periodic groups have been obtained recently and are described in the introduction.
Abstract:
We study the star/galaxy classification efficiency of 13 different decision tree algorithms applied to photometric objects in the Sloan Digital Sky Survey Data Release Seven (SDSS-DR7). Each algorithm is defined by a set of parameters which, when varied, produce different final classification trees. We extensively explore the parameter space of each algorithm, using the set of 884,126 SDSS objects with spectroscopic data as the training set. The efficiency of star-galaxy separation is measured using the completeness function. We find that the Functional Tree algorithm (FT) yields the best results as measured by the mean completeness in two magnitude intervals: 14 <= r <= 21 (85.2%) and r >= 19 (82.1%). We compare the performance of the tree generated with the optimal FT configuration to the classifications provided by the SDSS parametric classifier, 2DPHOT, and Ball et al. We find that our FT classifier is comparable to or better in completeness over the full magnitude range 15 <= r <= 21, with much lower contamination than all but the Ball et al. classifier. At the faintest magnitudes (r > 19), our classifier is the only one that maintains high completeness (> 80%) while simultaneously achieving low contamination (~2.5%). We also examine the SDSS parametric classifier (psfMag - modelMag) to see if the dividing line between stars and galaxies can be adjusted to improve the classifier. We find that currently stars in close pairs are often misclassified as galaxies, and suggest a new cut to improve the classifier. Finally, we apply our FT classifier to separate stars from galaxies in the full set of 69,545,326 SDSS photometric objects in the magnitude range 14 <= r <= 21.
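The FT algorithm is implemented in the Weka toolkit; as a stand-in sketch (not the paper's pipeline), a generic decision tree with completeness and contamination measured per magnitude bin might look as follows, with the synthetic data and features as placeholders for the SDSS spectroscopic training set.

```python
# Sketch: train a decision tree on photometric features and report galaxy
# completeness and contamination in two r-magnitude bins. Data are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 20000
r_mag = rng.uniform(14, 21, n)
is_galaxy = rng.random(n) < 0.6
# crude stand-in for psfMag - modelMag: larger for galaxies, noisier when faint
concentration = np.where(is_galaxy, 0.3, 0.05) + rng.normal(0, 0.02 * (r_mag - 13), n)
X = np.column_stack([r_mag, concentration])
X_tr, X_te, y_tr, y_te, r_tr, r_te = train_test_split(
    X, is_galaxy.astype(int), r_mag, test_size=0.3, random_state=0)

tree = DecisionTreeClassifier(max_depth=8, min_samples_leaf=50).fit(X_tr, y_tr)
pred = tree.predict(X_te)

for lo, hi in [(14, 19), (19, 21)]:
    sel = (r_te >= lo) & (r_te < hi)
    completeness = np.sum((pred == 1) & (y_te == 1) & sel) / np.sum((y_te == 1) & sel)
    contamination = np.sum((pred == 1) & (y_te == 0) & sel) / np.sum((pred == 1) & sel)
    print(f"{lo} <= r < {hi}: completeness={completeness:.2f}, "
          f"contamination={contamination:.2f}")
```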
Abstract:
The study of displaced vertices containing two b-jets may provide a double discovery at the Large Hadron Collider (LHC): we show how it may not only reveal evidence for supersymmetry, but also provide a way to uncover the Higgs boson necessary in the formulation of the electroweak theory in a large region of the parameter space. We quantify this explicitly using the simplest minimal supergravity model with bilinear breaking of R-parity, which accounts for the observed pattern of neutrino masses and mixings seen in neutrino oscillation experiments.
Abstract:
We investigate a neutrino mass model in which the neutrino data is accounted for by bilinear R-parity violating supersymmetry with anomaly mediated supersymmetry breaking. We focus on the CERN Large Hadron Collider (LHC) phenomenology, studying the reach of generic supersymmetry search channels with leptons, missing energy and jets. A special feature of this model is the existence of long-lived neutralinos and charginos which decay inside the detector leading to detached vertices. We demonstrate that the largest reach is obtained in the displaced vertices channel and that practically all of the reasonable parameter space will be covered with an integrated luminosity of 10 fb^-1. We also compare the displaced vertex reaches of the LHC and Tevatron.
Abstract:
We examine the possibility that a new strong interaction is accessible to the Tevatron and the LHC. In an effective theory approach, we consider a scenario with a new color-octet interaction with strong couplings to the top quark, as well as the presence of a strongly coupled fourth generation which could be responsible for electroweak symmetry breaking. We apply several constraints, including the ones from flavor physics. We study the phenomenology of the resulting parameter space at the Tevatron, focusing on the forward-backward asymmetry in top pair production, as well as in the production of the fourth-generation quarks. We show that if the excess in the top production asymmetry is indeed the result of this new interaction, the Tevatron could see the first hints of the strongly coupled fourth-generation quarks. Finally, we show that the LHC with √s = 7 TeV and 1 fb^-1 integrated luminosity should observe the production of fourth-generation quarks at a level at least 1 order of magnitude above the QCD prediction for the production of these states.
Abstract:
We use the recent results on dark matter searches of the 22-string IceCube detector to probe the remaining allowed window for strongly interacting dark matter in the mass range 10^4 < m_X < 10^15 GeV. We calculate the expected signal in the 22-string IceCube detector from the annihilation of such particles captured in the Sun and compare it to the detected background. As a result, the remaining allowed region in the mass versus cross section parameter space is ruled out. We also show the expected sensitivity of the complete IceCube detector with 86 strings.
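The exclusion logic is a standard counting-experiment comparison; a generic sketch with placeholder numbers (not IceCube's actual event counts, background estimate or signal model):

```python
# Sketch of a counting-experiment exclusion: a model is ruled out when its
# expected signal exceeds the classical 90% CL Poisson upper limit given the
# observed counts and the expected background. All numbers are placeholders.
from scipy.stats import poisson

def signal_upper_limit(n_obs, background, cl=0.90):
    """Smallest s with P(N <= n_obs | b + s) <= 1 - cl (one-sided Poisson limit)."""
    lo, hi = 0.0, 1000.0
    for _ in range(60):                      # bisection on s
        mid = 0.5 * (lo + hi)
        if poisson.cdf(n_obs, background + mid) > 1.0 - cl:
            lo = mid
        else:
            hi = mid
    return hi

n_obs, background = 12, 10.0                 # placeholder counts
s_up = signal_upper_limit(n_obs, background)
expected_signal = 35.0                       # placeholder model prediction
print(f"s_90 = {s_up:.1f}; model excluded: {expected_signal > s_up}")
```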
Abstract:
This work deals with the problem of minimizing the waste of space that occurs in the rotational placement of a set of irregular two-dimensional items inside a two-dimensional container. The problem is approached with a heuristic based on Simulated Annealing (SA) with an adaptive neighborhood. The objective function is evaluated in a constructive approach, where the items are placed sequentially. The placement is governed by three different types of parameters: the sequence of placement, the rotation angle and the translation. The rotation and the translation of each polygon are cyclic continuous parameters, while the sequence of placement defines a combinatorial problem, so it is necessary to control both cyclic continuous and discrete parameters. The approaches described in the literature deal with only one type of parameter (sequence of placement or translation). In the proposed SA algorithm, the sensitivity of each continuous parameter is evaluated at each iteration, increasing the number of accepted solutions. The sensitivity of each parameter is associated with its probability distribution in the definition of the next candidate.
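A minimal sketch of simulated annealing with an adaptive neighbourhood over mixed parameters, under stated assumptions: the toy objective stands in for the wasted container area, and the adaptation rule (proposal widths tuned toward a target acceptance rate) is one plausible reading of the approach, not the authors' exact scheme.

```python
# Sketch: SA over a mixed parameter vector (discrete placement sequence plus
# cyclic continuous rotation angles), with per-parameter step widths adapted
# from recent acceptance rates.
import math
import random

random.seed(0)
n_items = 6
sequence = list(range(n_items))          # discrete: order of placement
angles = [0.0] * n_items                 # cyclic continuous: rotation per item
steps = [0.5] * n_items                  # adaptive proposal width per angle
proposals, accepts = [0] * n_items, [0] * n_items

def cost(seq, angs):
    """Toy stand-in for wasted area; real code would run the constructive placement."""
    return sum((a - 0.3 * s) ** 2 for s, a in zip(seq, angs))

current, T = cost(sequence, angles), 1.0
for it in range(2000):
    if random.random() < 0.3:            # combinatorial move: swap two items
        i, j = random.sample(range(n_items), 2)
        cand_seq = sequence[:]
        cand_seq[i], cand_seq[j] = cand_seq[j], cand_seq[i]
        cand_ang, k = angles, None
    else:                                # continuous move: perturb one cyclic angle
        k = random.randrange(n_items)
        cand_seq, cand_ang = sequence, angles[:]
        cand_ang[k] = (cand_ang[k] + random.gauss(0.0, steps[k])) % (2 * math.pi)
        proposals[k] += 1
    c = cost(cand_seq, cand_ang)
    if c < current or random.random() < math.exp((current - c) / T):
        sequence, angles, current = cand_seq, cand_ang, c
        if k is not None:
            accepts[k] += 1
    if (it + 1) % 200 == 0:              # adapt step widths toward ~40% acceptance
        for m in range(n_items):
            rate = accepts[m] / max(proposals[m], 1)
            steps[m] *= 1.2 if rate > 0.4 else 0.8
        proposals, accepts = [0] * n_items, [0] * n_items
        T *= 0.9                         # geometric cooling
print("final cost:", current)
```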
Abstract:
A computational study of the isomers of tetrafluorinated [2.2]cyclophanes persubstituted in one ring, namely F-4-[2.2]paracyclophane (4), F-4-anti-[2.2]metacyclophane (5a), F-4-syn-[2.2]metacyclophane (5b), and F-4-[2.2]metaparacyclophane (6a and 6b), was carried out. The effects of fluorination on the geometries, relative energies, local and global aromaticity, and strain energies of the bridges and rings were investigated. An analysis of the electron density by B3PW91/6-31+G(d,p), B3LYP/6-31+G(d,p), and MP2/6-31+G(d,p) was carried out using the natural bond orbitals (NBO), natural steric analysis (NSA), and atoms in molecules (AIM) methods. The analysis of frontier molecular orbitals (MOs) was also employed. The results indicated that the molecular structure of [2.2]paracyclophane is the most affected by the fluorination. Isodesmic reactions showed that the fluorinated rings are more strained than the nonfluorinated ones. The NICS, HOMA, and PDI criteria evidenced that the fluorination affects the aromaticity of both the fluorinated and the nonfluorinated rings. The NBO and NSA analyses gave an indication that the fluorination increases not only the number of through-space interactions but also their magnitude. The AIM analysis suggested that the through-space interactions are restricted to the F-4-[2.2]metacyclophanes. In addition, the atomic properties, computed over the atomic basins, gave evidence that not only the substitution but also the position of the bridges could affect the atomic charges, the first atomic moments, and the atomic volumes.
Abstract:
Purpose: Because of its controversial biologic tolerance and management, a retained intraorbital metallic foreign body (RIMFb) poses a formidable challenge to surgeons. Besides the location of the foreign body, indications for surgical management include neurologic injury, mechanical restriction of eye movement, and development of local infection or a draining fistula. The authors describe an unusual case of spontaneous migration of a RIMFb. Methods: A 26-year-old man sustained a gunshot injury to the left orbit. The patient was initially managed conservatively because of the posterior position of the bullet fragment. Thereafter, because of the clinical impairments and the anterior migration of the projectile, surgical treatment was considered. Results: Spontaneous anterior migration led to mechanical disturbances and inflammatory complications that constitute explicit surgical indications for removal. The patient underwent surgery with complete relief of symptoms. We suppose that the extrinsic ocular muscles might play a role in shifting a large RIMFb over time, leading to changes in the management strategy. Conclusions: Spontaneous migration of a RIMFb is a rare clinical situation that can lead to pain and local deformity, as well as to changes in the management of affected patients, even in the late phase of follow-up.
Abstract:
Background and objective: The time course of cardiopulmonary alterations after pulmonary embolism has not been clearly demonstrated, nor has the role of systemic inflammation in the pathogenesis of the disease. This study aimed to evaluate, over 12 h, the effects of pulmonary embolism caused by polystyrene microspheres on haemodynamics, lung mechanics and gas exchange and on interleukin-6 production. Methods: Ten large white pigs (weight 35-42 kg) had arterial and pulmonary catheters inserted, and pulmonary embolism was induced in five pigs by injection of polystyrene microspheres (diameter ~300 µm) until the mean pulmonary arterial pressure reached twice the baseline value. Five other animals received only saline. Haemodynamic and respiratory data and pressure-volume curves of the respiratory system were collected. A bronchoscopy was performed before and 12 h after embolism, when the animals were euthanized. Results: The embolism group developed hypoxaemia that was not corrected with high oxygen fractions, as well as higher dead space and airway resistance and lower respiratory compliance. Acute haemodynamic alterations included pulmonary arterial hypertension with preserved systemic arterial pressure and cardiac index. These derangements persisted until the end of the experiments. The plasma interleukin-6 concentrations were similar in both groups; however, an increase in core temperature and a nonsignificant increase in bronchoalveolar lavage proteins were found in the embolism group. Conclusion: Acute pulmonary embolism induced by polystyrene microspheres in pigs produces hypoxaemia lasting 12 h and a high dead space associated with high airway resistance and low compliance. There were no plasma systemic markers of inflammation, but a higher core temperature and a trend towards higher bronchoalveolar lavage proteins were found. Eur J Anaesthesiol 27:67-76.