996 results for variables search
Abstract:
This paper describes an experimental design technique, known as variables search, developed to expose the critical variables and screen out the irrelevant ones. It is easy to learn and use and clearly separates main effects from interaction effects. An example of an air separation process using pressure swing adsorption is used to demonstrate how the variables search technique works. The phases of identifying the critical variables are shown step by step.
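As an illustration of the swapping stage that a variables-search procedure of this kind relies on, here is a minimal Python sketch. It assumes a user-supplied run_experiment function and an illustrative decision threshold; the paper's exact protocol (e.g., replicated runs and control limits) is not reproduced.

```python
# A minimal sketch of the variables-search swapping stage, assuming a
# user-supplied run_experiment(levels) function that returns the measured
# response for a dict mapping each variable to its "best" (+) or "worst" (-)
# level. Names and the decision threshold are illustrative, not from the paper.

def variables_search(variables, run_experiment, threshold):
    """Flag variables whose best/worst swap shifts the response by more
    than `threshold`; the rest are screened out as irrelevant."""
    all_best = {v: "+" for v in variables}
    all_worst = {v: "-" for v in variables}
    y_best = run_experiment(all_best)
    y_worst = run_experiment(all_worst)

    critical = []
    for v in variables:
        # Swap only variable v against the two reference runs.
        swapped_best = dict(all_best, **{v: "-"})
        swapped_worst = dict(all_worst, **{v: "+"})
        # If swapping v alone moves the response materially, v is critical.
        if (abs(run_experiment(swapped_best) - y_best) > threshold or
                abs(run_experiment(swapped_worst) - y_worst) > threshold):
            critical.append(v)
    return critical
```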
Abstract:
Graduate Program in Design - FAAC
Abstract:
We present a method for the static resource usage analysis of MiniZinc models. The analysis can infer upper bounds on the usage that a MiniZinc model will make of some resources, such as the number of constraints of a given type (equality, disequality, global constraints, etc.), the number of variables (search variables or temporary variables), or the size of the expressions before calling the solver. These bounds are obtained from the models independently of the concrete input data (the instance data) and are in general functions of the sizes of such data. In our approach, MiniZinc models are translated into Ciao programs, which are then analysed by the CiaoPP system. CiaoPP includes a parametric analysis framework for resource usage in which the user can define resources and express the resource usage of library procedures (and certain program constructs) by means of a language of assertions. We present the approach and report on a preliminary implementation, which shows the feasibility of the approach and provides encouraging results.
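As a toy illustration of the kind of static metric being bounded, the Python sketch below counts constraints of a few types by scanning a MiniZinc model's text. The real analysis works on the translated Ciao program and yields bounds as functions of instance-data sizes; this scanner is only a hand-rolled approximation of the counted quantities.

```python
# A toy illustration (not the CiaoPP analysis itself) of the kind of metric
# being bounded: counting constraints of each type in a MiniZinc model by
# scanning its text. The real resource analysis derives such counts as
# functions of the instance-data sizes, without concrete data.
import re
from collections import Counter

def count_constraint_types(minizinc_source: str) -> Counter:
    counts = Counter()
    for line in minizinc_source.splitlines():
        line = line.strip()
        if line.startswith("constraint"):
            if "!=" in line:
                counts["disequality"] += 1
            elif re.search(r"(?<![<>!=])=(?!=)", line):
                counts["equality"] += 1
            elif re.match(r"constraint\s+\w+\s*\(", line):
                counts["global"] += 1
    return counts
```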
Abstract:
This paper proposes strategies to reduce the number of variables and the combinatorial search space of the multistage transmission expansion planning (TEP) problem. The concept of the binary numeral system (BNS) is used to reduce the number of binary and continuous variables related to the candidate transmission lines and the network constraints connected with them. The construction phase of the greedy randomized adaptive search procedure (GRASP-CP), together with additional constraints obtained from power-flow equilibrium in an electric power system, is employed to further reduce the search space. The multistage TEP problem is modeled as a mixed binary linear programming problem and solved using a commercial solver with low computational time. The results of one test system and two real systems are presented in order to show the efficiency of the proposed solution technique. © 1969-2012 IEEE.
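The variable reduction rests on a standard binary encoding: a corridor that may receive up to n_max new lines needs only ⌈log2(n_max + 1)⌉ binary variables rather than n_max. A minimal Python sketch of this encoding follows; the paper's full mixed binary linear program is not reproduced.

```python
# A minimal sketch of the binary-numeral-system (BNS) reduction: the integer
# number of candidate lines added in a corridor, 0..n_max, is encoded with
# ceil(log2(n_max + 1)) binary variables instead of n_max of them.
# Variable names are illustrative; the paper's full MILP is not reproduced.
from math import ceil, log2

def bns_encoding_size(n_max: int) -> int:
    """Binary variables needed to represent any count in 0..n_max."""
    return ceil(log2(n_max + 1))

def decode_line_count(bits):
    """Recover the number of added lines from its BNS bits: n = sum 2^k x_k."""
    return sum((2 ** k) * x for k, x in enumerate(bits))

# e.g. a corridor allowing up to 7 new lines needs 3 binaries, not 7:
assert bns_encoding_size(7) == 3
assert decode_line_count([1, 1, 0]) == 3  # x0=1, x1=1, x2=0 -> n = 1 + 2
```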
Abstract:
An inclusive search is presented for new heavy particle pairs produced in √s=7 TeV proton-proton collisions at the LHC using 4.7 ± 0.1 fb⁻¹ of integrated luminosity. The selected events are analyzed in the 2D razor space of MR, an event-by-event indicator of the heavy particle mass scale, and R, a dimensionless variable related to the missing transverse energy. The third-generation sector is probed using the event heavy-flavor content. The search is sensitive to generic supersymmetry models with minimal assumptions about the superpartner decay chains. No excess is observed in the number of events beyond that predicted by the standard model. Exclusion limits are derived in the CMSSM framework as well as for simplified models. Within the CMSSM parameter space considered, gluino masses up to 800 GeV and squark masses up to 1.35 TeV are excluded at 95% confidence level depending on the model parameters. The direct production of pairs of top or bottom squarks is excluded for masses as high as 400 GeV. © 2013 CERN.
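For readers unfamiliar with the razor variables, the following Python sketch computes MR and R from the two megajet four-momenta and the missing transverse energy, using the definitions commonly given in CMS razor analyses; it is an illustration, not CMS analysis code.

```python
# A hedged sketch of the razor variables as commonly defined in CMS razor
# analyses: MR from the two megajet four-momenta, MTR from the megajets'
# transverse momenta and the missing transverse energy, and R = MTR / MR.
# Inputs are plain tuples; this is an illustration, not CMS analysis code.
import math

def razor_variables(j1, j2, met):
    """j1, j2: megajets as (E, px, py, pz); met: (metx, mety)."""
    E1, px1, py1, pz1 = j1
    E2, px2, py2, pz2 = j2
    metx, mety = met
    met_mag = math.hypot(metx, mety)

    # MR: longitudinally boost-invariant estimator of the heavy-particle mass scale.
    mr = math.sqrt((E1 + E2) ** 2 - (pz1 + pz2) ** 2)

    # MTR: transverse mass built from the megajets and the missing energy.
    pt1 = math.hypot(px1, py1)
    pt2 = math.hypot(px2, py2)
    mtr = math.sqrt(0.5 * (met_mag * (pt1 + pt2)
                           - (metx * (px1 + px2) + mety * (py1 + py2))))

    return mr, mtr / mr  # (MR, R)
```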
Abstract:
An inclusive search for supersymmetric processes that produce final states with jets and missing transverse energy is performed in pp collisions at a centre-of-mass energy of 8 TeV. The data sample corresponds to an integrated luminosity of 11.7 fb⁻¹ collected by the CMS experiment at the LHC. In this search, a dimensionless kinematic variable, αT, is used to discriminate between events with genuine and misreconstructed missing transverse energy. The search is based on an examination of the number of reconstructed jets per event, the scalar sum of transverse energies of these jets, and the number of these jets identified as originating from bottom quarks. No significant excess of events over the standard model expectation is found. Exclusion limits are set in the parameter space of simplified models, with a special emphasis on both compressed-spectrum scenarios and direct or gluino-induced production of third-generation squarks. For the case of gluino-mediated squark production, gluino masses up to 950-1125 GeV are excluded depending on the assumed model. For the direct pair-production of squarks, masses up to 450 GeV are excluded for a single light first- or second-generation squark, increasing to 600 GeV for bottom squarks. © 2013 CERN for the benefit of the CMS collaboration.
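A minimal Python sketch of the αT variable in its commonly quoted dijet form, αT = ET(j2)/MT (an illustration, not CMS analysis code):

```python
# A hedged sketch of the dijet alpha_T definition used in CMS alpha_T
# searches: alpha_T = ET(j2) / MT(j1, j2). For a perfectly measured,
# back-to-back dijet event alpha_T = 0.5; mismeasured jets give values
# below 0.5, while genuine missing energy can push alpha_T above 0.5.
import math

def alpha_t(jet1, jet2):
    """Jets as (et, px, py), with jet1 the harder jet (et1 >= et2)."""
    et1, px1, py1 = jet1
    et2, px2, py2 = jet2
    mt = math.sqrt((et1 + et2) ** 2 - (px1 + px2) ** 2 - (py1 + py2) ** 2)
    return et2 / mt

# A balanced back-to-back dijet event sits exactly at 0.5:
print(alpha_t((100.0, 100.0, 0.0), (100.0, -100.0, 0.0)))  # 0.5
```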
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
In the first part of this thesis we search for physics beyond the Standard Model through anomalous production of the Higgs boson, using the razor kinematic variables. We search for anomalous Higgs boson production in proton-proton collisions at a center-of-mass energy of √s = 8 TeV, collected by the Compact Muon Solenoid experiment at the Large Hadron Collider and corresponding to an integrated luminosity of 19.8 fb⁻¹.
In the second part we present a novel method for using a quantum annealer to train a classifier to recognize events containing a Higgs boson decaying to two photons. We train that classifier using simulated proton-proton collisions at √s = 8 TeV producing either a Standard Model Higgs boson decaying to two photons or a non-resonant Standard Model process that produces a two-photon final state.
The production mechanisms of the Higgs boson are precisely predicted by the Standard Model based on its association with the mechanism of electroweak symmetry breaking. We measure the yield of Higgs bosons decaying to two photons in kinematic regions predicted to have very little contribution from a Standard Model Higgs boson and search for an excess of events, which would be evidence of either non-standard production or non-standard properties of the Higgs boson. We divide the events into disjoint categories based on kinematic properties and the presence of additional b-quarks produced in the collisions. In each of these disjoint categories, we use the razor kinematic variables to characterize events with topological configurations incompatible with typical configurations from Standard Model production of the Higgs boson.
We observe an excess of events with di-photon invariant mass compatible with the Higgs boson mass and localized in a small region of the razor plane. We observe 5 events with a predicted background of 0.54 ± 0.28, an observation with a p-value of 10⁻³ and a local significance of 3.35σ. This background prediction comprises 0.48 predicted non-resonant background events and 0.07 predicted SM Higgs boson events. We proceed to investigate the properties of this excess, finding that it produces a compelling peak in the di-photon invariant mass distribution and is physically separated in the razor plane from the predicted background. Using another method of measuring the background and the significance of the excess, we find a 2.5σ deviation from the Standard Model hypothesis over a broader range of the razor plane.
In the second part of the thesis we transform the problem of training a classifier to distinguish events with a Higgs boson decaying to two photons from events with other sources of photon pairs into the Hamiltonian of a spin system, whose ground state is the best classifier. We then use a quantum annealer to find the ground state of this Hamiltonian and thereby train the classifier. We find that we are able to do this successfully in fewer than 400 annealing runs for a problem of median difficulty at the largest problem size considered. The networks trained in this manner exhibit good classification performance, competitive with more complicated machine learning techniques, and are highly resistant to overtraining. We also find that the nature of the training gives access to additional solutions that can be used to improve the classification performance by up to 1.2% in some regions.
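The sketch below, assuming the weak-classifier formulation typically used for annealer-based training, shows how selecting a subset of weak classifiers to match labels becomes an Ising/QUBO problem whose couplings are classifier correlations; the thesis's exact penalty terms are not reproduced.

```python
# A minimal sketch, assuming the weak-classifier formulation used in
# quantum-annealer training: choosing a subset s_i in {0, 1} of weak
# classifiers c_i(x) in {-1, +1} to match labels y(x) maps to an Ising/QUBO
# Hamiltonian whose couplings come from classifier correlations. The exact
# penalty terms of the thesis are not reproduced here.
import numpy as np

def build_qubo(c, y, lam=0.0):
    """c: (n_classifiers, n_events) array of weak outputs in {-1, +1};
    y: (n_events,) labels in {-1, +1}. Returns (J, h) for
    H(s) = sum_{i<j} J_ij s_i s_j + sum_i h_i s_i, to hand to an annealer."""
    n_events = c.shape[1]
    corr = (c @ c.T) / n_events          # classifier-classifier correlations
    align = (c @ y) / n_events           # classifier-label correlations
    J = np.triu(corr, k=1)               # pairwise couplings (i < j)
    h = lam - align                      # bias: penalty minus label alignment
    return J, h

def energy(s, J, h):
    """Ising energy of a candidate selection vector s (entries 0 or 1)."""
    return s @ J @ s + h @ s
```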
Abstract:
The relatively large number of nearby radio-quiet and thermally emitting isolated neutron stars (INSs) discovered in the ROSAT All-Sky Survey, dubbed the "Magnificent Seven", suggests that they belong to a formerly neglected major component of the overall INS population. So far, attempts to discover similar INSs beyond the solar vicinity failed to confirm any reliable candidate. The good positional accuracy and soft X-ray sensitivity of the EPIC cameras onboard the XMM-Newton satellite allow us to efficiently search for new thermally emitting INSs. We used the 2XMMp catalogue to select sources with no catalogued candidate counterparts and with X-ray spectra similar to those of the Magnificent Seven, but seen at greater distances and thus undergoing higher interstellar absorptions. Identifications in more than 170 astronomical catalogues and visual screening allowed us to select fewer than 30 good INS candidates. In order to rule out alternative identifications, we obtained deep ESO-VLT and SOAR optical imaging for the X-ray brightest candidates. We report here on the optical follow-up results of our search and discuss the possible nature of 8 of our candidates. A high X-ray-to-optical flux ratio together with a stable flux and soft X-ray spectrum make the brightest source of our sample, 2XMM J104608.7-594306, a newly discovered thermally emitting INS. The X-ray source 2XMM J010642.3+005032 has no evident optical counterpart and should be further investigated. The remaining X-ray sources are most probably identified with cataclysmic variables and active galactic nuclei, as inferred from the colours and flux ratios of their likely optical counterparts. Beyond the finding of new thermally emitting INSs, our study aims at constraining the space density of this Galactic population at great distances and at determining whether their apparently high density is a local anomaly or not.
Abstract:
Dissertation presented at Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia, in fulfilment of the requirements for the Master's degree in Mathematics and Applications, specialization in Actuarial Sciences, Statistics and Operations Research
Abstract:
Optimization seeks the best possible value of an objective function. Continuous optimization is optimization over real intervals. There are many global and local search techniques. Global search techniques try to find the global optimum of the optimization problem, whereas local search techniques, which are more widely used, try to find a locally minimal solution within an area of the search space. In Continuous Constraint Satisfaction Problems (CCSPs), constraints are viewed as relations between variables, and the computations are supported by interval analysis. The continuous constraint programming framework provides branch-and-prune algorithms for covering the sets of solutions of the constraints with sets of interval boxes, which are Cartesian products of intervals. These algorithms begin with an initial crude cover of the feasible space (the Cartesian product of the initial variable domains), which is recursively refined by interleaving pruning and branching steps until a stopping criterion is satisfied. In this work, we seek a convenient way to combine the advantages of CCSP branch-and-prune with local search for global optimization applied locally over each pruned branch of the CCSP. We apply local search techniques of continuous optimization over the pruned boxes output by the CCSP techniques. We mainly use the steepest descent technique with different characteristics, such as penalty calculation and step length. We implement two main local search algorithms. We use “Procure”, a constraint reasoning and global optimization framework, to implement our techniques, and we present our results over a set of benchmarks.
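A minimal sketch of the local-search stage described above, assuming a generic objective with a known gradient: steepest descent confined to one pruned interval box, with iterates clipped back into the box. The step length and stopping rule are illustrative placeholders.

```python
# A minimal sketch of steepest descent run inside one interval box produced
# by branch-and-prune, with iterates clipped back into the box. The
# objective, gradient, step length and stopping rule are illustrative
# placeholders, not the thesis settings.
import numpy as np

def steepest_descent_in_box(f, grad, box, x0, step=0.1, tol=1e-8, max_iter=1000):
    """box: (lo, hi) arrays bounding the pruned interval box."""
    lo, hi = box
    x = np.clip(np.asarray(x0, dtype=float), lo, hi)
    for _ in range(max_iter):
        g = grad(x)
        x_new = np.clip(x - step * g, lo, hi)  # stay inside the box
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x, f(x)

# e.g. minimize a quadratic inside the box [0, 2] x [0, 2]:
f = lambda x: (x[0] - 3) ** 2 + (x[1] - 0.5) ** 2
grad = lambda x: np.array([2 * (x[0] - 3), 2 * (x[1] - 0.5)])
x_min, f_min = steepest_descent_in_box(f, grad, (np.zeros(2), 2 * np.ones(2)), [1.0, 1.0])
# converges to the box-constrained minimizer (2, 0.5)
```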
Abstract:
A search for the Standard Model Higgs boson produced in association with a pair of top quarks, tt̄H, is presented. The analysis uses 20.3 fb⁻¹ of pp collision data at √s = 8 TeV, collected with the ATLAS detector at the Large Hadron Collider during 2012. The search is designed for the H → bb̄ decay mode and uses events containing one or two electrons or muons. In order to improve the sensitivity of the search, events are categorised according to their jet and b-tagged jet multiplicities. A neural network is used to discriminate between signal and background events, the latter being dominated by tt̄+jets production. In the single-lepton channel, variables calculated using a matrix element method are included as inputs to the neural network to improve discrimination of the irreducible tt̄+bb̄ background. No significant excess of events above the background expectation is found and an observed (expected) limit of 3.4 (2.2) times the Standard Model cross section is obtained at 95% confidence level. The ratio of the measured tt̄H signal cross section to the Standard Model expectation is found to be μ = 1.5 ± 1.1 assuming a Higgs boson mass of 125 GeV.
Abstract:
A generic search for anomalous production of events with at least three charged leptons is presented. The data sample consists of pp collisions at √s = 8 TeV collected in 2012 by the ATLAS experiment at the CERN Large Hadron Collider, and corresponds to an integrated luminosity of 20.3 fb⁻¹. Events are required to have at least three selected lepton candidates, at least two of which must be electrons or muons, while the third may be a hadronically decaying tau. Selected events are categorized based on their lepton flavour content, and signal regions are constructed using several kinematic variables of interest. No significant deviations from Standard Model predictions are observed. Model-independent upper limits on contributions from beyond-the-Standard-Model phenomena are provided for each signal region, along with a prescription for reinterpreting the limits for any model. Constraints are also placed on models predicting doubly charged Higgs bosons and excited leptons. For doubly charged Higgs bosons decaying to eτ or μτ, lower limits on the mass are set at 400 GeV at 95% confidence level. For excited leptons, constraints are provided as functions of both the mass of the excited state and the compositeness scale Λ, with the strongest mass constraints arising in regions where the mass equals Λ. In such scenarios, lower mass limits are set at 3.0 TeV for excited electrons and muons, 2.5 TeV for excited taus, and 1.6 TeV for every excited-neutrino flavour.
Abstract:
Forest fires are a serious threat to humans and nature from ecological, social and economic points of view. Predicting their behaviour by simulation still delivers unreliable results and remains a challenging task. The latest approaches try to calibrate input variables, often tainted with imprecision, using optimisation techniques such as genetic algorithms (GAs). To converge faster towards fitter solutions, the GA is guided with knowledge obtained from historical or synthetic fires. We developed a robust and efficient knowledge storage and retrieval method. Nearest-neighbour search is applied to find the fire configuration in the knowledge base most similar to the current configuration. To this end, a distance measure was devised and implemented in several ways. Experiments show the performance of the different implementations in terms of storage occupied and retrieval time, with highly satisfactory results.
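A hedged sketch of such a retrieval step: a linear-scan nearest-neighbour search over stored configurations under a weighted Euclidean distance. Feature names and weights are illustrative; the paper's own distance measure and its implementations are not reproduced.

```python
# A hedged sketch of nearest-neighbour retrieval over stored fire
# configurations using a weighted Euclidean distance. The feature names and
# weights are illustrative, not the paper's elaborated distance measure.
import math

def weighted_distance(a, b, weights):
    """Distance between two fire-configuration feature vectors."""
    return math.sqrt(sum(w * (x - y) ** 2 for x, y, w in zip(a, b, weights)))

def nearest_configuration(query, knowledge_base, weights):
    """Return the stored (configuration, calibrated_inputs) pair whose
    configuration is closest to the current one."""
    return min(knowledge_base,
               key=lambda entry: weighted_distance(query, entry[0], weights))

# e.g. features = (wind_speed, wind_direction, humidity, slope):
kb = [((10.0, 90.0, 0.30, 5.0), "params_A"),
      ((25.0, 180.0, 0.10, 12.0), "params_B")]
best = nearest_configuration((12.0, 100.0, 0.28, 6.0), kb, (1.0, 0.01, 10.0, 0.5))
# -> the entry with "params_A"
```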