872 results for direct search optimization algorithm
Abstract:
The results of a search for direct pair production of the scalar partner to the top quark using an integrated luminosity of 20.1 fb−1 of proton-proton collision data at √s = 8 TeV recorded with the ATLAS detector at the LHC are reported. The top squark is assumed to decay via t̃ → tχ̃₁⁰ or t̃ → bχ̃₁± → bW(*)χ̃₁⁰, where χ̃₁⁰ (χ̃₁±) denotes the lightest neutralino (chargino) in supersymmetric models. The search targets a fully hadronic final state in events with four or more jets and large missing transverse momentum. No significant excess over the Standard Model background prediction is observed, and exclusion limits are reported in terms of the top squark and neutralino masses and as a function of the branching fraction of t̃ → tχ̃₁⁰. For a branching fraction of 100%, top squark masses in the range 270-645 GeV are excluded for χ̃₁⁰ masses below 30 GeV. For a branching fraction of 50% to either t̃ → tχ̃₁⁰ or t̃ → bχ̃₁±, and assuming the χ̃₁± mass to be twice the χ̃₁⁰ mass, top squark masses in the range 250-550 GeV are excluded for χ̃₁⁰ masses below 60 GeV.
Abstract:
Searches for the electroweak production of charginos, neutralinos and sleptons in final states characterized by the presence of two leptons (electrons and muons) and missing transverse momentum are performed using 20.3 fb−1 of proton-proton collision data at √s = 8 TeV recorded with the ATLAS experiment at the Large Hadron Collider. No significant excess beyond Standard Model expectations is observed. Limits are set on the masses of the lightest chargino, next-to-lightest neutralino and sleptons for different lightest-neutralino mass hypotheses in simplified models. Results are also interpreted in various scenarios of the phenomenological Minimal Supersymmetric Standard Model.
Abstract:
A search is presented for direct top squark pair production using events with at least two leptons including a same-flavour opposite-sign pair with invariant mass consistent with the Z boson mass, jets tagged as originating from b-quarks, and missing transverse momentum. The analysis is performed with proton–proton collision data at √s = 8 TeV collected with the ATLAS detector at the LHC in 2012, corresponding to an integrated luminosity of 20.3 fb−1. No excess beyond the Standard Model expectation is observed. Interpretations of the results are provided in models based on the direct pair production of the heavier top squark state (t̃₂) followed by the decay to the lighter top squark state (t̃₁) via t̃₂ → Z t̃₁, and for t̃₁ pair production in natural gauge-mediated supersymmetry breaking scenarios where the neutralino (χ̃₁⁰) is the next-to-lightest supersymmetric particle and decays producing a Z boson and a gravitino (G̃) via the χ̃₁⁰ → Z G̃ process.
Abstract:
A search is presented for direct top-squark pair production in final states with two leptons (electrons or muons) of opposite charge using 20.3 fb−1 of pp collision data at √s = 8 TeV, collected by the ATLAS experiment at the Large Hadron Collider in 2012. No excess over the Standard Model expectation is found. The results are interpreted under the separate assumptions (i) that the top squark decays to a b-quark in addition to an on-shell chargino whose decay occurs via a real or virtual W boson, or (ii) that the top squark decays to a t-quark and the lightest neutralino. A top squark with a mass between 150 GeV and 445 GeV decaying to a b-quark and an on-shell chargino is excluded at 95% confidence level for a top squark mass equal to the chargino mass plus 10 GeV, in the case of a 1 GeV lightest neutralino. Top squarks with masses between 215 (90) GeV and 530 (170) GeV decaying to an on-shell (off-shell) t-quark and a neutralino are excluded at 95% confidence level for a 1 GeV neutralino.
Abstract:
A search for the direct production of charginos and neutralinos in final states with three leptons and missing transverse momentum is presented. The analysis is based on 20.3 fb−1 of √s = 8 TeV proton-proton collision data delivered by the Large Hadron Collider and recorded with the ATLAS detector. Observations are consistent with the Standard Model expectations, and limits are set in R-parity-conserving phenomenological Minimal Supersymmetric Standard Models and in simplified supersymmetric models, significantly extending previous results. For simplified supersymmetric models of direct chargino (χ̃₁±) and next-to-lightest neutralino (χ̃₂⁰) production with decays to the lightest neutralino (χ̃₁⁰) via either all three generations of sleptons, staus only, gauge bosons, or Higgs bosons, χ̃₁± and χ̃₂⁰ masses are excluded up to 700 GeV, 380 GeV, 345 GeV, or 148 GeV respectively, for a massless χ̃₁⁰.
Abstract:
When considering data from many trials, it is likely that some of them present a markedly different intervention effect or exert an undue influence on the summary results. We develop a forward search algorithm for identifying outlying and influential studies in meta-analysis models. The forward search algorithm starts by fitting the hypothesized model to a small subset of likely outlier-free studies and proceeds by adding studies to the set one by one, each chosen as the study closest to the model fitted to the existing set. As each study is added, plots of estimated parameters and measures of fit are monitored, and outliers are identified by sharp changes in these forward plots. We apply the proposed outlier detection method to two real data sets: a meta-analysis of 26 studies that examines the effect of writing-to-learn interventions on academic achievement adjusting for three possible effect modifiers, and a meta-analysis of 70 studies that compares a fluoride toothpaste treatment to placebo for preventing dental caries in children. A simple simulated example is used to illustrate the steps of the proposed methodology, and a small-scale simulation study is conducted to evaluate the performance of the proposed method.
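As a rough illustration of the procedure described above, the sketch below runs a forward search on a fixed-effect (inverse-variance) meta-analysis. The median-based initial subset, the standardized-distance measure, and all function names are assumptions of this sketch; the paper works with more general meta-analysis models and monitors several forward plots, not just the pooled estimate.

```python
import numpy as np

def pooled_estimate(y, v):
    """Inverse-variance pooled effect for study effects y with variances v."""
    w = 1.0 / v
    return np.sum(w * y) / np.sum(w)

def forward_search(y, v, m0):
    """Start from the m0 studies closest to the median effect, then repeatedly
    add the remaining study closest to the model fitted to the current set."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    order = np.argsort(np.abs(y - np.median(y)))
    inside = list(order[:m0])
    outside = [i for i in range(len(y)) if i not in inside]
    path = []  # (added study, pooled estimate) per step, for the forward plot
    while outside:
        mu = pooled_estimate(y[inside], v[inside])
        dist = [abs(y[i] - mu) / np.sqrt(v[i]) for i in outside]
        nxt = outside[int(np.argmin(dist))]
        inside.append(nxt)
        outside.remove(nxt)
        path.append((nxt, pooled_estimate(y[inside], v[inside])))
    return path
```

Because outliers enter the set last, a sharp jump in the pooled estimate near the end of `path` is the signal the forward plots are monitored for.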
Abstract:
This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into single-objective surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points for which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers from which many candidate points are generated by random perturbations. Based on surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are made tabu for a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF-based methods. The test results show that SOP is an efficient method that can reduce the time required to find a good near-optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.
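The selection step lends itself to a compact sketch. The single-iteration fragment below assumes the two objectives stated above and treats the surrogate as a generic callable mapping an (n, d) array of points to n predicted values; the perturbation count `k`, spread `sigma`, and all names are illustrative, and the tabu bookkeeping is omitted.

```python
import numpy as np

def pareto_front(obj):
    """Indices of the non-dominated rows of obj (both columns minimized).
    Only the first front is computed here; SOP sorts all fronts."""
    keep = []
    for i in range(len(obj)):
        dominated = any(
            np.all(obj[j] <= obj[i]) and np.any(obj[j] < obj[i])
            for j in range(len(obj))
        )
        if not dominated:
            keep.append(i)
    return keep

def select_centers(X, f, P):
    """Objective 1: expensive value f; objective 2: negated minimum distance
    to the other evaluated points, so crowded regions are penalized."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    obj = np.column_stack([f, -D.min(axis=1)])
    return pareto_front(obj)[:P]

def propose(X, centers, surrogate, k=50, sigma=0.1, seed=0):
    """For each center, keep the best of k random perturbations according to
    the surrogate; the returned points go to the P processors for evaluation."""
    rng = np.random.default_rng(seed)
    picks = []
    for c in centers:
        cand = X[c] + sigma * rng.standard_normal((k, X.shape[1]))
        picks.append(cand[np.argmin(surrogate(cand))])
    return np.array(picks)
```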
Abstract:
The overarching goal of the Pathway Semantics Algorithm (PSA) is to improve the in silico identification of clinically useful hypotheses about molecular patterns in disease progression. By framing biomedical questions within a variety of matrix representations, PSA has the flexibility to analyze combined quantitative and qualitative data over a wide range of stratifications. The resulting hypothetical answers can then move to in vitro and in vivo verification, research assay optimization, clinical validation, and commercialization. Herein PSA is shown to generate novel hypotheses about the significant biological pathways in two disease domains, shock/trauma and hemophilia A, with experimental validation in the latter. The PSA matrix algebra approach identified differential molecular patterns in biological networks over time and outcome that would not be easily found through direct assays, literature searches, or database searches. In this dissertation, Chapter 1 provides a broad overview of the background and motivation for the study, followed by Chapter 2 with a literature review of relevant computational methods. Chapters 3 and 4 describe PSA for node and edge analysis respectively, and apply the method to disease progression in shock/trauma. Chapter 5 demonstrates the application of PSA to hemophilia A and the validation with experimental results. The work is summarized in Chapter 6, followed by extensive references and an Appendix with additional material.
Abstract:
This thesis deals with the problem of efficiently tracking 3D objects in sequences of images. We tackle the efficient 3D tracking problem by using direct image registration. This problem is posed as an iterative optimization procedure that minimizes a brightness error norm. We review the most popular iterative methods for image registration in the literature, turning our attention to those algorithms that use efficient optimization techniques. Two forms of efficient registration algorithms are investigated. The first type comprises the additive registration algorithms: these algorithms incrementally compute the motion parameters by linearly approximating the brightness error function. We centre our attention on Hager and Belhumeur’s factorization-based algorithm for image registration. We propose a fundamental requirement that factorization-based algorithms must satisfy to guarantee good convergence, and introduce a systematic procedure that automatically computes the factorization. Finally, we also introduce two warp functions that satisfy the requirement for registering rigid and nonrigid 3D targets. The second type comprises the compositional registration algorithms, where the brightness error function is written using function composition. We study the current approaches to compositional image alignment, and we emphasize the importance of the Inverse Compositional method, which is known to be the most efficient image registration algorithm. We introduce a new algorithm, the Efficient Forward Compositional image registration: this algorithm avoids the need to invert the warping function and provides a new interpretation of the working mechanisms of inverse compositional alignment. Using this information, we propose two fundamental requirements that guarantee the convergence of compositional image registration methods. Finally, we support our claims with extensive experimental testing on synthetic and real-world data. We propose a distinction between image registration and tracking when using efficient algorithms. We show that, depending on whether the fundamental requirements hold, some efficient algorithms are eligible for image registration but not for tracking.
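To make the additive scheme concrete, here is a minimal sketch for the simplest possible warp, a 2-D translation: the brightness error is linearized around the current parameters and a small least-squares system is solved per iteration, in the Lucas-Kanade spirit of the algorithms the thesis reviews. The scipy-based warping and function names are assumptions of this sketch, not the thesis implementation.

```python
import numpy as np
from scipy.ndimage import shift as translate

def register_translation(template, image, iters=20, tol=1e-4):
    """Estimate the translation p such that image warped by p matches template."""
    p = np.zeros(2)  # current estimate (dy, dx)
    gy, gx = np.gradient(image.astype(float))
    for _ in range(iters):
        warped = translate(image.astype(float), -p, order=1)
        err = (template.astype(float) - warped).ravel()
        # Jacobian of the warped brightness w.r.t. p at the current estimate
        J = np.column_stack([translate(gy, -p, order=1).ravel(),
                             translate(gx, -p, order=1).ravel()])
        dp, *_ = np.linalg.lstsq(J, err, rcond=None)  # Gauss-Newton step
        p += dp
        if np.linalg.norm(dp) < tol:
            break
    return p
```

The efficient variants the thesis studies avoid recomputing the Jacobian at every iteration, either by factorization (additive) or by moving the linearization to the template side (compositional).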
Abstract:
This paper describes the basic tools for working with wireless sensors. TinyOS has a component-based architecture which enables rapid innovation and implementation while minimizing code size, as required by the severe memory constraints inherent in sensor networks. TinyOS's component library includes network protocols, distributed services, sensor drivers, and data acquisition tools, all of which can be used as-is or further refined for a custom application. TinyOS was originally developed as a research project at the University of California Berkeley, but has since grown to have an international community of developers and users. Some algorithms concerning packet routing are shown. In-car entertainment systems can be based on wireless sensors in order to obtain information from the Internet, but routing protocols must be implemented to avoid bottleneck problems. Ant colony algorithms are well suited to such cases, so they can be embedded into the sensors to perform the routing task.
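As a rough sketch of the ant-colony routing rule alluded to above: a node picks its next hop with probability proportional to pheromone^alpha times heuristic desirability^beta, with evaporation and reinforcement after each delivered packet. A real deployment would be written in nesC on TinyOS rather than Python, and all constants here are illustrative.

```python
import random

def choose_next_hop(neighbors, pheromone, desirability, alpha=1.0, beta=2.0):
    """Pick a neighbor with probability proportional to tau^alpha * eta^beta."""
    weights = [pheromone[n] ** alpha * desirability[n] ** beta for n in neighbors]
    r, acc = random.uniform(0.0, sum(weights)), 0.0
    for n, w in zip(neighbors, weights):
        acc += w
        if acc >= r:
            return n
    return neighbors[-1]

def update_pheromone(pheromone, route, rho=0.1, deposit=1.0):
    """Evaporate everywhere, then reinforce the hops on a successful route."""
    for n in pheromone:
        pheromone[n] *= 1.0 - rho
    for n in route:
        pheromone[n] += deposit
```

Evaporation is what lets the network route around congested nodes: links on a bottlenecked path stop being reinforced and their pheromone decays, so traffic spreads to alternatives.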
Abstract:
This paper describes new improvements for BB-MaxClique (San Segundo et al. in Comput Oper Res 38(2):571–581, 2011), a leading maximum clique algorithm which uses bit strings to efficiently compute basic operations during search by bit masking. Improvements include a recently described recoloring strategy in Tomita et al. (Proceedings of the 4th International Workshop on Algorithms and Computation. Lecture Notes in Computer Science, vol 5942. Springer, Berlin, pp 191–203, 2010), which is now integrated in the bit-string framework, as well as different optimization strategies for fast bit scanning. Reported results over DIMACS and random graphs show that the new variants improve over the previous BB-MaxClique for the vast majority of cases. It is also established that recoloring is mainly useful for graphs with high densities.
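The bit-string idea is easy to illustrate. The sketch below encodes vertex sets as arbitrary-precision integers, so set intersection is a single AND and "fast bit scanning" reduces to extracting the lowest set bit; it computes the greedy coloring bound used to prune maximum-clique branch-and-bound search, but it is only an illustration of the representation, not the published BB-MaxClique code.

```python
def greedy_color_bound(candidates, adj):
    """Upper bound on the clique size within `candidates`: the number of
    greedy color classes. `candidates` is a bitmask of vertices; adj[v] is
    the bitmask of v's neighbors."""
    colors = 0
    uncolored = candidates
    while uncolored:
        colors += 1
        available = uncolored          # vertices still usable for this color
        while available:
            v = (available & -available).bit_length() - 1  # lowest set bit
            available &= available - 1                     # clear that bit
            available &= ~adj[v]       # a color class must be independent
            uncolored &= ~(1 << v)
    return colors
```

A branch can be pruned whenever the current clique size plus this bound does not exceed the best clique found so far.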
Abstract:
An optimization of train aerodynamic characteristics in terms of front wind action sensitivity is carried out in this paper. In particular, a genetic algorithm (GA) is used to perform a shape optimization study of a high-speed train nose. The nose is parametrically defined via Bézier curves, which admits a wide range of geometries in the design space as possible optimal solutions. The main disadvantage of using a GA is the large number of evaluations needed before such an optimum is found. Here the use of metamodels to replace the Navier-Stokes solver is proposed. Among all the possibilities, Response Surface Models and Artificial Neural Networks (ANN) are considered. The best prediction and generalization results are obtained with ANN, and these are applied in the GA code. The paper shows the feasibility of using a GA in combination with an ANN for this problem, and the solutions achieved are included.
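A minimal sketch of the surrogate-assisted GA loop described above: the trained approximator (an ANN in the paper) stands in for the CFD solver when ranking individuals, so a generation costs no flow computations. The genome would encode the Bézier nose parameters; the operators and constants below are illustrative assumptions.

```python
import numpy as np

def ga_step(pop, surrogate_fitness, rng, elite=2, sigma=0.05):
    """One GA generation ranked entirely by the surrogate (no CFD calls)."""
    scores = np.array([surrogate_fitness(ind) for ind in pop])
    order = np.argsort(scores)                    # minimization
    parents = pop[order[: len(pop) // 2]]
    children = []
    while len(children) < len(pop) - elite:
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = int(rng.integers(1, len(a)))        # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child += sigma * rng.standard_normal(len(child))  # Gaussian mutation
        children.append(child)
    return np.vstack([pop[order[:elite]], children])      # elitism
```

In practice the best individuals would be re-evaluated with the real solver every few generations and the ANN retrained, so that surrogate error does not steer the search astray.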
Abstract:
An optimization method for conceptual design in Aeronautics is presented that is based on the use of surrogate models. The various ingredients in the target function are calculated for each individual using surrogates of the associated technical disciplines that are constructed via high-order singular value decomposition (HOSVD) and one-dimensional interpolation. These surrogates result from a limited number of CFD-calculated snapshots. The surrogates are combined with an optimization method, which can be either a global optimization method, such as a genetic algorithm, or a local optimization method, such as a gradient-like method. The resulting method is both flexible and much more computationally efficient than the conventional method based on direct calculation of the target function, especially if a large number of free design parameters and/or tunable modeling parameters are present. The method is illustrated considering a simplified version of the conceptual design of an aircraft empennage.
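To convey the snapshot-surrogate idea in the simplest setting, the sketch below uses an ordinary SVD with a single design parameter, whereas the paper's HOSVD generalizes this to several parameters: CFD snapshots are compressed into a few modes whose amplitudes are interpolated one-dimensionally at a new parameter value. All names are illustrative, and the cubic interpolation assumes at least four snapshots.

```python
import numpy as np
from scipy.interpolate import interp1d

def build_surrogate(params, snapshots, rank=3):
    """snapshots: (n_dof, n_params) matrix of CFD solutions at `params`.
    Returns a callable evaluating the reduced model at a new parameter."""
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    modes = U[:, :rank]                       # dominant spatial modes
    amps = s[:rank, None] * Vt[:rank]         # modal amplitude per snapshot
    interp = [interp1d(params, a, kind="cubic") for a in amps]
    def evaluate(p):
        return modes @ np.array([f(p) for f in interp])
    return evaluate
```

The optimizer (GA or gradient-like) then queries `evaluate` instead of the CFD solver, which is what makes many design parameters affordable.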
Abstract:
The algorithms and graphic user interface software package "OPT-PROx" are developed to meet food engineering needs related to canned food thermal processing simulation and optimization. The adaptive random search algorithm and its modification, coupled with a penalty-function approach, together with finite difference methods with cubic spline approximation, are utilized by the "OPT-PROx" package (http://tomakechoice.com/optprox/index.html). A diverse range of thermal food processing optimization problems with different objectives and required constraints are solvable by the developed software. The geometries supported by "OPT-PROx" are the following: (1) cylinder, (2) rectangle, (3) sphere. The mean square error minimization principle is utilized in order to estimate the heat transfer coefficient of food to be heated under optimal conditions. The developed user-friendly dialogue and the numerical procedures used make the "OPT-PROx" software useful to food scientists in research and education, as well as to engineers involved in the optimization of thermal food processing.
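A minimal sketch of an adaptive random search with a penalty-function treatment of constraints, the class of scheme the package couples to its heat-transfer model; the step-size adaptation rule and all constants are assumptions of this sketch, not OPT-PROx's exact algorithm.

```python
import numpy as np

def adaptive_random_search(f, penalty, x0, lo, hi, iters=500, seed=0):
    """Minimize f(x) + penalty(x) over the box [lo, hi] by random steps
    whose size grows after successes and shrinks after failures."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    best = f(x) + penalty(x)
    step = 0.25 * (np.asarray(hi, float) - np.asarray(lo, float))
    for _ in range(iters):
        cand = np.clip(x + step * rng.standard_normal(x.shape), lo, hi)
        val = f(cand) + penalty(cand)   # constraints enter via the penalty
        if val < best:
            x, best = cand, val
            step *= 1.2                 # expand after success
        else:
            step *= 0.95                # contract after failure
    return x, best
```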
Abstract:
Many macroscopic properties (hardness, corrosion, catalytic activity, etc.) are directly related to the surface structure, that is, to the position and chemical identity of the outermost atoms of the material. Current experimental techniques for its determination produce a “signature” from which the structure must be inferred by solving an inverse problem: a solution is proposed, its corresponding signature computed and then compared to the experiment. This is a challenging optimization problem in which the search space and the number of local minima grow exponentially with the number of atoms, so a solution cannot be achieved for arbitrarily large structures. Nowadays it is solved using a mixture of human knowledge and local search techniques: an expert proposes a solution that is refined using a local minimizer, and if the outcome does not fit the experiment, a new solution must be proposed. Solving a small surface can take days to weeks of this trial-and-error method. Here we describe our ongoing work on its solution. We use a hybrid algorithm that mixes evolutionary techniques with trust-region methods and reuses knowledge gained during the execution to avoid repeated searches of structures. Its parallelization produces good results even without requiring the gathering of the full population, so it can be used in loosely coupled environments such as grids. With this algorithm, test cases that previously took weeks of expert time can be solved automatically in a day or two of uniprocessor time.
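A minimal sketch of such a hybrid scheme, assuming a generic energy/fitness function over structures encoded as coordinate vectors: an evolutionary outer loop whose offspring are polished by a local trust-region minimizer, with a cache so that structures already searched are not refined again. scipy's trust-constr method stands in for the trust-region refinement; the rounding-based cache key and all constants are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def hybrid_search(energy, pop, gens=30, sigma=0.05, seed=0):
    """Evolve a population of structures, locally refining each offspring."""
    rng = np.random.default_rng(seed)
    cache = {}                                   # avoid re-searching structures
    def refined(x):
        key = tuple(np.round(x, 3))              # coarse key: same local basin
        if key not in cache:
            res = minimize(energy, x, method="trust-constr")  # local polishing
            cache[key] = (res.fun, res.x)
        return cache[key]
    best = None
    for _ in range(gens):
        scored = sorted((refined(ind) for ind in pop), key=lambda t: t[0])
        best = scored[0]                         # (energy, structure)
        elite = [x for _, x in scored[: max(1, len(pop) // 2)]]
        pop = [x + sigma * rng.standard_normal(len(x))
               for x in elite for _ in (0, 1)]   # two offspring per survivor
    return best
```

Because `refined` is a pure function of its input up to the cache key, individuals can be farmed out to grid workers independently, which is why gathering the full population at each generation is not required.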