925 results for Search-based algorithms
Abstract:
This paper develops a model for military conflicts in which the defending forces must determine an optimal partitioning of available resources to counter attacks from an adversary on two different fronts. The Lanchester attrition model is used to develop the dynamical equations governing the variation in force strength. Three different allocation schemes - Time-Zero-Allocation (TZA), Allocate-Assess-Reallocate (AAR), and Continuous Constant Allocation (CCA) - are considered, and the optimal solutions are obtained in each case. Numerical examples are given to support the analytical results.
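The Lanchester dynamics underlying the abstract above can be sketched with a minimal simulation. Everything here is an illustrative assumption - the attrition coefficients, the force sizes, and the time-zero split over the candidate fractions stand in for the paper's TZA analysis; they are not the paper's values.

```python
# Minimal sketch of Lanchester (square-law) attrition on two fronts.
# The defender splits strength x0 between the fronts via a fraction f
# chosen at time zero (a toy stand-in for the TZA scheme). All numbers
# are illustrative assumptions, not taken from the paper.

def lanchester(x0, y0, a, b, dt=0.01, t_max=10.0):
    """Euler-integrate dx/dt = -a*y, dy/dt = -b*x until one side is annihilated."""
    x, y, t = x0, y0, 0.0
    while x > 0 and y > 0 and t < t_max:
        x, y = x - a * y * dt, y - b * x * dt
        t += dt
    return max(x, 0.0), max(y, 0.0)

def two_front_outcome(f, defender=100.0, attackers=(60.0, 50.0)):
    """Defender allocates fraction f to front 1 at time zero."""
    x1, y1 = lanchester(f * defender, attackers[0], a=0.8, b=1.0)
    x2, y2 = lanchester((1 - f) * defender, attackers[1], a=0.8, b=1.0)
    return (x1 + x2) - (y1 + y2)  # surviving-strength margin for the defender

# Crude time-zero optimization: scan candidate splits
best = max((f / 10 for f in range(11)), key=two_front_outcome)
```

The grid scan over `f` is only a stand-in for the paper's analytical optimum; the point is that each allocation fixes two independent Lanchester engagements whose outcomes can be compared.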
Abstract:
We present a search for associated production of the standard model (SM) Higgs boson and a $Z$ boson where the $Z$ boson decays to two leptons and the Higgs decays to a pair of $b$ quarks in $p\bar{p}$ collisions at the Fermilab Tevatron. We use event probabilities based on SM matrix elements to construct a likelihood function of the Higgs content of the data sample. In a CDF data sample corresponding to an integrated luminosity of 2.7 fb$^{-1}$ we see no evidence of a Higgs boson with a mass between 100 GeV$/c^2$ and 150 GeV$/c^2$. We set 95% confidence level (C.L.) upper limits on the cross-section for $ZH$ production as a function of the Higgs boson mass $m_H$; the limit is 8.2 times the SM prediction at $m_H = 115$ GeV$/c^2$.
Abstract:
Two algorithms are outlined, each of which has interesting features for modeling the spatial variability of rock depth. In this paper, the reduced level of rock at Bangalore, India, is derived from data of 652 boreholes in an area covering 220 sq. km. Support vector machine (SVM) and relevance vector machine (RVM) models have been utilized to predict the reduced level of rock in the subsurface of Bangalore and to study the spatial variability of the rock depth. The SVM, which is firmly grounded in statistical learning theory, performs regression by introducing an epsilon-insensitive loss function. The RVM is a probabilistic model similar to the widespread SVM, but one in which training takes place in a Bayesian framework. Prediction results show the ability of these learning machines to build accurate models of the spatial variability of rock depth with strong predictive capabilities. The paper also highlights the capability of the RVM over the SVM model.
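The epsilon-insensitive loss mentioned in this abstract can be illustrated with a tiny subgradient-descent fit of a linear SVM-style regressor: residuals inside the epsilon tube incur no penalty and trigger no update. This is a toy sketch of the loss mechanism only, not the kernelized formulation or the data used in the paper.

```python
# Sketch of epsilon-insensitive regression: only points whose residual
# exceeds the tube half-width eps contribute a (sub)gradient update.
# The toy data and hyperparameters below are illustrative assumptions.

def fit_linear_svr(xs, ys, eps=0.1, lr=0.01, epochs=200):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            r = (w * x + b) - y
            if abs(r) > eps:               # outside the epsilon tube
                g = 1.0 if r > 0 else -1.0  # subgradient of |r| - eps
                w -= lr * g * x
                b -= lr * g
    return w, b

# Toy data lying on y = 2x; the fit should recover slope ~2
w, b = fit_linear_svr([0.0, 1.0, 2.0, 3.0], [0.0, 2.0, 4.0, 6.0])
```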
Abstract:
An analytical treatment of the performance of guidance laws is possible only in simplistic scenarios. As the complexity of the guidance system increases, a search for analytical solutions becomes quite impractical. In this paper, a new performance measure, based upon the notion of a timescale gap that can be computed through numerical simulations, is developed for performance analysis of guidance laws. Finite-time Lyapunov exponents are used to define the timescale gap. It is shown that the timescale gap can be used to quantify the rate of convergence of trajectories to the collision course. A comparison between several guidance laws, based on the timescale gap, is presented. Realistic simulations studying the effect of aerodynamics and atmospheric variations on the timescale gap of these guidance laws are also presented.
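The finite-time Lyapunov exponent used to define the timescale gap can be estimated numerically from the divergence (or contraction) of two nearby trajectories. The sketch below does this for a one-dimensional contracting flow as a hedged illustration of the quantity itself; the guidance-law dynamics, the flow, and the rate k are assumptions for the example, not the paper's models.

```python
import math

def ftle(flow, x0, delta=1e-6, T=1.0):
    """Finite-time Lyapunov exponent: (1/T) * ln(|dx(T)| / |dx(0)|),
    estimated from two trajectories started delta apart."""
    a, b = flow(x0, T), flow(x0 + delta, T)
    return math.log(abs(b - a) / delta) / T

# Illustrative contracting dynamics dx/dt = -k*x (closed form used for
# brevity): trajectories collapse at rate k, so the FTLE is -k. In the
# paper's setting, comparing fast and slow exponents of the engagement
# dynamics gives the "timescale gap".
k = 2.0  # assumed contraction rate
flow = lambda x0, t: x0 * math.exp(-k * t)
```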
Abstract:
In this paper, we propose a Self-Adaptive Migration Model for Genetic Algorithms, in which the population size, the number of crossover points, and the mutation rate for each population are set adaptively. Further, the migration of individuals between populations is decided dynamically. This paper gives a mathematical schema analysis of the method, stating and showing that the algorithm exploits previously discovered knowledge for a more focused and concentrated search of heuristically high-yielding regions while simultaneously performing a highly explorative search in the other regions of the search space. The effective performance of the algorithm is then shown on standard testbed functions, in comparison with the island model GA (IGA) and the simple GA (SGA).
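A multi-population GA with dynamically decided migration, in the spirit of the abstract above, can be sketched compactly. The adaptation rule (decaying mutation) and the migration trigger (fitness divergence between populations) are illustrative assumptions standing in for the paper's schema, and OneMax is used as a stand-in testbed function.

```python
import random

def fitness(bits):          # OneMax: count of ones (toy testbed function)
    return sum(bits)

def evolve(pop, mut_rate):
    """One generation: truncation selection, one-point crossover, mutation."""
    pop = sorted(pop, key=fitness, reverse=True)
    parents = pop[: len(pop) // 2]
    children = []
    while len(children) < len(pop):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(a))
        child = [g ^ (random.random() < mut_rate) for g in a[:cut] + b[cut:]]
        children.append(child)
    return children

def run(n_pops=2, size=20, length=16, gens=60):
    random.seed(1)
    pops = [[[random.randint(0, 1) for _ in range(length)] for _ in range(size)]
            for _ in range(n_pops)]
    for g in range(gens):
        rate = 0.5 / (1 + g)                    # assumed adaptive decay rule
        pops = [evolve(p, rate) for p in pops]
        bests = [max(map(fitness, p)) for p in pops]
        if max(bests) - min(bests) > 2:         # migrate only when populations diverge
            donor = bests.index(max(bests))
            for i, p in enumerate(pops):
                if i != donor:
                    p[0] = max(pops[donor], key=fitness)[:]
    return max(max(map(fitness, p)) for p in pops)
```

Migrating the donor's best individual only when fitness diverges is one simple way to make migration a dynamic decision rather than a fixed schedule.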
Abstract:
We present a search for the technicolor particles $\rho_{T}$ and $\pi_{T}$ in the process $p\bar{p} \to \rho_{T} \to W\pi_{T}$ at a center of mass energy of $\sqrt{s}=1.96 \mathrm{TeV}$. The search uses a data sample corresponding to approximately $1.9 \mathrm{fb}^{-1}$ of integrated luminosity accumulated by the CDF II detector at the Fermilab Tevatron. The event signature we consider is $W\to \ell\nu$ and $\pi_{T} \to b\bar{b}, b\bar{c}$ or $b\bar{u}$ depending on the $\pi_{T}$ charge. We select events with a single high-$p_T$ electron or muon, large missing transverse energy, and two jets. Jets corresponding to bottom quarks are identified with multiple $b$-tagging algorithms. The observed number of events and the invariant mass distributions are consistent with the standard model background expectations, and we exclude a region at 95% confidence level in the $\rho_T$-$\pi_T$ mass plane. As a result, a large fraction of the region $m(\rho_T) = 180$ - $250 \mathrm{GeV}/c^2$ and $m(\pi_T) = 95$ - $145 \mathrm{GeV}/c^2$ is excluded.
Abstract:
Statistical learning algorithms provide a viable framework for geotechnical engineering modeling. This paper describes two statistical learning algorithms applied to site characterization modeling based on standard penetration test (SPT) data. More than 2700 field SPT values (N) have been collected from 766 boreholes spread over an area of 220 sq. km in Bangalore. The N values have been corrected (N_c) for different parameters such as overburden stress, size of borehole, type of sampler, length of connecting rod, etc. In the three-dimensional site characterization model, the function N_c = N_c(X, Y, Z), where X, Y and Z are the coordinates of the point corresponding to an N_c value, is approximated, from which the N_c value at any half-space point in Bangalore can be determined. The first algorithm uses the least-squares support vector machine (LSSVM), which is related to a ridge-regression type of support vector machine. The second algorithm uses the relevance vector machine (RVM), which combines the strengths of kernel-based methods and Bayesian theory to establish relationships between a set of input vectors and a desired output. The paper also presents a comparative study between the developed LSSVM and RVM models for site characterization. Copyright (C) 2009 John Wiley & Sons, Ltd.
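The ridge-type kernel regression that LSSVM is related to can be sketched in a few lines: solve (K + lambda*I) alpha = y for dual coefficients, then predict by kernel-weighted sums. The RBF kernel choice, the hyperparameters, and the borehole coordinates below are hypothetical illustrations, not the paper's data or exact LSSVM formulation (which also carries a bias term).

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    """Gaussian (RBF) kernel between two coordinate vectors."""
    return np.exp(-gamma * np.sum((a - b) ** 2, axis=-1))

def ridge_kernel_fit(X, y, lam=1e-3, gamma=1.0):
    """Solve (K + lam*I) alpha = y  -- the ridge-regression-type dual system."""
    K = np.array([[rbf(xi, xj, gamma) for xj in X] for xi in X])
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(X, alpha, x, gamma=1.0):
    return sum(a * rbf(xi, x, gamma) for a, xi in zip(alpha, X))

# Hypothetical borehole coordinates (X, Y, Z) with corrected SPT values N_c
X = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 2.0],
              [0.0, 1.0, 1.5], [1.0, 1.0, 2.5]])
y = np.array([10.0, 14.0, 12.0, 16.0])
alpha = ridge_kernel_fit(X, y)
```

With a small regularizer the model nearly interpolates the boreholes, and `predict` then gives N_c estimates at unsampled half-space points.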
Abstract:
A search for a narrow diphoton mass resonance is presented based on data from 3.0 fb^{-1} of integrated luminosity from p-bar p collisions at sqrt{s} = 1.96 TeV collected by the CDF experiment. No evidence of a resonance in the diphoton mass spectrum is observed, and upper limits are set on the cross section times branching fraction of the resonant state as a function of Higgs boson mass. The resulting limits exclude Higgs bosons with masses below 106 GeV at a 95% Bayesian credibility level (C.L.) for one fermiophobic benchmark model.
Abstract:
We present a search for WW and WZ production in final states that contain a charged lepton (electron or muon) and at least two jets, produced in sqrt(s) = 1.96 TeV ppbar collisions at the Fermilab Tevatron, using data corresponding to 1.2 fb-1 of integrated luminosity collected with the CDF II detector. Diboson production in this decay channel has yet to be observed at hadron colliders due to the large single-W-plus-jets background. An artificial neural network has been developed to increase signal sensitivity, as compared with an event selection based on conventional cuts. We set a 95% confidence level upper limit on sigma_WW * BR(W->lnu, W->jets) + sigma_WZ * BR(W->lnu, Z->jets).
Abstract:
We performed a signature-based search for long-lived charged massive particles (CHAMPs) produced in 1.0 $\rm{fb}^{-1}$ of $\bar{p}p$ collisions at $\sqrt{s}=1.96$ TeV, collected with the CDF II detector using a high transverse-momentum ($p_T$) muon trigger. The search used time-of-flight to isolate slowly moving, high-$p_T$ particles. One event passed our selection cuts with an expected background of $1.9 \pm 0.2$ events. We set an upper bound on the production cross section, and, interpreting this result within the context of a stable scalar top quark model, set a lower limit on the particle mass of 249 GeV/$c^2$ at 95% C.L.
Abstract:
We present a signature-based search for anomalous production of events containing a photon, two jets, of which at least one is identified as originating from a b quark, and missing transverse energy. The search uses data corresponding to 2.0 fb^-1 of integrated luminosity from p-pbar collisions at a center-of-mass energy of sqrt(s) = 1.96 TeV, collected with the CDF II detector at the Fermilab Tevatron. From 6,697,466 events with a photon candidate with transverse energy ET > 25 GeV, we find 617 events with missing transverse energy > 25 GeV and two or more jets with ET > 15 GeV, at least one identified as originating from a b quark, versus an expectation of 607 +- 113 events. Increasing the requirement on missing transverse energy to 50 GeV, we find 28 events versus an expectation of 30 +- 11 events. We find no indications of non-standard-model phenomena.
Abstract:
We present results of a signature-based search for new physics using a dijet plus missing transverse energy data sample collected in 2 fb-1 of p-pbar collisions at sqrt(s) = 1.96 TeV with the CDF II detector at the Fermilab Tevatron. We observe no significant event excess with respect to the standard model prediction and extract a 95% C.L. upper limit on the cross section times acceptance for a potential contribution from a non-standard model process. Based on this limit the mass of a first or second generation scalar leptoquark is constrained to be above 187 GeV/c^2.
Abstract:
We present the result of a search for a massive color-octet vector particle, (e.g. a massive gluon) decaying to a pair of top quarks in proton-antiproton collisions with a center-of-mass energy of 1.96 TeV. This search is based on 1.9 fb$^{-1}$ of data collected using the CDF detector during Run II of the Tevatron at Fermilab. We study $t\bar{t}$ events in the lepton+jets channel with at least one $b$-tagged jet. A massive gluon is characterized by its mass, decay width, and the strength of its coupling to quarks. These parameters are determined according to the observed invariant mass distribution of top quark pairs. We set limits on the massive gluon coupling strength for masses between 400 and 800 GeV$/c^2$ and width-to-mass ratios between 0.05 and 0.50. The coupling strength of the hypothetical massive gluon to quarks is consistent with zero within the explored parameter space.
Abstract:
In this article, the problem of two Unmanned Aerial Vehicles (UAVs) cooperatively searching an unknown region is addressed. The search region is discretized into hexagonal cells, and each cell is assumed to possess an uncertainty value. The UAVs have to search these cells cooperatively, taking limited endurance, sensor, and communication range constraints into account. Due to limited endurance, the UAVs need to return to the base station for refuelling, and they must also select a base station when multiple base stations are present. This article proposes a route planning algorithm that takes endurance time constraints into account and uses game-theoretic strategies to reduce the uncertainty. The route planning algorithm selects only those cells that ensure the agent can return to at least one of the available bases. A set of paths is formed from these cells, from which the game-theoretic strategies select a path that yields maximum uncertainty reduction. We explore non-cooperative Nash, cooperative, and security strategies from game theory to enhance the search effectiveness. Monte Carlo simulations are carried out that show the superiority of the game-theoretic strategies over a greedy strategy for different look-ahead path lengths. Within the game-theoretic strategies, the non-cooperative Nash and cooperative strategies perform similarly in the ideal case, but the Nash strategy performs better than the cooperative strategy when the perceived information differs. We also propose a heuristic based on partitioning the search space into sectors to reduce computational overhead without performance degradation.
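The core feasibility rule in the abstract above - consider only cells from which the vehicle can still reach some base within its remaining endurance - can be sketched with a small lookahead search. This is a simplified single-UAV, square-grid, greedy illustration (the paper uses hexagonal cells and game-theoretic path selection); all names and parameters here are assumptions for the example.

```python
from itertools import product

def reachable(cell, bases, fuel_left):
    """Feasibility rule: some base is within remaining endurance (grid distance)."""
    return any(abs(cell[0] - b[0]) + abs(cell[1] - b[1]) <= fuel_left
               for b in bases)

def best_path(start, uncertainty, bases, endurance, depth=3):
    """Among fixed-length paths whose every cell keeps a base reachable,
    pick the one with maximum total uncertainty reduction (greedy stand-in
    for the paper's game-theoretic selection)."""
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    best, best_gain = None, -1.0
    for steps in product(moves, repeat=depth):
        cell, fuel, gain, ok = start, endurance, 0.0, True
        for dx, dy in steps:
            cell = (cell[0] + dx, cell[1] + dy)
            fuel -= 1
            if cell not in uncertainty or not reachable(cell, bases, fuel):
                ok = False
                break
            gain += uncertainty[cell]
        if ok and gain > best_gain:
            best, best_gain = steps, gain
    return best, best_gain
```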
Abstract:
Spike detection in neural recordings is the initial step in the creation of brain-machine interfaces. The Teager energy operator (TEO) treats a spike as an increase in the `local' energy and detects this increase. The performance of the TEO in detecting action potential spikes suffers due to its sensitivity to the frequency of spikes in the presence of the noise found in microelectrode array (MEA) recordings. The multiresolution TEO (mTEO) method overcomes this shortcoming of the TEO by tuning the parameter k to an optimal value m so as to match the frequency of the spike. In this paper, we present an algorithm for the mTEO using the multiresolution structure of wavelets along with inbuilt lowpass filtering of the subband signals. The algorithm is efficient and can be implemented for real-time processing of neural signals for spike detection. The performance of the algorithm is tested on a simulated neural signal with 10 spike templates obtained from [14]. The background noise is modeled as a colored Gaussian random process. Using the noise standard deviation and autocorrelation functions obtained from recorded data, background noise was simulated by an autoregressive (AR(5)) filter. The simulations show a spike detection accuracy of 90% and above with less than 5% false positives at an SNR of 2.35 dB, as compared to the 80% accuracy and 10% false positives reported in [6] on simulated neural signals.
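The k-lag Teager energy operator at the heart of the abstract above, psi_k[n] = x[n]^2 - x[n-k]*x[n+k], can be demonstrated on a synthetic trace. The signal, the lag k, and the simple mean-based threshold below are illustrative assumptions; the paper's mTEO additionally uses wavelet subbands and lowpass filtering.

```python
import math

def teager(x, k=1):
    """k-lag Teager energy operator: psi_k[n] = x[n]^2 - x[n-k]*x[n+k]."""
    return [x[n] ** 2 - x[n - k] * x[n + k] for n in range(k, len(x) - k)]

# Synthetic trace: low-amplitude background with one sharp "spike" at n=100
signal = [0.05 * math.sin(0.2 * n) for n in range(200)]
for i, v in enumerate([0.4, 1.0, 0.4]):
    signal[100 + i] += v

energy = teager(signal, k=2)
# Simple threshold on the mean absolute energy (assumed detection rule)
threshold = 4 * sum(abs(e) for e in energy) / len(energy)
detections = [j for j, e in enumerate(energy) if e > threshold]
# energy index j corresponds to sample j + k in the original signal
```

The operator responds strongly where the waveform is both large and rapidly changing, which is why the lone spike stands out against the slow background oscillation.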