917 results for MODEL SEARCH


Relevance: 30.00%

Abstract:

Genetic algorithms (GAs) are search methods that are being employed in a multitude of applications with extremely large search spaces. Recently, there has been considerable interest among GA researchers in understanding and formalizing the working of GAs. In an earlier paper, we introduced the notion of binomially distributed populations as the central idea behind an exact "populationary" model of the large-population dynamics of the GA operators for objective functions called "functions of unitation." In this paper, we extend this populationary model of GA dynamics to a more general class of objective functions called functions of unitation variables. We generalize the notion of a binomially distributed population to a generalized binomially distributed population (GBDP). We show that the effects of selection, crossover, and mutation can be exactly modelled after decomposing the population into GBDPs. Based on this generalized model, we have implemented a GA simulator for functions of two unitation variables (GASIM 2), and the distributions predicted by GASIM 2 match those obtained from actual GA runs. The generalized populationary model of GA dynamics not only presents a novel and natural way of interpreting the workings of GAs with large populations, but also provides for an efficient implementation of the model as a GA simulator. (C) Elsevier Science Inc. 1997.
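
As a rough, hypothetical illustration of the building block behind this model (not the authors' GASIM 2 code), the Python sketch below checks that a uniformly initialized population of binary strings is binomially distributed over unitation, i.e., over the number of ones per string; the string length, population size, and bit probability are made-up values.

```python
# Hypothetical illustration: a randomly initialized population of l-bit strings
# is binomially distributed over unitation (number of ones), the starting point
# of the "populationary" model described above. Not the GASIM 2 implementation.
import random
from math import comb

def unitation_histogram(pop_size=10000, l=16, p=0.5):
    """Empirical distribution of unitation u = number of ones per string."""
    counts = [0] * (l + 1)
    for _ in range(pop_size):
        u = sum(1 for _ in range(l) if random.random() < p)
        counts[u] += 1
    return [c / pop_size for c in counts]

def binomial_pmf(l=16, p=0.5):
    """Exact binomial distribution B(l, p) over unitation values."""
    return [comb(l, u) * p**u * (1 - p)**(l - u) for u in range(l + 1)]

if __name__ == "__main__":
    emp = unitation_histogram()
    exact = binomial_pmf()
    for u, (e, x) in enumerate(zip(emp, exact)):
        print(f"u={u:2d}  empirical={e:.4f}  binomial={x:.4f}")
```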

Relevance: 30.00%

Abstract:

Bid optimization is becoming quite popular in sponsored search auctions on the Web. Given a keyword and the maximum willingness to pay of each advertiser interested in the keyword, the bid optimizer generates a profile of bids for the advertisers with the objective of maximizing customer retention without compromising the revenue of the search engine. In this paper, we present a bid optimization algorithm based on a Nash bargaining model in which the first player is the search engine and the second player is a virtual agent representing all the bidders. We make the realistic assumption that each bidder specifies a maximum willingness-to-pay value and a discrete, finite set of bid values. We show that the Nash bargaining solution for this problem always lies on a certain edge of the convex hull such that one endpoint of the edge is the vector of maximum willingness to pay of all the bidders. We show that the other endpoint of this edge can be computed as the solution of a linear programming problem. We also show how the solution can be transformed into a bid profile for the advertisers.
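
A minimal, hypothetical sketch of the Nash bargaining idea (not the paper's LP-based algorithm): enumerate a small discrete set of candidate bid profiles, treat search-engine revenue and aggregate advertiser surplus as the two players' utilities, and pick the profile that maximizes the Nash product. The willingness-to-pay values, bid levels, and utility definitions are all assumptions for illustration.

```python
# Toy two-player Nash bargaining illustration (not the paper's algorithm):
# player 1 = search engine revenue, player 2 = aggregate advertiser surplus.
# We enumerate a discrete set of candidate bid profiles and maximize the Nash product.
from itertools import product

willingness = [10.0, 8.0, 5.0]   # hypothetical max willingness to pay per advertiser
bid_levels = [0.5, 0.75, 1.0]    # each advertiser bids one of these fractions of willingness

def nash_bargaining(disagreement=(0.0, 0.0)):
    best, best_profile = -1.0, None
    for fractions in product(bid_levels, repeat=len(willingness)):
        bids = [w * f for w, f in zip(willingness, fractions)]
        revenue = sum(bids)                                       # search engine's utility
        surplus = sum(w - b for w, b in zip(willingness, bids))   # virtual bidder agent's utility
        gain1, gain2 = revenue - disagreement[0], surplus - disagreement[1]
        if gain1 >= 0 and gain2 >= 0 and gain1 * gain2 > best:
            best, best_profile = gain1 * gain2, bids
    return best_profile, best

if __name__ == "__main__":
    profile, nash_product = nash_bargaining()
    print("bid profile:", profile, "Nash product:", round(nash_product, 2))
```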

Relevance: 30.00%

Abstract:

Lifetime calculations for large, dense sensor networks with fixed energy resources, together with the remaining residual energy, have shown that for a constant energy resource the fault rate at the cluster head is invariant to network size when using the network layer with no MAC losses. Even after increasing the battery capacity of the nodes, the total lifetime does not increase beyond a limit of about 8 times. Because this is a serious limitation, much research has been done at the MAC layer, which can adapt to the specific connectivity, traffic, and channel-polling needs of sensor networks. Many MAC protocols control the channel polling of the new radios available to sensor nodes; this further reduces communication overhead through idling and sleep scheduling, thus extending the lifetime of the monitoring application. We address two issues that affect the distributed characteristics and performance of connected MAC nodes: (1) determining the theoretical minimum rate, based on joint coding, for a correlated data source at a single hop; (2a) estimating cluster-head errors with a Bayes rule for routing with persistence clustering, when node densities are the same and stored as prior probabilities at the network layer; and (2b) estimating an upper bound on routing errors with passive clustering, when the node densities at the multi-hop MACs are unknown and not stored at the multi-hop nodes a priori. We evaluate several MAC-based sensor network protocols and study their effects on sensor network lifetime, and we design a renewable-energy MAC routing protocol for the case in which the probabilities of active nodes are not known a priori. From theoretical derivations we show that, for a Bayes rule with known class densities ω1 and ω2 and expected (Bayes) error P*, the single-hop error rate is bounded by P ≤ 2P*. Using cross-layer simulation of a large sensor-network MAC setup, we study the effects of energy losses and of the error rate on finding node densities sufficient for reliable multi-hop communication when node densities are unknown. The simulation results show that, even though the lifetime is comparable, the expected Bayesian posterior error is close to or higher than the 2P* bound.
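
The single-hop bound quoted above (P ≤ 2P*, with P* the Bayes error for class densities ω1 and ω2) can be illustrated numerically. The sketch below assumes two hypothetical one-dimensional Gaussian class densities with equal priors and integrates the Bayes error; it illustrates the ingredients of the bound, not the paper's derivation or its network model.

```python
# Illustrative check of the two-class quantities behind the bound P <= 2P*:
# with assumed Gaussian class densities for omega1 and omega2 (equal priors),
# we integrate the Bayes error P* numerically. All densities are hypothetical.
import math

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def bayes_error(mu1=0.0, mu2=2.0, sigma=1.0, prior1=0.5, lo=-10.0, hi=12.0, n=20000):
    """P* = integral of min(prior1*p1(x), prior2*p2(x)) dx (trapezoidal rule)."""
    dx = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * dx
        val = min(prior1 * gauss(x, mu1, sigma), (1 - prior1) * gauss(x, mu2, sigma))
        total += val * (dx if 0 < i < n else dx / 2)
    return total

if __name__ == "__main__":
    p_star = bayes_error()
    print(f"Bayes error P* = {p_star:.4f}, bound 2P* = {2 * p_star:.4f}")
```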

Relevance: 30.00%

Abstract:

It is important to know and to quantify the liquid holdups, both dynamic and static, at local levels, as this leads to a proper understanding of various blast furnace phenomena such as slag/metal-gas-solid reactions, gas flow behaviour, and the interfacial area between gas, solid, and liquid. In the present study, considering the importance of local liquid holdup and the non-availability of holdup data in these systems, an attempt has been made to quantify the local holdups in the dropping zone and the zone around the raceway in a cold model study using a packing that is non-wetting for the liquid. To quantify the liquid holdups at a microscopic level, a previously developed technique, X-ray radiography, has been used. It is observed that the liquid flows in preferred paths or channels which carry droplets or rivulets. It has been found that the local holdup in some regions of the packed bed is much higher than the average at a particular flow rate, and this can have important consequences for the correct modelling of such systems.

Relevance: 30.00%

Abstract:

We consider a visual search problem studied by Sripati and Olson, where the objective is to identify an oddball image embedded among multiple distractor images as quickly as possible. We model this visual search task as an active sequential hypothesis testing problem (ASHT problem). Chernoff in 1959 proposed a policy for which the expected delay to decision is asymptotically optimal, the asymptotics being under vanishing error probabilities. We first prove a stronger property on the moments of the delay until a decision, under the same asymptotics. Applying the result to the visual search problem, we then propose a "neuronal metric" on the measured neuronal responses that captures the discriminability between images. From an empirical study we obtain a remarkable correlation (r = 0.90) between the proposed neuronal metric and the speed of discrimination between the images. Although this correlation is lower than with the L1 metric used by Sripati and Olson, the proposed metric has the advantage of being firmly grounded in formal decision theory.
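
As a purely hypothetical instance of such a metric (the paper's exact definition is not reproduced here), the sketch below assumes Poisson spiking and scores the discriminability of two images by the symmetrized Kullback-Leibler divergence between per-neuron firing rates, summed over a small population; the firing rates are made up.

```python
# Hypothetical sketch of a "neuronal metric" of the sort described above:
# assuming Poisson spiking, discriminability between two images is taken as
# the symmetrized KL divergence between per-neuron firing rates, summed over
# neurons. An illustration of the idea, not the paper's definition.
import math

def kl_poisson(lam_p, lam_q):
    """KL divergence D(Poisson(lam_p) || Poisson(lam_q)) for positive rates."""
    return lam_q - lam_p + lam_p * math.log(lam_p / lam_q)

def neuronal_metric(rates_a, rates_b):
    """Symmetrized KL divergence summed over a population of neurons."""
    return sum(kl_poisson(a, b) + kl_poisson(b, a) for a, b in zip(rates_a, rates_b))

if __name__ == "__main__":
    image_a = [12.0, 5.0, 20.0]   # made-up firing rates (spikes/s) for three neurons
    image_b = [10.0, 9.0, 14.0]
    print(f"discriminability = {neuronal_metric(image_a, image_b):.3f}")
```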

Relevance: 30.00%

Abstract:

How do we perform rapid visual categorization? It is widely thought that categorization involves evaluating the similarity of an object to other category items, but the underlying features and similarity relations remain unknown. Here, we hypothesized that categorization performance is based on perceived similarity relations between items within and outside the category. To this end, we measured the categorization performance of human subjects on three diverse visual categories (animals, vehicles, and tools) and across three hierarchical levels (superordinate, basic, and subordinate levels among animals). For the same subjects, we measured their perceived pairwise similarities between objects using a visual search task. Regardless of category and hierarchical level, we found that the time taken to categorize an object could be predicted using its similarity to members within and outside its category. We were able to account for several classic categorization phenomena, such as (a) the longer times required to reject category membership; (b) the longer times to categorize atypical objects; and (c) differences in performance across tasks and across hierarchical levels. These categorization times were also accounted for by a model that extracts coarse structure from an image. The striking agreement observed between categorization and visual search suggests that these two disparate tasks depend on a shared coarse object representation.
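
A minimal sketch of the qualitative claim above, i.e., that categorization time can be predicted from perceived similarity to items within and outside the category. The linear form, the signs of the weights, and the similarity values are illustrative assumptions, not the paper's fitted model.

```python
# Minimal sketch: categorization time predicted from within- and outside-category
# similarity. Weights, baseline, and similarity values are assumed for illustration.
def predicted_rt(sim_within, sim_outside, base=0.8, w_in=-0.5, w_out=0.8):
    """Higher similarity to category members speeds 'yes' responses;
    higher similarity to non-members slows them (signs are illustrative)."""
    mean_in = sum(sim_within) / len(sim_within)
    mean_out = sum(sim_outside) / len(sim_outside)
    return base + w_in * mean_in + w_out * mean_out

if __name__ == "__main__":
    # a typical animal: very similar to other animals, dissimilar to vehicles/tools
    print(round(predicted_rt([0.8, 0.7, 0.9], [0.2, 0.1]), 3))
    # an atypical animal: less similar to other animals -> longer predicted time
    print(round(predicted_rt([0.4, 0.5, 0.3], [0.3, 0.4]), 3))
```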

Relevance: 30.00%

Abstract:

Single features such as line orientation and length are known to guide visual search, but relatively little is known about how multiple features combine in search. To address this question, we investigated how search for targets differing in multiple features (intensity, length, orientation) from the distracters is related to searches for targets differing in each of the individual features. We tested race models (based on reaction times) and coactivation models (based on reciprocals of reaction times) for their ability to predict multiple-feature searches. Multiple-feature searches were best accounted for by a coactivation model in which feature information combined linearly (r = 0.95). This result agrees with the classic finding that these features are separable; i.e., subjective dissimilarity ratings sum linearly. We then replicated the classical finding that the length and width of a rectangle are integral features; in other words, they combine nonlinearly in visual search. However, to our surprise, upon including aspect ratio as an additional feature, length and width combined linearly, and this model outperformed all other models. Thus, the length and width of a rectangle became separable when considered together with aspect ratio. This finding predicts that searches involving shapes with identical aspect ratio should be more difficult than searches where shapes differ in aspect ratio. We confirmed this prediction on a variety of shapes. We conclude that features in visual search coactivate linearly, and we demonstrate for the first time that aspect ratio is a novel feature that guides visual search.
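
A minimal sketch of the linear coactivation rule described above: the reciprocal of the search reaction time for a target differing in several features is modeled as a weighted sum of the reciprocals for the single-feature searches. The equal weights and example reaction times are assumptions for illustration, not the paper's fitted values.

```python
# Sketch of a linear coactivation rule on reciprocal reaction times ("search rates"):
# the combined rate is a weighted sum of single-feature rates. Weights and RTs are
# made up for illustration.
def coactivation_rate(single_feature_rts, weights=None):
    rates = [1.0 / rt for rt in single_feature_rts]
    if weights is None:
        weights = [1.0] * len(rates)   # equal weights as a default assumption
    return sum(w * r for w, r in zip(weights, rates))

if __name__ == "__main__":
    # target differs from distracters in intensity, length, and orientation
    rt_intensity, rt_length, rt_orientation = 1.2, 0.9, 1.5   # seconds (hypothetical)
    combined_rate = coactivation_rate([rt_intensity, rt_length, rt_orientation])
    print(f"predicted combined RT = {1.0 / combined_rate:.3f} s")
```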

Relevance: 30.00%

Abstract:

The problem of scaling up data integration, such that new sources can be quickly utilized as they are discovered, remains elusive: global schemas for integrated data are difficult to develop and expand, and schema and record matching techniques are limited by the fact that data and metadata are often under-specified and must be disambiguated by data experts. One promising approach is to avoid using a global schema and instead to develop keyword search-based data integration, in which the system lazily discovers associations enabling it to join together matches to keywords and returns ranked results. The user is expected to understand the data domain and provide feedback about the quality of answers. The system generalizes such feedback to learn how to correctly integrate data. A major open challenge is that, under this model, the user only sees and offers feedback on a few "top-" results: this result set must be carefully selected to include answers of high relevance and answers that are highly informative when feedback is given on them. Existing systems merely focus on predicting relevance, by composing the scores of various schema and record matching algorithms. In this paper, we show how to predict the uncertainty associated with a query result's score, as well as how informative feedback on a given result is. We build upon these foundations to develop an active learning approach to keyword search-based data integration, and we validate the effectiveness of our solution over real data from several very different domains.
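
An illustrative sketch (not the paper's system) of selecting a small "top" result set that balances predicted relevance against how informative feedback on each result would be, here proxied by the predicted uncertainty of its score. The result names, scores, and the convex-combination scoring rule are assumptions.

```python
# Hypothetical active-learning-style selection: rank candidate query results by a
# trade-off between predicted relevance and predicted score uncertainty (a proxy
# for how informative user feedback on that result would be).
from dataclasses import dataclass

@dataclass
class Result:
    answer: str
    relevance: float      # predicted relevance score
    uncertainty: float    # predicted uncertainty of that score

def select_for_feedback(results, k=3, trade_off=0.5):
    """Rank by a convex combination of relevance and uncertainty, return the top k."""
    scored = sorted(
        results,
        key=lambda r: (1 - trade_off) * r.relevance + trade_off * r.uncertainty,
        reverse=True,
    )
    return scored[:k]

if __name__ == "__main__":
    candidates = [
        Result("join(papers, authors)", 0.9, 0.1),
        Result("join(papers, venues)", 0.6, 0.8),
        Result("join(authors, grants)", 0.4, 0.9),
        Result("join(papers, grants)", 0.3, 0.2),
    ]
    for r in select_for_feedback(candidates):
        print(r.answer)
```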

Relevance: 30.00%

Abstract:

In this paper, we search for the regions of the phenomenological minimal supersymmetric standard model (pMSSM) parameter space where one can expect a moderate Higgs mixing angle (α) with relatively light (up to 600 GeV) additional Higgses after satisfying the current LHC data. We perform a global fit analysis using the most up-to-date data (till December 2014) from the LHC and Tevatron experiments. The constraints coming from the precision measurements of the rare b decays B_s → μ⁺μ⁻ and b → sγ are also considered. We find that the low-M_A (≲ 350 GeV) and high-tan β (≳ 25) regions are disfavored by the combined effect of the global analysis and the flavor data. However, regions with Higgs mixing angle α ≈ 0.1-0.8 are still allowed by the current data. We then study the existing direct-search bounds on the heavy scalar/pseudoscalar (H/A) and charged Higgs boson (H±) masses and branchings at the LHC. It has been found that regions with low to moderate values of tan β with light additional Higgses (mass ≤ 600 GeV) are unconstrained by the data, while regions with tan β > 20 are excluded by the direct-search bounds from the LHC 8 TeV data. The possibility of probing the region with tan β ≤ 20 at the high-luminosity run of the LHC is also discussed, giving special attention to the H → hh, H/A → tt̄, and H/A → τ⁺τ⁻ decay modes.

Relevance: 30.00%

Abstract:

We study an s-channel resonance R as a viable candidate to fit the diboson excess reported by ATLAS. We compute the contribution of the ∼2 TeV resonance R to semileptonic and leptonic final states at the 13 TeV LHC. To explain the absence of an excess in the semileptonic channel, we explore the possibility that the particle R decays to additional light scalars X, X or X, Y. A modified analysis strategy is proposed to study the three-particle final state of the resonance decay and to identify the decay channels of X. Associated production of R with gauge bosons is studied in detail to identify the production mechanism of R. We construct comprehensive categories for vector and scalar beyond-the-standard-model particles which may play the role of the particles R, X, Y, and we find alternate channels to fix the new couplings and search for these particles.

Relevance: 30.00%

Abstract:

Part I of the thesis describes the olfactory searching and scanning behaviors of rats in a wind tunnel, and a detailed movement analysis of terrestrial arthropod olfactory scanning behavior. Olfactory scanning behaviors in rats may be a behavioral correlate of hippocampal place cell activity.

Part II focuses on the organization of olfactory perception, what it suggests about a natural order for chemicals in the environment, and what this in turn suggests about the organization of the olfactory system. A model of odor quality space (analogous to the "color wheel") is presented. This model defines relationships between odor qualities perceived by human subjects based on a quantitative similarity measure. Compounds containing carbon, nitrogen, or sulfur elicit odors that are contiguous in this odor representation, which thus allows one to predict the broad class of odor qualities a compound is likely to elicit. Based on these findings, a natural organization for olfactory stimuli is hypothesized: the order provided by the metabolic process. This hypothesis is tested by comparing compounds that are structurally similar, perceptually similar, and metabolically similar in a psychophysical cross-adaptation paradigm. Metabolically similar compounds consistently evoked shifts in odor quality and intensity under cross-adaptation, while compounds that were structurally similar or perceptually similar did not. This suggests that the olfactory system may process metabolically similar compounds using the same neural pathways, and that metabolic similarity may be the fundamental metric about which olfactory processing is organized. In other words, the olfactory system may be organized around a biological basis.

The idea of a biological basis for olfactory perception represents a shift in how olfaction is understood. The biological view has predictive power while the current chemical view does not, and the biological view provides explanations for some of the most basic questions in olfaction that are unanswered in the chemical view. Existing data do not disprove a biological view, and they are consistent with the basic hypotheses that arise from this viewpoint.

Relevance: 30.00%

Abstract:

A search for dielectron decays of heavy neutral resonances has been performed using proton-proton collision data collected at √s = 7 TeV by the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) in 2011. The data sample corresponds to an integrated luminosity of 5 fb⁻¹. The dielectron mass distribution is consistent with Standard Model (SM) predictions. An upper limit on the ratio of the cross section times branching fraction of new bosons, normalized to the cross section times branching fraction of the Z boson, is set at the 95% confidence level. This result is translated into limits on the mass of new neutral particles at the level of 2120 GeV for the Z′ in the Sequential Standard Model, 1810 GeV for the superstring-inspired Z′_ψ resonance, and 1940 (1640) GeV for Kaluza-Klein gravitons with coupling parameter k/M_Pl of 0.10 (0.05).

Relevance: 30.00%

Abstract:

An array of two spark chambers and six trays of plastic scintillation counters was used to search for unaccompanied fractionally charged particles in cosmic rays near sea level. No acceptable events were found with energy losses by ionization between 0.04 and 0.7 times that of unit-charged minimum-ionizing particles. New 90%-confidence upper limits were thereby established for the fluxes of fractionally charged particles in cosmic rays, namely, (1.04 ± 0.07)×10⁻¹⁰ and (2.03 ± 0.16)×10⁻¹⁰ cm⁻² sr⁻¹ sec⁻¹ for minimum-ionizing particles with charges 1/3 and 2/3, respectively.

In order to be certain that the spark chambers could have functioned for the low levels of ionization expected from particles with small fractional charges, tests were conducted to estimate the efficiency of the chambers as they had been used in this experiment. These tests showed that the spark-chamber system with the track-selection criteria used might have been over 99% efficient for the entire range of energy losses considered.

Lower limits were then obtained for the mass of a quark by considering the above flux limits and a particular model for the production of quarks in cosmic rays. In this model, which involves the multi-peripheral Regge hypothesis, the production cross section and the corresponding mass limit are critically dependent on the Regge trajectory assigned to a quark. If quarks are "elementary" with a flat trajectory, the mass of a quark can be expected to be at least 6 ± 2 BeV/c². If quarks have a trajectory with unit slope, just as the existing hadrons do, the mass of a quark might be as small as 1.3 ± 0.2 BeV/c². For a trajectory with unit slope and a mass larger than a couple of BeV/c², the production cross section may be so low that quarks might never be observed in nature.
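
As a numerical aside on how zero observed events translate into 90%-confidence flux limits of the kind quoted above: the Poisson 90% upper limit on the mean for zero events is about 2.30, which is divided by the exposure (effective aperture × live time). The aperture and live-time values in the sketch below are made up, not the experiment's, so the output is illustrative only.

```python
# Hedged sketch: 90% CL flux upper limit from zero observed events.
N_UP_90CL = 2.30   # Poisson 90% CL upper limit on the mean for 0 observed events

def flux_upper_limit(aperture_cm2_sr, live_time_sec):
    """Upper limit on flux in cm^-2 sr^-1 sec^-1 for zero observed events."""
    return N_UP_90CL / (aperture_cm2_sr * live_time_sec)

if __name__ == "__main__":
    aperture = 2.0e3    # cm^2 sr (assumed value, not the experiment's)
    live_time = 1.0e7   # seconds of live time (assumed)
    print(f"flux < {flux_upper_limit(aperture, live_time):.2e} cm^-2 sr^-1 sec^-1")
```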

Relevance: 30.00%

Abstract:

Atlantic Croaker (Micropogonias undulatus) production dynamics along the U.S. Atlantic coast are regulated by fishing and winter water temperature. Stakeholders for this resource have recommended investigating the effects of climate covariates in assessment models. This study used state-space biomass dynamic models without (model 1) and with (model 2) the minimum winter estuarine temperature (MWET) to examine MWET effects on Atlantic Croaker population dynamics during 1972–2008. In model 2, MWET was introduced into the intrinsic rate of population increase (r). For both models, a prior probability distribution (prior) was constructed for r or a scaling parameter (r0); inputs were the fishery removals and fall biomass indices developed using data from the Multispecies Bottom Trawl Survey of the Northeast Fisheries Science Center, National Marine Fisheries Service, and the Coastal Trawl Survey of the Southeast Area Monitoring and Assessment Program. Model sensitivity runs incorporated a uniform(0.01, 1.5) prior for r or r0 and bycatch data from the shrimp-trawl fishery. All model variants produced similar results and therefore supported the conclusion of a low risk of overfishing for the Atlantic Croaker stock in the 2000s. However, the data statistically supported only model 1 and its configuration that included the shrimp-trawl fishery bycatch. The process errors of these models showed slightly positive and significant correlations with MWET, indicating that warmer winters would enhance Atlantic Croaker biomass production. These inconclusive, somewhat conflicting results indicate that biomass dynamic models should not integrate MWET, pending, perhaps, the accumulation of longer time series of the variables controlling the production dynamics of Atlantic Croaker, preferably including estimates of winter-induced Atlantic Croaker kills.
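
A minimal deterministic sketch of a Schaefer-type biomass dynamic model with MWET entering the intrinsic rate of increase, in the spirit of "model 2" above. The functional form, parameter values, and data below are assumptions for illustration; the paper's models are state-space (with process error) and are fit to the survey indices and removals described above.

```python
# Deterministic Schaefer-type projection with a temperature covariate on r
# (illustrative only; parameters and data are made up).
def project_biomass(b0, catches, mwet, r0=0.4, beta=0.05, k=100.0):
    """One-step projection: B[t+1] = B[t] + r_t*B[t]*(1 - B[t]/K) - C[t]."""
    biomass = [b0]
    for c, temp in zip(catches, mwet):
        r_t = r0 * (1.0 + beta * temp)   # warmer winters raise production (assumed form)
        b_next = biomass[-1] + r_t * biomass[-1] * (1 - biomass[-1] / k) - c
        biomass.append(max(b_next, 1e-6))
    return biomass

if __name__ == "__main__":
    catches = [5.0, 8.0, 12.0, 6.0]   # hypothetical fishery removals
    mwet = [2.0, -1.0, 0.5, 3.0]      # hypothetical minimum winter temperatures (deg C)
    print([round(b, 1) for b in project_biomass(50.0, catches, mwet)])
```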