980 results for Search problems


Relevance:

30.00%

Publisher:

Abstract:

Many particles proposed by theories, such as GUT monopoles, nuclearites and 1/5 charge superstring particles, can be categorized as Slow-moving, Ionizing, Massive Particles (SIMPs).

Detailed calculations of the signal-to-noise ratios in various acoustic and mechanical methods for detecting such SIMPs are presented. It is shown that the previous belief that such methods are intrinsically prohibited by thermal noise is incorrect, and that ways to solve the thermal noise problem are already within the reach of today's technology. In fact, many running and finished gravitational wave detection (GWD) experiments are already sensitive to certain SIMPs. As an example, a published GWD result is used to obtain a flux limit for nuclearites.

The result of a search using a scintillator array on Earth's surface is reported. A flux limit of 4.7 x 10^(-12) cm^(-2) sr^(-1) s^(-1) (90% c.l.) is set for any SIMP with 2.7 x 10^(-4) < β < 5 x 10^(-3) and ionization greater than 1/3 that of minimum ionizing muons. Although this limit is above the limits from underground experiments for typical supermassive particles (10^(16) GeV), it is a new limit in certain β and ionization regions for less massive particles (~10^(9) GeV) unable to penetrate deep underground, and it implies a stringent limit on the fraction of the dark matter that can be composed of massive electrically and/or magnetically charged particles.

The prospects for a future SIMP search with the MACRO detector are discussed. The special problem of triggering on SIMPs is examined and a circuit is proposed that may solve most of the problems of triggers previously proposed or used by others, and may even enable MACRO to detect certain SIMP species with β as low as that corresponding to the orbital velocity around the Earth.

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents a novel class of algorithms for the solution of scattering and eigenvalue problems on general two-dimensional domains under a variety of boundary conditions, including non-smooth domains and certain "Zaremba" boundary conditions, for which Dirichlet and Neumann conditions are specified on different portions of the domain boundary. The theoretical basis of the methods for Zaremba problems on smooth domains is detailed information, put forth for the first time in this thesis, about the singularity structure of solutions of the Laplace operator under boundary conditions of Zaremba type. The new methods, which are based on the use of Green functions and integral equations, incorporate a number of algorithmic innovations, including a fast and robust eigenvalue-search algorithm, use of the Fourier Continuation method for regularization of all smooth-domain Zaremba singularities, and newly derived quadrature rules which give rise to high-order convergence even around singular points of the Zaremba problem. The resulting algorithms enjoy high-order convergence, and they can tackle a variety of elliptic problems under general boundary conditions, including, for example, eigenvalue problems, scattering problems, and, in particular, eigenfunction expansion for time-domain problems in non-separable physical domains with mixed boundary conditions.
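
For reference, the Zaremba boundary conditions discussed above take the following standard form for the Laplace operator (the notation below is ours, not the thesis'):

```latex
% Zaremba (mixed) boundary-value problem for the Laplacian on a domain \Omega
% whose boundary splits into a Dirichlet portion \Gamma_D and a Neumann
% portion \Gamma_N; f and g are prescribed boundary data.
\[
  \Delta u = 0 \ \text{in } \Omega, \qquad
  u = f \ \text{on } \Gamma_D, \qquad
  \frac{\partial u}{\partial n} = g \ \text{on } \Gamma_N,
  \qquad \partial\Omega = \Gamma_D \cup \Gamma_N .
\]
```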

Relevance:

30.00%

Publisher:

Abstract:

Over a century of fishery and oceanographic research conducted along the Atlantic coast of the United States has resulted in many publications using unofficial, and therefore unclear, geographic names for certain study areas. Such improper usage, besides being unscholarly, has led, and can lead, to identification problems for readers unfamiliar with the area. Even worse, the use of electronic databases and search engines can provide incomplete or confusing references when improper wording is used. The two terms used improperly most often are “Middle Atlantic Bight” and “South Atlantic Bight.” In general, the term “Middle Atlantic Bight” refers to an imprecise coastal area off the middle Atlantic states of New York, New Jersey, Delaware, Maryland, and Virginia, and the term “South Atlantic Bight” refers to the area off the southeastern states of North Carolina, South Carolina, Georgia, and Florida’s east coast.

Relevance:

30.00%

Publisher:

Abstract:

The design of pressurized water reactor reload cores is not only a formidable optimization problem but also, in many instances, a multiobjective problem. A genetic algorithm (GA) designed to perform true multiobjective optimization on such problems is described. Genetic algorithms simulate natural evolution. They differ from most optimization techniques by searching from one group of solutions to another, rather than from one solution to another. New solutions are generated by breeding from existing solutions. By selecting better (in a multiobjective sense) solutions as parents more often, the population can be evolved to reveal the trade-off surface between the competing objectives. An example illustrating the effectiveness of this novel method is presented and analyzed. It is found that in solving a reload design problem the algorithm evaluates a similar number of loading patterns to other state-of-the-art methods, but in the process reveals much more information about the nature of the problem being solved. The actual computational cost incurred depends on the core simulator used; the GA itself is code independent.
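
As a minimal sketch of the multiobjective selection idea described above (illustrative Python, not the authors' implementation; objectives are assumed to be minimised), parents can be chosen by a Pareto-dominance tournament:

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def select_parent(population, objectives):
    """Binary tournament: prefer the candidate that dominates the other."""
    i, j = random.sample(range(len(population)), 2)
    if dominates(objectives[i], objectives[j]):
        return population[i]
    if dominates(objectives[j], objectives[i]):
        return population[j]
    return population[random.choice((i, j))]  # incomparable solutions: pick either
```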

Relevance:

30.00%

Publisher:

Abstract:

We propose a novel information-theoretic approach for Bayesian optimization called Predictive Entropy Search (PES). At each iteration, PES selects the next evaluation point that maximizes the expected information gained with respect to the global maximum. PES codifies this intractable acquisition function in terms of the expected reduction in the differential entropy of the predictive distribution. This reformulation allows PES to obtain approximations that are both more accurate and efficient than other alternatives such as Entropy Search (ES). Furthermore, PES can easily perform a fully Bayesian treatment of the model hyperparameters while ES cannot. We evaluate PES in both synthetic and real-world applications, including optimization problems in machine learning, finance, biotechnology, and robotics. We show that the increased accuracy of PES leads to significant gains in optimization performance.
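
The acquisition function referred to above has the standard PES form: the expected reduction in the differential entropy of the predictive distribution obtained by conditioning on the location of the global maximum.

```latex
% PES acquisition at iteration n: H[.] is differential entropy, D_n the data
% observed so far, and x_* the (unknown) location of the global maximum.
\[
  \alpha_n(\mathbf{x}) =
    H\bigl[\, p(y \mid \mathcal{D}_n, \mathbf{x}) \,\bigr]
    \;-\; \mathbb{E}_{p(\mathbf{x}_* \mid \mathcal{D}_n)}
      \Bigl[\, H\bigl[\, p(y \mid \mathcal{D}_n, \mathbf{x}, \mathbf{x}_*) \,\bigr] \Bigr].
\]
```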

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we study the efficacy of genetic algorithms in the context of combinatorial optimization. In particular, we isolate the effects of cross-over, treated as the central component of genetic search. We show that for problems of nontrivial size and difficulty, the contribution of cross-over search is marginal, both when run in conjunction with mutation and selection and when run with selection alone, the reference point being the search procedure consisting of just mutation and selection. The latter can be viewed as another manifestation of the Metropolis process. Considering the high computational cost of maintaining a population to facilitate cross-over search, its marginal benefit renders genetic search inferior to its singleton-population counterpart, the Metropolis process, and by extension, simulated annealing. This is further compounded by the fact that many problems arising in practice may inherently require a large number of state transitions for a near-optimal solution to be found, making genetic search infeasible given the high cost of computing a single iteration in the enlarged state space.
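
The Metropolis process used as the reference point above can be sketched as follows (illustrative Python at a fixed temperature T; lowering T over time gives simulated annealing):

```python
import math
import random

def metropolis(initial, mutate, cost, T=1.0, iterations=10_000):
    """Single-solution search: mutate repeatedly, accept with the Metropolis rule."""
    current, current_cost = initial, cost(initial)
    for _ in range(iterations):
        candidate = mutate(current)
        candidate_cost = cost(candidate)
        delta = candidate_cost - current_cost
        # Always accept improvements; accept worsening moves with Boltzmann probability.
        if delta <= 0 or random.random() < math.exp(-delta / T):
            current, current_cost = candidate, candidate_cost
    return current, current_cost
```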

Relevance:

30.00%

Publisher:

Abstract:

We propose the development of a World Wide Web image search engine that crawls the web collecting information about the images it finds, computes the appropriate image decompositions and indices, and stores this extracted information for searches based on image content. Indexing and searching images need not require solving the image understanding problem. Instead, the general approach should be to provide an arsenal of image decompositions and discriminants that can be precomputed for images. At search time, users can select a weighted subset of these decompositions to be used for computing image similarity measurements. While this approach avoids the search-time-dependent problem of labeling what is important in images, it still poses several important problems that require further research in the area of query by image content. We briefly explore some of these problems as they pertain to shape.
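
The search-time combination step described above might look like the following sketch (hypothetical Python; the feature names and the Euclidean per-decomposition distance are our assumptions, not the proposed system's API):

```python
def image_distance(query_features, candidate_features, weights):
    """Weighted sum of per-decomposition distances (lower means more similar)."""
    total = 0.0
    for name, w in weights.items():  # e.g. {"color_hist": 0.7, "wavelet": 0.3}
        q = query_features[name]
        c = candidate_features[name]
        total += w * sum((a - b) ** 2 for a, b in zip(q, c)) ** 0.5  # Euclidean distance
    return total
```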

Relevance:

30.00%

Publisher:

Abstract:

This thesis elaborates on the problem of preprocessing a large graph so that single-pair shortest-path queries can be answered quickly at runtime. Computing shortest paths is a well-studied problem, but exact algorithms do not scale well to real-world huge graphs in applications that require very short response time. The focus is on approximate methods for distance estimation, in particular on landmark-based distance indexing. This approach involves choosing some nodes as landmarks and computing offline, for each node in the graph, its embedding, i.e., the vector of its distances from all the landmarks. At runtime, when the distance between a pair of nodes is queried, it can be quickly estimated by combining the embeddings of the two nodes. Choosing optimal landmarks is shown to be hard, and thus heuristic solutions are employed. Given a budget of memory for the index, which translates directly into a budget of landmarks, different landmark selection strategies can yield dramatically different results in terms of accuracy. A number of simple methods that scale well to large graphs are therefore developed and experimentally compared. The simplest methods choose central nodes of the graph, while the more elaborate ones select central nodes that are also far away from one another. The efficiency of the techniques presented in this thesis is tested experimentally using five different real-world graphs with millions of edges; for a given accuracy, they require as much as 250 times less space than the baseline approach of selecting landmarks at random. Finally, they are applied to two important problems arising naturally in large-scale graphs, namely social search and community detection.
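
A minimal sketch of the landmark-based estimate described above (illustrative Python): each node stores its vector of distances to the k landmarks, and the triangle inequality |d(u,L) - d(v,L)| <= d(u,v) <= d(u,L) + d(v,L) yields lower and upper bounds that can be combined at query time.

```python
def estimate_distance(emb_u, emb_v):
    """emb_u[i] = d(u, L_i) and emb_v[i] = d(v, L_i) for landmarks L_1..L_k,
    precomputed offline. Returns (lower_bound, upper_bound) on d(u, v)
    obtained from the triangle inequality over all landmarks."""
    upper = min(du + dv for du, dv in zip(emb_u, emb_v))
    lower = max(abs(du - dv) for du, dv in zip(emb_u, emb_v))
    return lower, upper
```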

Relevance:

30.00%

Publisher:

Abstract:

Much work has been done on learning from failure in search to boost the solving of combinatorial problems, such as clause learning and clause weighting in Boolean satisfiability (SAT), nogood and explanation-based learning, and constraint weighting in constraint satisfaction problems (CSPs). Many of the top solvers in SAT use clause learning to good effect. A similar approach (nogood learning) has not had as large an impact in CSPs. Constraint weighting is a less fine-grained approach where the information learnt gives an approximation as to which variables may be the sources of greatest contention. In this work we present two methods for learning from search using restarts, in order to identify these critical variables prior to solving. Both methods are based on the conflict-directed heuristic (weighted-degree heuristic) introduced by Boussemart et al. and are aimed at producing a better-informed version of the heuristic by gathering information through restarting and probing of the search space prior to solving, while minimizing the overhead of these restarts. We further examine the impact of different sampling strategies and different measurements of contention, and assess different restarting strategies for the heuristic. Finally, two applications for constraint weighting are considered in detail: dynamic constraint satisfaction problems and unary resource scheduling problems.
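
For context, the weighted-degree (dom/wdeg) idea underlying both methods can be sketched as follows (illustrative Python; the data structures are our assumptions, not the paper's solver): every constraint carries a weight that is incremented whenever it causes a domain wipe-out, and the next variable to branch on is the one minimising domain size divided by accumulated constraint weight.

```python
def on_wipeout(constraint, weights):
    """Called when propagating `constraint` empties some variable's domain."""
    weights[constraint] = weights.get(constraint, 1) + 1

def next_variable(unassigned, domains, constraints_of, weights):
    """dom/wdeg ordering: smallest ratio of domain size to weighted degree."""
    def wdeg(var):
        return sum(weights.get(c, 1) for c in constraints_of[var])
    return min(unassigned, key=lambda v: len(domains[v]) / wdeg(v))
```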

Relevance:

30.00%

Publisher:

Abstract:

A common challenge that users of academic databases face is making sense of their query outputs for knowledge discovery. This is exacerbated by the size and growth of modern databases. PubMed, a central index of biomedical literature, contains over 25 million citations and can output search results containing hundreds of thousands of citations. Under these conditions, efficient knowledge discovery requires a different data structure than a chronological list of articles. It requires a method of conveying what the important ideas are, where they are located, and how they are connected; a method of allowing users to see the underlying topical structure of their search. This paper presents VizMaps, a PubMed search interface that addresses some of these problems. Given search terms, our main backend pipeline extracts relevant words from titles and abstracts, and clusters them into discovered topics using Bayesian topic models, in particular Latent Dirichlet Allocation (LDA). It then outputs a visual, navigable map of the query results.
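
A minimal sketch of the topic-discovery step described above, using scikit-learn's LDA on the documents returned by a query (illustrative pipeline; the parameter values are our assumptions, and VizMaps' actual backend is not reproduced here):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def discover_topics(documents, n_topics=10, n_top_words=8):
    """documents: list of title+abstract strings; returns the top words per topic."""
    vectorizer = CountVectorizer(stop_words="english", max_features=5000)
    counts = vectorizer.fit_transform(documents)  # document-term count matrix
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(counts)
    vocab = vectorizer.get_feature_names_out()
    return [[vocab[i] for i in topic.argsort()[-n_top_words:][::-1]]
            for topic in lda.components_]
```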

Relevance:

30.00%

Publisher:

Abstract:

A new search-space-updating technique for genetic algorithms is proposed for continuous optimisation problems. Rather than gradually reducing the search space during the evolution process at a fixed reduction rate set a priori, the upper and lower boundaries for each variable in the objective function are dynamically adjusted based on its distribution statistics. To test its effectiveness, the technique is applied to a number of benchmark optimisation problems and compared with three other techniques, namely the genetic algorithm with parameter space size adjustment (GAPSSA) technique [A.B. Djurišic, Elite genetic algorithms with adaptive mutations for solving continuous optimization problems – application to modeling of the optical constants of solids, Optics Communications 151 (1998) 147–159], the successive zooming genetic algorithm (SZGA) [Y. Kwon, S. Kwon, S. Jin, J. Kim, Convergence enhanced genetic algorithm with successive zooming method for solving continuous optimization problems, Computers and Structures 81 (2003) 1715–1725] and a simple GA. The tests show that for well-posed problems, existing search-space-updating techniques perform well in terms of convergence speed and solution precision; however, for some ill-posed problems these techniques are statistically inferior to a simple GA. All the tests show that the proposed new search-space-updating technique is statistically superior to its counterparts.
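
A minimal sketch of the bound-updating idea described above (illustrative Python; the choice of mean plus or minus k standard deviations is our assumption, standing in for the paper's distribution statistics):

```python
import statistics

def update_bounds(population, original_bounds, k=3.0):
    """population: list of real-valued vectors; original_bounds: [(lo, hi), ...].
    Re-centre each variable's bounds on its population statistics, clipped to
    the original search space."""
    new_bounds = []
    for j, (lo, hi) in enumerate(original_bounds):
        values = [individual[j] for individual in population]
        mu = statistics.mean(values)
        sigma = statistics.pstdev(values)
        new_bounds.append((max(lo, mu - k * sigma), min(hi, mu + k * sigma)))
    return new_bounds
```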

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we present a random iterative graph-based hyper-heuristic to produce a collection of heuristic sequences that construct solutions of different quality. These heuristic sequences can be seen as dynamic hybridisations of different graph colouring heuristics that construct solutions step by step. Based on these sequences, we statistically analyse the way in which graph colouring heuristics are automatically hybridised. This, to our knowledge, represents a new direction in hyper-heuristic research. It is observed that spending the search effort on hybridising Largest Weighted Degree with Saturation Degree at the early stage of solution construction tends to generate high-quality solutions. Based on these observations, an iterative hybrid approach is developed to adaptively hybridise these two graph colouring heuristics at different stages of solution construction. The overall aim here is to automate the heuristic design process, which draws upon an emerging research theme on developing computer methods to design and adapt heuristics automatically. Experimental results on benchmark exam timetabling and graph colouring problems demonstrate the effectiveness and generality of this adaptive hybrid approach compared with previous methods on automatically generating and adapting heuristics. Indeed, we also show that the approach is competitive with state-of-the-art human-produced methods.
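
The heuristic-sequence idea described above can be sketched as follows (illustrative Python for plain graph colouring; the scoring functions are standard textbook definitions of the two low-level heuristics, not the paper's code): a sequence over "LWD" and "SD" dictates which heuristic picks the next vertex, which is then coloured greedily.

```python
def saturation_degree(v, colouring, graph):
    """Number of distinct colours already used by v's neighbours."""
    return len({colouring[n] for n in graph[v] if n in colouring})

def largest_weighted_degree(v, weights, graph):
    """Sum of the weights of edges incident to v (default weight 1)."""
    return sum(weights.get((v, n), 1) for n in graph[v])

def construct(graph, weights, sequence):
    """graph: {vertex: neighbours}; sequence: e.g. ["LWD", "LWD", "SD", ...]."""
    colouring = {}
    for heuristic in sequence:
        uncoloured = [v for v in graph if v not in colouring]
        if not uncoloured:
            break
        if heuristic == "LWD":
            v = max(uncoloured, key=lambda u: largest_weighted_degree(u, weights, graph))
        else:  # "SD"
            v = max(uncoloured, key=lambda u: saturation_degree(u, colouring, graph))
        used = {colouring[n] for n in graph[v] if n in colouring}
        colouring[v] = next(c for c in range(len(graph) + 1) if c not in used)
    return colouring
```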

Relevance:

30.00%

Publisher:

Abstract:

We present experimental results on benchmark problems in 3D cubic lattice structures with the Miyazawa-Jernigan energy function for two local search procedures that utilise the pull-move set: (i) population-based local search (PLS), which traverses the energy landscape with greedy steps towards (potential) local minima followed by upward steps up to a certain level of the objective function; (ii) simulated annealing with a logarithmic cooling schedule (LSA). The parameter settings for PLS are derived from short LSA runs executed in pre-processing, and the procedure utilises tabu lists generated for each member of the population. In terms of the total number of energy function evaluations, both methods perform equally well; however, PLS has the potential of being parallelised, with an expected speed-up in the region of the population size. Furthermore, both methods require a significantly smaller number of function evaluations when compared to Monte Carlo simulations with kink-jump moves.
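
The logarithmic cooling schedule referred to above has the standard form shown below (c > 0 is a problem-dependent constant and k the iteration counter; the symbols are ours):

```latex
% Logarithmic cooling schedule used by LSA-type simulated annealing.
\[
  T_k = \frac{c}{\ln(k + 2)}, \qquad k = 0, 1, 2, \dots
\]
```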

Relevance:

30.00%

Publisher:

Abstract:

A technique for automatic exploration of the genetic search region through fuzzy coding (Sharma and Irwin, 2003) has been proposed. Fuzzy coding (FC) provides the value of a variable on the basis of the optimum number of selected fuzzy sets and their effectiveness in terms of degree-of-membership. It is an indirect encoding method and has been shown to perform better than other conventional binary, Gray and floating-point encoding methods. However, the static range of the membership functions is a major problem in fuzzy coding, resulting in longer times to arrive at an optimum solution in large or complicated search spaces. This paper proposes a new algorithm, called fuzzy coding with a dynamic range (FCDR), which dynamically allocates the range of the variables to evolve an effective search region, thereby achieving faster convergence. Results are presented for two benchmark optimisation problems, and also for a case study involving neural identification of a highly non-linear pH neutralisation process from experimental data. It is shown that dynamic exploration of the genetic search region is effective for parameter optimisation in problems where the search space is complicated.
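
One way the decoding step of fuzzy coding can be pictured is the following sketch (hypothetical Python; the membership-weighted combination of set centres is our illustrative assumption, not the exact rule of FC or FCDR):

```python
def decode_variable(selected_sets, memberships, centres):
    """selected_sets: indices of the fuzzy sets chosen for one variable;
    memberships: their degrees of membership; centres: centre of each fuzzy set.
    Returns the decoded (crisp) value as a membership-weighted average."""
    numerator = sum(m * centres[i] for i, m in zip(selected_sets, memberships))
    denominator = sum(memberships)
    return numerator / denominator if denominator else centres[selected_sets[0]]
```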

Relevance:

30.00%

Publisher:

Abstract:

Early-onset child conduct problems are common and costly. A large number of studies and some previous reviews have focused on behavioural and cognitive-behavioural group-based parenting interventions, but methodological limitations are commonplace and evidence for the effectiveness and cost-effectiveness of these programmes has been unclear. The objective of this review was to assess the effectiveness and cost-effectiveness of behavioural and cognitive-behavioural group-based parenting programmes for improving child conduct problems, parental mental health and parenting skills.

We searched the following databases between 23 and 31 January 2011: CENTRAL (2011, Issue 1), MEDLINE (1950 to current), EMBASE (1980 to current), CINAHL (1982 to current), PsycINFO (1872 to current), Social Science Citation Index (1956 to current), ASSIA (1987 to current), ERIC (1966 to current), Sociological Abstracts (1963 to current), Academic Search Premier (1970 to current), Econlit (1969 to current), PEDE (1980 to current), Dissertations and Theses Abstracts (1980 to present), NHS EED (searched 31 January 2011), HEED (searched 31 January 2011), DARE (searched 31 January 2011), HTA (searched 31 January 2011), mRCT (searched 29 January 2011). We searched the following parent training websites on 31 January 2011: Triple P Library, Incredible Years Library and Parent Management Training. We also searched the reference lists of studies and reviews.

We included studies if: (1) they involved randomised controlled trials (RCTs) or quasi-randomised controlled trials of behavioural and cognitive-behavioural group-based parenting interventions for parents of children aged 3 to 12 years with conduct problems, and (2) they incorporated an intervention group versus a waiting list, no treatment or standard treatment control group. We only included studies that used at least one standardised instrument to measure child conduct problems. Two authors independently assessed the risk of bias in the trials and the methodological quality of health economic studies. Two authors also independently extracted data. We contacted study authors for additional information.

This review includes 13 trials (10 RCTs and three quasi-randomised trials), as well as two economic evaluations based on two of the trials. Overall, there were 1078 participants (646 in the intervention group; 432 in the control group). The results indicate that parent training produced a statistically significant reduction in child conduct problems, whether assessed by parents (standardised mean difference (SMD) -0.53; 95% confidence interval (CI) -0.72 to -0.34) or independently assessed (SMD -0.44; 95% CI -0.77 to -0.11). The intervention led to statistically significant improvements in parental mental health (SMD -0.36; 95% CI -0.52 to -0.20) and positive parenting skills, based on both parent reports (SMD -0.53; 95% CI -0.90 to -0.16) and independent reports (SMD -0.47; 95% CI -0.65 to -0.29). Parent training also produced a statistically significant reduction in negative or harsh parenting practices according to both parent reports (SMD -0.77; 95% CI -0.96 to -0.59) and independent assessments (SMD -0.42; 95% CI -0.67 to -0.16). Moreover, the intervention demonstrated evidence of cost-effectiveness: when compared to a waiting list control group, there was a cost of approximately $2500 (GBP 1712; EUR 2217) per family to bring the average child with clinical levels of conduct problems into the non-clinical range.

Behavioural and cognitive-behavioural group-based parenting interventions are effective and cost-effective for improving child conduct problems, parental mental health and parenting skills in the short term. The cost of programme delivery is modest when compared with the long-term health, social, educational and legal costs associated with childhood conduct problems. Further research is needed on the long-term assessment of outcomes.