977 results for Search problems
Abstract:
We propose a stylized model of a problem-solving organization whose internal communication structure is given by a fixed network. Problems arrive randomly anywhere in this network and must find their way to their respective specialized solvers by relying on local information alone. The organization handles multiple problems simultaneously. For this reason, the process may be subject to congestion. We provide a characterization of the threshold of collapse of the network and of the stock of floating problems (or average delay) that prevails below that threshold. We build upon this characterization to address a design problem: the determination of what kind of network architecture optimizes performance for any given problem arrival rate. We conclude that, for low arrival rates, the optimal network is very polarized (i.e. star-like or centralized), whereas it is largely homogeneous (or decentralized) for high arrival rates. We also show that, if an auxiliary assumption holds, the transition between these two opposite structures is sharp and they are the only ones to ever qualify as optimal.
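As a rough illustration of the dynamics described above, the following sketch simulates problems arriving on a fixed network and hopping toward their solvers, comparing a centralized (star) and a homogeneous (ring) architecture. It is a toy rendering under assumed simplifications (greedy routing toward the target and no processing-capacity limit, so congestion itself is not modelled), not the authors' model; the use of `networkx` and the parameter values are illustrative choices.

```python
import random
import networkx as nx  # assumed available; any graph library would do

def average_stock(G, arrival_rate, steps=5000, seed=0):
    """Toy dynamics: each step a new problem appears with probability
    `arrival_rate` at a random node with a random target solver; every
    floating problem hops one edge toward its target and disappears on
    arrival. Returns the average number of floating problems."""
    rng = random.Random(seed)
    nodes, floating, total = list(G.nodes), [], 0
    for _ in range(steps):
        if rng.random() < arrival_rate:
            floating.append((rng.choice(nodes), rng.choice(nodes)))
        moved = []
        for node, target in floating:
            if node == target:
                continue  # problem solved, drop it
            nxt = min(G.neighbors(node),
                      key=lambda v: nx.shortest_path_length(G, v, target))
            moved.append((nxt, target))
        floating = moved
        total += len(floating)
    return total / steps

star = nx.star_graph(20)    # polarized / centralized architecture
ring = nx.cycle_graph(21)   # homogeneous / decentralized architecture
print(average_stock(star, 0.2), average_stock(ring, 0.2))
```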
Abstract:
The set covering problem is an NP-hard combinatorial optimization problem that arises in applications ranging from crew scheduling in airlines to driver scheduling in public mass transport. In this paper we analyze search space characteristics of a widely used set of benchmark instances through an analysis of the fitness-distance correlation. This analysis shows that there exist several classes of set covering instances that exhibit largely different behavior. For instances with high fitness-distance correlation, we propose new ways of generating core problems and analyze the performance of algorithms exploiting these core problems.
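For concreteness, a fitness-distance correlation of the kind analysed above can be computed as the Pearson correlation between solution quality and distance to a reference (e.g. best-known) solution. The sketch below uses set representations and symmetric-difference distance as illustrative assumptions; the exact fitness and distance definitions used in the paper's benchmark study may differ.

```python
from statistics import mean, pstdev

def fitness_distance_correlation(solutions, fitnesses, best):
    """Pearson correlation between fitness and distance to a reference
    solution. Each solution is a frozenset of selected columns (an assumed
    encoding); distance is the size of the symmetric difference."""
    dists = [len(s ^ best) for s in solutions]
    mf, md = mean(fitnesses), mean(dists)
    cov = mean((f - mf) * (d - md) for f, d in zip(fitnesses, dists))
    return cov / (pstdev(fitnesses) * pstdev(dists))

# toy usage with three candidate solutions and their costs
best = frozenset({0, 1, 2})
sols = [frozenset({0, 1, 2}), frozenset({0, 1, 3}), frozenset({4, 5, 6})]
print(fitness_distance_correlation(sols, [3, 4, 7], best))
```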
Abstract:
We introduce a width parameter that bounds the complexity of classical planning problems and domains, along with a simple but effective blind-search procedure that runs in time that is exponential in the problem width. We show that many benchmark domains have a bounded and small width provided that goals are restricted to single atoms, and hence that such problems are provably solvable in low polynomial time. We then focus on the practical value of these ideas on the existing benchmarks, which feature conjunctive goals. We show that the blind-search procedure can be used both for serializing the goal into subgoals and for solving the resulting problems, resulting in a ‘blind’ planner that competes well with a best-first search planner guided by state-of-the-art heuristics. In addition, ideas like helpful actions and landmarks can be integrated as well, producing a planner with state-of-the-art performance.
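A minimal sketch of a width-1 blind search in the spirit of the procedure described above is given below: a breadth-first search that prunes every state which does not make at least one atom true for the first time. The state and successor interfaces are assumptions, and a full width-k procedure would track tuples of up to k atoms rather than single atoms.

```python
from collections import deque

def width1_search(init_state, goal, succ):
    """Blind search with novelty-1 pruning. `init_state` is a frozenset of
    atoms, `goal` a set of atoms, and `succ(state)` yields (action, next_state)
    pairs -- all assumed interfaces. Returns a plan (list of actions) or None."""
    seen_atoms = set(init_state)
    queue = deque([(init_state, [])])
    while queue:
        state, plan = queue.popleft()
        if goal <= state:
            return plan
        for action, nxt in succ(state):
            new_atoms = nxt - seen_atoms
            if not new_atoms:
                continue            # no atom seen for the first time: prune
            seen_atoms |= new_atoms
            queue.append((nxt, plan + [action]))
    return None                     # no plan found within width 1
```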
Abstract:
This master’s thesis surveys the literature on how evolutionary algorithms are used to solve different search and optimisation problems in the area of software engineering. Evolutionary algorithms are methods that imitate the process of natural evolution. An artificial evolution process evaluates the fitness of each individual, where the individuals are candidate solutions. The next population of candidate solutions is formed from the good properties of the current population by applying different mutation and crossover operations. Different kinds of evolutionary algorithm applications related to software engineering were identified in the literature, classified and presented, and the necessary basics of evolutionary algorithms were also presented. It was concluded that the majority of evolutionary algorithm applications related to software engineering concern software design or testing. For example, there were applications for classifying software production data, project scheduling, static task scheduling in parallel computing, allocating modules to subsystems, N-version programming, test data generation and generating an integration test order. Many applications were experimental rather than ready for real production use. There were also some Computer Aided Software Engineering tools based on evolutionary algorithms.
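The evolutionary loop summarized above (fitness evaluation, selection, crossover, mutation) can be written in a few lines. The sketch below is a generic, illustrative genetic algorithm over bit strings, not any specific application from the thesis; the objective function is a stand-in.

```python
import random

def evolve(fitness, n_bits=32, pop_size=50, generations=100,
           mut_rate=0.02, seed=0):
    """Bare-bones genetic algorithm: evaluate fitness, keep the fitter half
    as parents, recombine with one-point crossover, apply bit-flip mutation,
    and repeat for a fixed number of generations."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]                                 # one-point crossover
            child = [bit ^ (rng.random() < mut_rate) for bit in child]  # bit-flip mutation
            children.append(child)
        pop = children
    return max(pop, key=fitness)

# toy objective: maximise the number of ones (a stand-in for, e.g., test coverage)
best = evolve(lambda bits: sum(bits))
```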
Abstract:
This thesis studies the problems a software architect faces in his work and the reasons behind them. The purpose of the study is to search for and identify potential factors causing problems in system integration and software engineering. Of special interest are the non-technical factors causing different kinds of problems. The thesis was carried out by interviewing professionals who took part in an e-commerce project at a corporation. The interviewed professionals consisted of architects from technical implementation projects, the corporation's architect team leader, various kinds of project managers and a CRM manager. A specific list of themes was used to guide the interviews. The recorded interviews were transcribed and then classified using the ATLAS.ti software. The basics of e-commerce, software engineering and system integration are also described. Differences between e-commerce, e-business and traditional business are presented, as are the basic types of e-commerce. Regarding software engineering, the software life span, general problems of software engineering and software design are covered. In addition, general problems of system integration and the special requirements set by e-commerce are described in the thesis. The thesis ends with a part in which the problems found in the study are described, along with some areas of software engineering that could be developed so that the same kinds of problems could be avoided in the future.
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in searchers’ results. Such search interfaces provide web users with online access to myriads of databases on the Web. In order to obtain some information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results, a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterization of the deep Web, finding and classifying deep web resources, and querying web databases.
Characterizing the deep Web: Though the term deep Web was coined in 2000, which is sufficiently long ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that the surveys of the deep Web existing so far are predominantly based on studies of deep web sites in English. One can then expect that findings from these surveys may be biased, especially owing to a steady increase in non-English web content. In this way, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from the national segment of the Web.
Finding deep web resources: The deep Web has been growing at a very fast pace. It has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches have assumed that search interfaces to the web databases of interest are already discovered and known to query systems. However, such assumptions do not hold, mostly because of the large scale of the deep Web: indeed, for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web existing so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms.
Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user. This is all the more so as the interfaces of conventional search engines are also web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and infeasible in the case of complex queries, but such queries are essential for many web searches, especially in the area of e-commerce. In this way, the automation of querying and retrieving data behind search interfaces is desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. Besides, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
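As a minimal illustration of the last task (programmatically filling out a search form and collecting results), the sketch below submits one query term to a hypothetical form endpoint and extracts the links from the result page. The URL, field name, and the use of a GET request are assumptions for the example; this is not the thesis' I-Crawler, data model, or form query language.

```python
import requests                      # assumed available (third-party HTTP library)
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect hyperlinks from a result page returned by a search form."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def query_search_form(action_url, field_name, term):
    """Fill out one text field of a web search form and fetch the result
    page -- the simplest instance of querying a database hidden behind a
    search interface. `action_url` and `field_name` are hypothetical and
    would normally be extracted from the form's HTML."""
    resp = requests.get(action_url, params={field_name: term}, timeout=10)
    parser = LinkExtractor()
    parser.feed(resp.text)
    return parser.links
```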
Abstract:
This dissertation explores the use of internal and external sources of knowledge in modern innovation processes. It builds on a framework that combines theories such as a behavioural theory of the firm, the evolutionary theory of economic change, and modern approaches to strategic management. It follows the recent increase in innovation research focusing on the firm-level examination of innovative activities instead of traditional industry-level determinants. The innovation process is seen as a problem- and slack-driven search process, which can take several directions in terms of organizational boundaries in the pursuit of new knowledge and other resources. It thus draws on recent models of technological change, according to which firms nowadays should build their innovative activities on both internal and external sources of innovation rather than relying solely on internal resources. Four different research questions are addressed, all of which are empirically investigated via a rich dataset covering Finnish innovators collected by Statistics Finland. Firstly, the study examines how the nature of problems shapes the direction of any search for new knowledge. In general it demonstrates that the nature of the problem does affect the direction of the search, although under resource constraints firms tend to use external rather than internal sources of knowledge. At the same time, it shows that those firms that are constrained in terms of finance seem to search both internally and externally. Secondly, the dissertation investigates the relationships between different kinds of internal and external sources of knowledge in an attempt to find out where firms should direct their search in order to exploit the potential of a distributed innovation process. The concept of complementarities is applied in this context. The third research question concerns how the use of external knowledge sources – openness to external knowledge – influences the financial performance of firms. Given the many advantages of openness presented in the current literature, the focus is on how it shapes profitability. The results reveal a curvilinear relationship between profitability and openness (taking an inverted U-shape), the implication being that it pays to be open up to a certain point, but being too open to external sources may be detrimental to financial performance. Finally, the dissertation addresses some challenges in CIS-based innovation research that have received relatively little attention in prior studies. The general aim is to underline the fact that comprehensive understanding of the complex process of technological change requires the constant development of methodological approaches (in terms of data and measures, for example). All the empirical analyses included in the dissertation are based on the Finnish CIS (Finnish Innovation Survey 1998-2000).
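The inverted-U finding mentioned above is typically tested by adding a squared term to a regression of profitability on openness. The sketch below is a hedged illustration of that check with placeholder variables; it is not the dissertation's actual estimation on the CIS data.

```python
import numpy as np

def inverted_u_test(openness, profitability):
    """Illustrative check of a curvilinear (inverted-U) relationship:
    fit profitability = b0 + b1*openness + b2*openness^2 and report the
    quadratic term and the turning point. The variable names are
    placeholders, not the dissertation's actual measures."""
    b2, b1, b0 = np.polyfit(openness, profitability, deg=2)
    turning_point = -b1 / (2 * b2) if b2 != 0 else float("nan")
    return {"quadratic_term": b2,          # negative => inverted U
            "turning_point": turning_point}
```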
Abstract:
This thesis introduces the Salmon Algorithm, a search meta-heuristic that can be used for a variety of combinatorial optimization problems. The algorithm is loosely based on the path-finding behaviour of salmon swimming upstream to spawn. It has a number of tunable parameters, so experiments were conducted to find the optimal parameter settings for different search spaces. The algorithm was tested on one instance of the Traveling Salesman Problem and found to have superior performance to an Ant Colony Algorithm and a Genetic Algorithm. It was then tested on three coding theory problems: optimal edit codes, optimal Hamming distance codes, and optimal covering codes. The algorithm produced improvements on the best known values for five of the six edit-code test cases. It matched the best known results on four of the seven Hamming codes as well as on all three covering codes. The results suggest the Salmon Algorithm is competitive with established guided random search techniques, and may be superior in some search spaces.
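For one of the coding-theory test problems above, the objective being optimized is easy to state: maximize the minimum pairwise Hamming distance among a fixed number of codewords. The helper below evaluates that objective for a candidate code; the search strategy wrapped around it (the Salmon Algorithm in the thesis) is not shown, since its mechanics are not described in the abstract.

```python
from itertools import combinations

def min_hamming_distance(code):
    """Smallest pairwise Hamming distance among the codewords of a candidate
    code. A search heuristic tries to maximise this value for a fixed number
    of codewords of fixed length."""
    return min(sum(a != b for a, b in zip(u, v))
               for u, v in combinations(code, 2))

# toy usage with three 7-bit codewords
print(min_hamming_distance(["0000000", "1110100", "0111010"]))
```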
Abstract:
Elementary teachers are expected to prepare students to work efficiently with others, solve complex problems and self-regulate their own learning. Considering the importance of a solid educational foundation in the early years, students would benefit if elementary teachers engaged in scholarly teaching. The purpose of this study was to investigate Boyer’s (1990) four dimensions of scholarship, application, integration, teaching and discovery, to better understand if there is scholarly teaching in elementary education. Four professional teaching documents were analyzed using a hermeneutic orientation. A deductive analysis suggests that we do have scholarly teaching in elementary education, with strong evidence that elementary teachers are scholars of application and integration. An inductive analysis of latent and manifest content suggests that underlying humanistic values run deeply through elementary education driving current curricular, instructional and pedagogical practices.
Abstract:
Thesis completed under a joint supervision (cotutelle) agreement with the Université d'Avignon.
Abstract:
The vehicle routing problem (VRP), introduced by Dantzig and Ramser in 1959, has become one of the most widely studied problems in operations research, owing to its methodological interest and its practical applications in many fields such as transportation, logistics, telecommunications and production. The general objective of the VRP is to optimize the use of transportation resources in order to meet customers' needs while respecting the constraints arising from the requirements of the application context. Real-world applications of the VRP must take a wide variety of constraints into account, and the more numerous these constraints are, the harder the problem is to solve. VRPs that account for the full range of constraints encountered in practice, and thus come close to real applications, form the class of 'rich' vehicle routing problems. Solving these problems efficiently poses considerable challenges for the VRP research community. This thesis, composed of two parts, explores certain extensions of the VRP towards these rich problems. The first part of the thesis deals with the periodic VRP with time-window constraints (PVRPTW). This is an extension of the classical VRP with time windows (VRPTW) in that it considers a planning horizon of several days during which customers generally do not need to be served every day, but rather can be visited according to a certain number of possible combinations of delivery days. This generalization extends the range of applications of the problem to various commercial distribution activities, such as waste collection, street sweeping, food distribution, mail delivery, etc. The main scientific contribution of the first part of this thesis is the development of a hybrid metaheuristic in which a set of local search procedures and neighbourhood-based metaheuristics cooperate with a genetic algorithm in order to improve solution quality and promote population diversity. The results obtained show that the proposed method performs very well and yields new best solutions for some large instances of the problem. The second part of this study presents, models and solves two rich vehicle routing problems that extend the VRPTW in that they include time-dependent pickup and delivery demands with temporal synchronization restrictions. These problems are known respectively as the Time-dependent Multi-zone Multi-Trip Vehicle Routing Problem with Time Windows (TMZT-VRPTW) and the Multi-zone Multi-Trip Pickup and Delivery Problem with Time Windows and Synchronization (MZT-PDTWS). Both problems arise in planning the operations of two-tier city logistics systems. Their difficulty lies in handling two intertwined sets of decisions: the vehicle routing component, which determines the sequences of customers visited by each vehicle, and the scheduling component, which coordinates vehicle arrivals under the temporal synchronization restrictions. Previously, these questions had been addressed separately. Combining these two types of decisions in a single mathematical formulation and a single solution method should therefore yield better results than considering them separately. In this study, we propose heuristic solutions that handle both types of decisions simultaneously, in a complete and efficient manner. Experimental results confirm the performance of the proposed method when compared with other methods presented in the literature. Indeed, the developed method yields solutions requiring fewer vehicles and lower travel costs to perform the same amount of work efficiently. In the context of city logistics systems, our results imply a reduced presence of vehicles on city streets and, consequently, a smaller negative impact on congestion and the environment.
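A building block common to all the VRPTW variants discussed above is checking whether a route respects its customers' time windows. The sketch below shows that check under assumed data structures (customer dicts and a travel-time function); it is illustrative and not the thesis' formulation.

```python
def route_feasible(route, travel_time, depot=0):
    """Time-window feasibility check for a single route. `route` is a list of
    customer dicts with 'id', 'ready', 'due' and 'service' fields, and
    `travel_time(i, j)` returns the travel time between two locations --
    assumed interfaces, not the thesis' actual data model."""
    time, prev = 0.0, depot
    for c in route:
        time += travel_time(prev, c["id"])
        time = max(time, c["ready"])     # wait if the vehicle arrives early
        if time > c["due"]:
            return False                 # time window violated
        time += c["service"]
        prev = c["id"]
    return True
```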
Abstract:
The assembly job shop scheduling problem (AJSP) is one of the most complicated combinatorial optimization problems; it involves simultaneously scheduling the processing and assembly operations of complex structured products. The problem becomes even more complicated if a combination of two or more optimization criteria is considered. This thesis addresses an assembly job shop scheduling problem with multiple objectives, namely the simultaneous minimization of makespan and total tardiness. Two approaches, a weighted approach and a Pareto approach, are used for solving the problem. However, it is quite difficult to achieve an optimal solution to this problem with traditional optimization approaches owing to the high computational complexity. Two metaheuristic techniques, genetic algorithms and tabu search, are therefore investigated in this thesis for solving the multi-objective assembly job shop scheduling problem (MOAJSP). Three algorithms based on these two metaheuristic techniques are proposed, covering both the weighted and the Pareto approach. A new pairing mechanism is developed for the crossover operation in the genetic algorithm, which leads to improved solutions and faster convergence. The performance of the proposed algorithms is evaluated through a set of test problems and the results are reported. The results reveal that the proposed algorithms based on the weighted approach are feasible and effective for solving MOAJSP instances according to the weight assigned to each objective criterion, and that the proposed algorithms based on the Pareto approach are capable of producing a number of good Pareto-optimal scheduling plans for MOAJSP instances.
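The two solution notions used above can be made concrete in a few lines: a weighted (scalarized) objective for the weighted approach, and Pareto dominance plus non-dominated filtering for the Pareto approach. The weight value and the encoding of a schedule as a (makespan, total tardiness) pair are illustrative assumptions.

```python
def dominates(a, b):
    """Pareto dominance for minimisation objectives such as (makespan, total
    tardiness): a dominates b if it is no worse on every objective and
    strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def weighted_objective(makespan, tardiness, w=0.5):
    """Scalarised objective for the weighted approach: a convex combination
    of the two criteria (the weight w is a user choice, not a thesis value)."""
    return w * makespan + (1 - w) * tardiness

def pareto_front(solutions):
    """Keep the non-dominated schedules from a set of (makespan, tardiness) pairs."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o != s)]
```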
Abstract:
Empirical orthogonal functions (EOFs) are widely used in climate research to identify dominant patterns of variability and to reduce the dimensionality of climate data. EOFs, however, can be difficult to interpret. Rotated empirical orthogonal functions (REOFs) have been proposed as more physical entities with simpler patterns than EOFs. This study presents a new approach for finding climate patterns with simple structures that overcomes the problems encountered with rotation. The method achieves simplicity of the patterns by using the main properties of EOFs and REOFs simultaneously. Orthogonal patterns that maximise variance subject to a constraint that induces a form of simplicity are found. The simplified empirical orthogonal function (SEOF) patterns, being more 'local', are constrained to have zero loadings outside the main centre of action. The method is applied to winter Northern Hemisphere (NH) monthly mean sea level pressure (SLP) reanalyses over the period 1948-2000. The 'simplified' leading patterns of variability are identified and compared to the leading patterns obtained from EOFs and REOFs. Copyright (C) 2005 Royal Meteorological Society.
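For reference, ordinary EOFs of the kind this method builds on can be obtained from the singular value decomposition of the anomaly field, as sketched below; the simplicity constraint that yields the SEOF patterns is not reproduced here.

```python
import numpy as np

def eof_analysis(field, n_modes=3):
    """Standard EOF decomposition: remove the time mean, take the SVD of the
    (time x space) anomaly matrix, and return the leading spatial patterns,
    principal-component time series and explained variance fractions."""
    anomalies = field - field.mean(axis=0)          # field: time x grid-points
    U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    eofs = Vt[:n_modes]                             # spatial patterns
    pcs = U[:, :n_modes] * s[:n_modes]              # time series
    return eofs, pcs, explained[:n_modes]
```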
Abstract:
Objective: Community-based care for mental disorders places considerable burden on families and carers. Measuring their experiences has become a priority, but there is no consensus on appropriate instruments. We aimed to review instruments carers consider relevant to their needs and assess the evidence for their use. Method: A literature search was conducted for outcome measures used with mental health carers. Identified instruments were assessed for their relevance to the outcomes identified by carers and for their psychometric properties. Results: Three hundred and ninety-two published articles referring to 241 outcome measures were identified, 64 of which were eligible for review (used in three or more studies). Twenty-six instruments had good psychometric properties; they measured (i) carers' well-being, (ii) the experience of caregiving and (iii) carers' needs for professional support. Conclusion: Measures exist which have been used to assess the most salient aspects of carer outcome in mental health. All require further work to establish their psychometric properties fully.