958 results for A* search algorithm
Abstract:
Learning of preference relations has recently received significant attention in the machine learning community. It is closely related to classification and regression analysis and can be reduced to these tasks. However, preference learning involves predicting an ordering of the data points rather than a single numerical value, as in regression, or a class label, as in classification. Therefore, studying preference relations within a separate framework not only facilitates a better theoretical understanding of the problem but also motivates the development of efficient algorithms for the task. Preference learning has many applications in domains such as information retrieval, bioinformatics, and natural language processing. For example, algorithms that learn to rank are frequently used in search engines for ordering the documents retrieved by a query. Preference learning methods have also been applied to collaborative filtering problems for predicting individual customer choices from the vast amount of user-generated feedback. In this thesis we propose several algorithms for learning preference relations. These algorithms stem from the well-founded and robust class of regularized least-squares methods and have many attractive computational properties. To improve the performance of our methods, we introduce several non-linear kernel functions. Thus, the contribution of this thesis is twofold: kernel functions for structured data, used to take advantage of various non-vectorial data representations, and preference learning algorithms suitable for different tasks, namely efficient learning of preference relations, learning with large amounts of training data, and semi-supervised preference learning. The proposed kernel-based algorithms and kernels are applied to the parse ranking task in natural language processing, document ranking in information retrieval, and remote homology detection in bioinformatics. Training kernel-based ranking algorithms can be infeasible when the training set is large. This problem is addressed by proposing a preference learning algorithm whose computational complexity scales linearly with the number of training data points. We also introduce a sparse approximation of the algorithm that can be efficiently trained with large amounts of data. For situations where a small amount of labeled data but a large amount of unlabeled data is available, we propose a co-regularized preference learning algorithm. To conclude, the methods presented in this thesis address not only the efficient training of the algorithms but also fast regularization parameter selection, multiple output prediction, and cross-validation. Furthermore, the proposed algorithms lead to notably better performance in many of the preference learning tasks considered.
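To make the pairwise regularized least-squares idea concrete, here is a minimal sketch (not the thesis's actual algorithms) of learning a linear scoring function from preference pairs: each pair (a, b), meaning example a is preferred over example b, contributes a difference vector, and the regularized fit has a closed form. All names and data are hypothetical.

```python
# Minimal pairwise preference learning with regularized least squares.
import numpy as np

def fit_pairwise_rls(X, prefs, lam=1.0):
    """X: (n, d) feature matrix; prefs: list of (a, b) index pairs,
    each meaning example a should rank above example b."""
    D = np.array([X[a] - X[b] for a, b in prefs])  # pair difference vectors
    d = X.shape[1]
    # Solve (D^T D + lam * I) w = D^T 1: least squares toward margin 1.
    w = np.linalg.solve(D.T @ D + lam * np.eye(d), D.T @ np.ones(len(prefs)))
    return w

# Toy usage: 4 examples; prefer 0 over 1 and 2 over 3.
X = np.array([[2.0, 1.0], [1.0, 1.0], [3.0, 0.5], [0.5, 0.5]])
w = fit_pairwise_rls(X, [(0, 1), (2, 3)], lam=0.1)
scores = X @ w   # higher score = ranked higher
```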
Abstract:
We adapt the Shout and Act algorithm to Digital Objects Preservation, where agents explore file systems looking for digital objects to be preserved (victims). When they find something, they shout so that their agent mates can hear it. The louder the shout, the more urgent or important the finding is; louder shouts can also indicate closeness. We perform several experiments showing that the system scales very well and that heterogeneous teams of agents outperform homogeneous ones over a wide range of task complexities. The target at-risk documents are MS Office documents (including an RTF file) with Excel content or in Excel format. An interesting conclusion from the experiments is that fewer heterogeneous (varying-skill) agents can equal the performance of many homogeneous (combined super-skilled) agents, implying significant performance increases with lower overall cost growth. Our results impact the design of Digital Objects Preservation teams: a properly designed combination of heterogeneous teams is cheaper and more scalable when confronted with uncertain maps of digital objects that need to be preserved. A cost pyramid is proposed for engineers to use when modeling the most effective agent combinations.
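As an illustration of the shouting mechanism described above, here is a minimal sketch under assumed mechanics (the paper's actual implementation may differ): perceived loudness scales with the finding's importance and decays with distance, and idle agents move one step toward the loudest shout they hear.

```python
# Minimal shout-and-act sketch on a grid (assumed, illustrative mechanics).
import math

def loudness(importance, src, dst):
    """Perceived loudness: importance attenuated by distance."""
    dist = math.hypot(src[0] - dst[0], src[1] - dst[1])
    return importance / (1.0 + dist)

def step_toward(pos, target):
    """Move one grid cell toward the target position."""
    dx = (target[0] > pos[0]) - (target[0] < pos[0])
    dy = (target[1] > pos[1]) - (target[1] < pos[1])
    return (pos[0] + dx, pos[1] + dy)

# Two idle agents react to a shout about a victim found at (5, 5).
shouts = [((5, 5), 10.0)]   # (location, importance)
agents = [(0, 0), (9, 2)]
for i, pos in enumerate(agents):
    best = max(shouts, key=lambda s: loudness(s[1], s[0], pos))
    agents[i] = step_toward(pos, best[0])
```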
Abstract:
As wireless communications evolve towards heterogeneous networks, mobile terminals have been enabled to hand over seamlessly from one network to another. At the same time, the continuous increase in terminal power consumption has resulted in an ever-decreasing battery lifetime. To that end, network selection is expected to play a key role in minimizing energy consumption and thus extending terminal lifetime. Hitherto, terminals select the network that provides the highest received power. However, it has been proved that this solution does not provide the highest energy efficiency. Thus, this paper proposes an energy-efficient vertical handover algorithm that selects the most energy-efficient network, i.e., the one that minimizes uplink power consumption. The performance of the proposed algorithm is evaluated through extensive simulations, and it is shown to achieve high energy-efficiency gains compared to the conventional approach.
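The contrast between the conventional strongest-signal rule and the proposed energy-oriented selection can be shown with a minimal sketch; the network list and power figures below are purely illustrative assumptions, not values from the paper.

```python
# Conventional vs. energy-aware network selection (illustrative numbers).
networks = [
    # name, downlink received power (dBm), required uplink power (mW)
    ("cellular", -70.0, 200.0),
    ("wlan",     -80.0,  60.0),
]

conventional = max(networks, key=lambda n: n[1])  # highest received power
energy_aware = min(networks, key=lambda n: n[2])  # lowest uplink power

print("conventional picks:", conventional[0])  # cellular
print("energy-aware picks:", energy_aware[0])  # wlan
```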
Abstract:
Since the beginnings of computers as programmable machines, man has tried to endow them with a certain intelligence so that they think or reason as similarly as possible to humans. One of these attempts has been to make the machine capable of thinking in such a way that it studies moves and wins games of chess. Nowadays, with current multitasking, object-oriented systems with memory access, and thanks to the powerful hardware at our disposal, we have a great variety of programs dedicated to playing chess. And these are not only small programs; there are even entire machines dedicated to calculating and studying moves in order to beat the best players in the world. The objective of my work is to carry out a study and an implementation of one of these programs, and to that end it is divided into two parts. The theoretical or study part consists of a study of the artificial intelligence systems dedicated to playing chess, the study and search for a valid evaluation function, and a study of search algorithms. The practical part of the work is based on the implementation of an intelligent system capable of playing chess with a certain logic. This implementation is carried out with the help of the SDL libraries, using the minimax algorithm with alpha-beta pruning and C++ code. As a conclusion to the project, I would like to note that the study showed me that creating a chess game was not as easy as I had thought, but it gave me the satisfaction of applying everything I learned during my degree and of discovering many other new things.
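The core search routine named in the abstract, minimax with alpha-beta pruning, can be sketched generically as follows; `children` and `evaluate` are placeholders for the chess engine's move generator and evaluation function.

```python
# Generic minimax with alpha-beta pruning.
def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    kids = children(state)
    if depth == 0 or not kids:
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:   # beta cutoff: opponent avoids this line
                break
        return value
    else:
        value = float("inf")
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, evaluate))
            beta = min(beta, value)
            if alpha >= beta:   # alpha cutoff
                break
        return value

# Toy usage on a tiny game tree encoded as nested lists (leaves are scores).
tree = [[3, 5], [2, [9, 1]]]
children = lambda s: s if isinstance(s, list) else []
evaluate = lambda s: s
best = alphabeta(tree, 4, float("-inf"), float("inf"), True, children, evaluate)
# best == 3: max(min(3, 5), min(2, ...))
```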
Abstract:
Metaheuristic methods have become increasingly popular approaches for solving global optimization problems. From a practical viewpoint, it is often desirable to perform multimodal optimization, which enables the search for more than one optimal solution to the task at hand. Population-based metaheuristic methods offer a natural basis for multimodal optimization, and the topic has received increasing interest, especially in the evolutionary computation community. Several niching approaches have been suggested to allow multimodal optimization using evolutionary algorithms. Most global optimization approaches, including metaheuristics, contain global and local search phases. The requirement to locate several optima places additional demands on the design of algorithms so that they are effective in both respects in the context of multimodal optimization. In this thesis, several different multimodal optimization algorithms are studied with regard to how their implementation of the global and local search phases affects their performance on different problems. The study concentrates especially on variations of the Differential Evolution algorithm and their capabilities in multimodal optimization. To separate the global and local search phases, three multimodal optimization algorithms are proposed, two of which hybridize Differential Evolution with a local search method. As the theoretical background behind the operation of metaheuristics is generally not thoroughly understood, the research relies heavily on experimental studies to determine the properties of the different approaches. To obtain reliable experimental information, the experimental environment must be carefully chosen to contain appropriate and adequately varied problems. The available selection of multimodal test problems is, however, rather limited, and no general framework exists. As part of this thesis, such a framework for generating tunable test functions for experimentally evaluating different multimodal optimization methods is provided and used for testing the algorithms. The results demonstrate that an efficient local phase is essential for creating efficient multimodal optimization algorithms. Adding a suitable global phase has the potential to boost performance significantly, but a weak local phase may invalidate the advantages gained from the global phase.
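For reference, here is a minimal sketch of one generation of the basic DE/rand/1/bin scheme underlying the Differential Evolution variants studied; the toy bimodal objective at the end is an illustrative assumption, not one of the thesis's test functions.

```python
# One generation of DE/rand/1/bin (minimization).
import numpy as np

def de_generation(pop, f, F=0.5, CR=0.9, rng=np.random.default_rng(0)):
    NP, d = pop.shape
    new_pop = pop.copy()
    for i in range(NP):
        # Pick three distinct individuals different from i.
        r1, r2, r3 = rng.choice([j for j in range(NP) if j != i],
                                size=3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])   # differential mutation
        cross = rng.random(d) < CR                    # binomial crossover mask
        cross[rng.integers(d)] = True                 # force at least one gene
        trial = np.where(cross, mutant, pop[i])
        if f(trial) <= f(pop[i]):                     # greedy selection
            new_pop[i] = trial
    return new_pop

# Toy usage on a bimodal function with optima near x = -1 and x = 1.
f = lambda x: float((x[0] ** 2 - 1.0) ** 2)
pop = np.random.default_rng(1).uniform(-2, 2, size=(10, 1))
for _ in range(50):
    pop = de_generation(pop, f)
```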
Abstract:
In wireless communications the transmitted signals may be affected by noise. The receiver must decode the received message, which can be mathematically modelled as a search for the closest lattice point to a given vector. This problem is known to be NP-hard in general, but for communications applications there exist algorithms that, for a certain range of system parameters, offer polynomial expected complexity. The purpose of the thesis is to study the sphere decoding algorithm introduced in the article "On Maximum-Likelihood Detection and the Search for the Closest Lattice Point", published by M. O. Damen, H. El Gamal and G. Caire in 2003. We concentrate especially on its computational complexity when used in space-time coding. Computer simulations are used to study how different system parameters affect the computational complexity of the algorithm, with the aim of finding ways to improve the algorithm from the complexity point of view. The main contribution of the thesis is the construction of two new modifications of the sphere decoding algorithm, which are shown to perform faster than the original algorithm within a range of system parameters.
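The core idea of sphere decoding can be sketched as follows: after QR-factoring the channel matrix, a depth-first search over the resulting upper-triangular system prunes any branch whose partial distance already exceeds the best radius found so far. This is a minimal illustration, not the optimized variant of Damen, El Gamal and Caire, nor the thesis's modifications.

```python
# Minimal sphere decoder for y = Hx + noise, x from a small alphabet.
import numpy as np

def sphere_decode(H, y, alphabet=(-1, 1)):
    Q, R = np.linalg.qr(H)
    z = Q.T @ y
    n = H.shape[1]
    best = {"dist": np.inf, "x": None}

    def search(level, x, partial):
        if partial >= best["dist"]:      # prune: outside current sphere
            return
        if level < 0:
            best["dist"], best["x"] = partial, x.copy()
            return
        for s in alphabet:               # enumerate symbols at this level
            x[level] = s
            resid = z[level] - R[level, level:] @ x[level:]
            search(level - 1, x, partial + resid ** 2)

    search(n - 1, np.zeros(n), 0.0)
    return best["x"]

# Toy usage: 2x2 channel, BPSK symbols.
H = np.array([[1.0, 0.3], [0.2, 0.9]])
x_true = np.array([1.0, -1.0])
y = H @ x_true + 0.05 * np.array([0.1, -0.2])
x_hat = sphere_decode(H, y)   # recovers [1, -1]
```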
Abstract:
The aim of this study is to investigate consumer search behavior in high-involvement purchases. The results of this research provide a descriptive analysis of the information search phase, which is a part of the decision-making process. The study focuses on customers' choice of information sources, the motivation behind it, and the different factors that influence search behavior. Particular attention is paid to purchase categorization and the differences in information search between products and services. A qualitative research method was chosen for this study, with data gathered through ten theme interviews. Each interview participant describes his or her own search behavior in a product case and a service case. The results indicate that consumer search behavior varies according to purchase categorization and demographic, individual, and situational factors. Moreover, the above-mentioned factors influence the purpose and position of the information search phase in a five-step decision-making model.
Abstract:
In the Russian Wholesale Market, electricity and capacity are traded separately. Capacity is a special good whose sale obliges suppliers to keep their generating equipment ready to produce the quantity of electricity indicated by the System Operator. Capacity trading was introduced to maintain reliable and uninterrupted delivery of electricity in the wholesale market. The price of capacity reflects the ongoing investments in the construction, modernization and maintenance of power plants. The sale of capacity thus creates favorable conditions for attracting investment in the energy sector, because it guarantees investors that their investments will be recouped.
Abstract:
The present text proposes a discussion of the concept of true friendship. The argument is grounded mostly on Aristotle's Nicomachean Ethics, Owen Flanagan's ethics as human ecology, and contemporary authors' works on the Greek philosopher's concept of friendship. Given that human beings flourish through 1) exercising capacities, 2) being moral, and 3) having true friendships, the difficulty of establishing the level of trust required by true friendships makes the search for them morally valid in itself.
Abstract:
In this work a fuzzy linear system is used to solve the Leontief input-output model with fuzzy entries. For solving this model, we assume that the consumption matrix for the different sectors of the economy and the demand are known. These assumptions depend heavily on information obtained from the industries, so uncertainties are involved in this information. The aim of this work is to model these uncertainties and to address them with fuzzy entries such as fuzzy numbers and LR-type fuzzy numbers (triangular and trapezoidal). A fuzzy linear system is developed using the fuzzy data and solved with the Gauss-Seidel algorithm. Numerical examples show the efficiency of this algorithm. The famous example from Prof. Leontief, in which he solved the production levels for the U.S. economy in 1958, is also analyzed further.
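The Gauss-Seidel core of the approach can be sketched for the crisp (non-fuzzy) Leontief system (I - A)x = d; the thesis extends the entries to fuzzy numbers, but the iteration is the same in spirit. The two-sector economy below is a made-up example.

```python
# Gauss-Seidel iteration for the Leontief system (I - A)x = d.
import numpy as np

def gauss_seidel_leontief(A, d, tol=1e-10, max_iter=1000):
    M = np.eye(len(d)) - A               # coefficient matrix of (I - A)x = d
    x = np.zeros_like(d, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(len(d)):
            s = M[i] @ x - M[i, i] * x[i]   # off-diagonal contribution
            x[i] = (d[i] - s) / M[i, i]     # update using newest values
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

# Toy two-sector economy: production x satisfying x = Ax + d.
A = np.array([[0.2, 0.3], [0.1, 0.4]])
d = np.array([10.0, 20.0])
x = gauss_seidel_leontief(A, d)   # matches np.linalg.solve(np.eye(2) - A, d)
```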
Abstract:
Traditionally, simulators have been used extensively in robotics to develop robotic systems without the need to build expensive hardware. However, a simulator can also be used as a memory for a robot, allowing the robot to try out actions in simulation before executing them for real. The key obstacle to this approach is uncertainty in the knowledge about the environment. The goal of this Master's Thesis was to develop a method that updates the simulation model based on actual measurements so that the planned task succeeds. OpenRAVE was chosen as the experimental simulation environment for the planning, trial and update stages. The Steepest Descent algorithm, in conjunction with a Golden Section search procedure, forms the principal part of the optimization process. During the experiments, the properties of the proposed method, such as its sensitivity to different parameters, including the gradient and the error function, were examined. The limitations of the approach were established by analyzing the regions of convergence.
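A minimal sketch of the optimization pair named above, steepest descent with a golden section line search; here `f` and `grad` are stand-ins for the simulation-versus-measurement error function and its gradient, and the quadratic test function is an illustrative assumption.

```python
# Steepest descent with a golden section line search.
import numpy as np

def golden_section(phi, a=0.0, b=1.0, tol=1e-6):
    """Minimize a 1-D unimodal function phi on [a, b]."""
    invphi = (np.sqrt(5) - 1) / 2            # 1 / golden ratio
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if phi(c) < phi(d):
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

def steepest_descent(f, grad, x0, steps=100, tol=1e-8):
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        step = golden_section(lambda t: f(x - t * g))  # line search along -g
        x = x - step * g
    return x

# Toy usage: quadratic bowl with minimum at (1, 2).
f = lambda x: (x[0] - 1) ** 2 + 2 * (x[1] - 2) ** 2
grad = lambda x: np.array([2 * (x[0] - 1), 4 * (x[1] - 2)])
x_star = steepest_descent(f, grad, [0.0, 0.0])
```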
Abstract:
The doctoral thesis examines the solving ability of a number of solvers for optimization problems and reveals several difficulties in making a fair solver comparison. In addition, some improvements made to one of the solvers, called GAMS/AlphaECP, are presented. Optimization, in this context, means finding the best possible solution to a problem. The examined class of problems can be characterized as hard to solve and occurs in several industrial areas. The goal has been to investigate whether there is a solver that is universally faster and finds solutions of higher quality than any of the other solvers. The commercial optimization system GAMS (General Algebraic Modeling System) and its extensive problem libraries have been used to compare the solvers. The presented improvements were made to the GAMS/AlphaECP solver, which is based on the Extended Cutting Plane (ECP) method. The ECP method has been developed primarily by Professor Tapio Westerlund at Process Design and Systems Engineering at Åbo Akademi University.
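The cutting-plane idea behind AlphaECP can be sketched in one dimension: whenever the nonlinear constraint g(x) <= 0 is violated at the current master solution, its linearization at that point is added as a cut and the master problem is re-solved. The 1-D master below is a trivial LP; the real ECP method solves a MILP master, so this is only an illustration of the cut-generation loop.

```python
# Minimal 1-D Extended Cutting Plane loop: minimize c*x s.t. g(x) <= 0, x in [lo, hi].
def ecp_1d(c, g, dg, lo, hi, tol=1e-8, max_iter=100):
    cuts = []                        # each cut: (slope, icept), slope*x + icept <= 0
    for _ in range(max_iter):
        # Master: minimize c*x over [lo, hi] subject to accumulated cuts.
        a, b = lo, hi
        for slope, icept in cuts:    # intersect feasible interval with each cut
            if slope > 0:
                b = min(b, -icept / slope)
            elif slope < 0:
                a = max(a, -icept / slope)
        x = a if c > 0 else b        # LP optimum sits at an interval endpoint
        if g(x) <= tol:              # nonlinear constraint satisfied: done
            return x
        cuts.append((dg(x), g(x) - dg(x) * x))   # linearize g at x: g(x) + g'(x)(t - x)
    return x

# Toy usage: minimize -x s.t. x^2 - 2 <= 0 on [0, 3]  ->  x* = sqrt(2).
x_star = ecp_1d(c=-1.0, g=lambda x: x * x - 2, dg=lambda x: 2 * x, lo=0.0, hi=3.0)
```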
Abstract:
Machine learning provides tools for the automated construction of predictive models in data-intensive areas of engineering and science. The family of regularized kernel methods has in recent years become one of the mainstream approaches to machine learning, due to a number of advantages the methods share. The approach provides theoretically well-founded solutions to the problems of under- and overfitting, allows learning from structured data, and has been empirically demonstrated to yield high predictive performance on a wide range of application domains. Historically, the problems of classification and regression have gained the majority of attention in the field. In this thesis we focus on another type of learning problem, that of learning to rank. In learning to rank, the aim is to learn, from a set of past observations, a ranking function that can order new objects according to how well they match some underlying criterion of goodness. As an important special case of the setting, we recover the bipartite ranking problem, corresponding to maximizing the area under the ROC curve (AUC) in binary classification. Ranking applications appear in a large variety of settings; examples encountered in this thesis include document retrieval in web search, recommender systems, information extraction and automated parsing of natural language. We consider the pairwise approach to learning to rank, where ranking models are learned by minimizing the expected probability of ranking any two randomly drawn test examples incorrectly. The development of computationally efficient kernel methods based on this approach has in the past proven to be challenging. Moreover, it is not clear which techniques for estimating the predictive performance of learned models are the most reliable in the ranking setting, or how these techniques can be implemented efficiently. The contributions of this thesis are as follows. First, we develop RankRLS, a computationally efficient kernel method for learning to rank that is based on minimizing a regularized pairwise least-squares loss. In addition to training methods, we introduce a variety of algorithms for tasks such as model selection, multi-output learning, and cross-validation, based on computational shortcuts from matrix algebra. Second, we improve the fastest known training method for the linear version of the RankSVM algorithm, which is one of the most well-established methods for learning to rank. Third, we study the combination of the empirical kernel map and reduced set approximation, which allows the large-scale training of kernel machines using linear solvers, and propose computationally efficient solutions for cross-validation when using this approach. Next, we explore the problem of reliable cross-validation when using AUC as a performance criterion, through an extensive simulation study. We demonstrate that the proposed leave-pair-out cross-validation approach leads to more reliable performance estimation than commonly used alternative approaches. Finally, we present a case study on applying machine learning to information extraction from biomedical literature, which combines several of the approaches considered in the thesis. The thesis is divided into two parts: Part I provides the background for the research work and summarizes the most central results, while Part II consists of the five original research articles that are the main contribution of this thesis.
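To illustrate the leave-pair-out estimator argued for above, here is a minimal sketch: each positive-negative pair is held out in turn, a model is refit on the remaining data, and the AUC estimate is the fraction of held-out pairs ranked correctly. The ridge-regression scorer is a simple stand-in for RankRLS, none of its fast computational shortcuts are used, and the dataset is synthetic.

```python
# Leave-pair-out cross-validation estimate of AUC.
import numpy as np

def fit_scores(X, y, lam=1.0):
    """Ridge regression scorer as a simple stand-in ranking model."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def leave_pair_out_auc(X, y, lam=1.0):
    pos = np.where(y == 1)[0]
    neg = np.where(y == 0)[0]
    correct, total = 0.0, 0
    for i in pos:
        for j in neg:
            keep = np.array([k for k in range(len(y)) if k not in (i, j)])
            w = fit_scores(X[keep], y[keep], lam)
            correct += float(X[i] @ w > X[j] @ w)  # held-out pair ordered right?
            total += 1
    return correct / total

# Toy usage with a synthetic small binary dataset.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(1, 1, (10, 3)), rng.normal(-1, 1, (10, 3))])
y = np.array([1] * 10 + [0] * 10)
print(leave_pair_out_auc(X, y, lam=0.5))
```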