888 results for locality algorithms
Abstract:
An overview of known spatial clustering algorithms. The space of interest can be the two-dimensional abstraction of the surface of the earth, a man-made space such as the layout of a VLSI design, a volume containing a model of the human brain, or another 3D space representing the arrangement of chains of protein molecules. The data consists of geometric information and can be either discrete or continuous. The explicit location and extension of spatial objects define implicit relations of spatial neighborhood (such as topological, distance, and direction relations), which are used by spatial data mining algorithms. Spatial data mining algorithms are therefore required for spatial characterization and spatial trend analysis. Spatial data mining, or knowledge discovery in spatial databases, differs from regular data mining in ways analogous to the differences between non-spatial data and spatial data. The attributes of a spatial object stored in a database may be affected by the attributes of that object's spatial neighbors. In addition, spatial location, and implicit information about the location of an object, may be exactly the information that can be extracted through spatial data mining.
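As a small illustration of the distance-based neighborhood relations this abstract mentions, the sketch below runs a toy neighborhood query over 2D point objects; the point representation and the linear scan are illustrative assumptions (a real spatial database would use a spatial index), and topological and direction relations are not covered.

```python
from math import hypot

def distance_neighbors(objects, center_id, radius):
    """Return the ids of all objects within `radius` of the given object.

    `objects` maps an id to an (x, y) coordinate; this linear scan is only
    a toy stand-in for the spatial index a real spatial database would use.
    """
    cx, cy = objects[center_id]
    return [oid for oid, (x, y) in objects.items()
            if oid != center_id and hypot(x - cx, y - cy) <= radius]

# Toy example: three point objects on a plane.
points = {"a": (0.0, 0.0), "b": (1.0, 1.0), "c": (5.0, 5.0)}
print(distance_neighbors(points, "a", 2.0))  # ['b']
```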
Abstract:
We develop several algorithms for computations in Galois extensions of p-adic fields. Our algorithms are based on existing algorithms for number fields and are exact in the sense that we do not need to consider approximations to p-adic numbers. As an application we describe an algorithmic approach to prove or disprove various conjectures for local and global epsilon constants.
Abstract:
Data mining is the summarization of information from large amounts of raw data. It is one of the key technologies in many areas of the economy, science, administration, and the internet. In this report we introduce an approach that uses evolutionary algorithms to breed fuzzy classifier systems. This approach was applied, as part of a structured procedure, by the students Achler, Göb, and Voigtmann in their contribution to the 2006 Data-Mining-Cup contest, yielding encouragingly positive results.
Abstract:
In this report, we discuss the application of global optimization and Evolutionary Computation to distributed systems. To this end, we selected and classified many publications, giving an insight into the wide variety of optimization problems that arise in distributed systems. Some interesting approaches from different areas are discussed in greater detail using illustrative examples.
Abstract:
Distributed systems are one of the most vital components of the economy. The most prominent example is probably the internet, a constituent element of our knowledge society. During recent years, the number of novel network types has steadily increased. Amongst others, sensor networks, distributed systems composed of tiny computational devices with scarce resources, have emerged. The further development and heterogeneous connection of such systems imposes new requirements on the software development process. Mobile and wireless networks, for instance, have to organize themselves autonomously and must be able to react to changes in the environment and to failing nodes alike. Researching new approaches for the design of distributed algorithms may lead to methods with which these requirements can be met efficiently. In this thesis, one such method is developed, tested, and discussed with respect to its practical utility. Our new design approach for distributed algorithms is based on Genetic Programming, a member of the family of evolutionary algorithms. Evolutionary algorithms are metaheuristic optimization methods which copy principles from natural evolution. They use a population of solution candidates which they try to refine step by step in order to attain optimal values for predefined objective functions. The synthesis of an algorithm with our approach starts with an analysis step in which the desired global behavior of the distributed system is specified. From this specification, objective functions are derived which steer a Genetic Programming process whose solution candidates are distributed programs. The objective functions rate how closely these programs approximate the goal behavior in multiple randomized network simulations. The evolutionary process selects the most promising solution candidates step by step and modifies and combines them with mutation and crossover operators. This way, a description of the global behavior of a distributed system is translated automatically into programs which, if executed locally on the nodes of the system, exhibit this behavior. In our work, we test six different ways of representing distributed programs, comprising adaptations and extensions of well-known Genetic Programming methods (SGP, eSGP, and LGP), one bio-inspired approach (Fraglets), and two new program representations called Rule-based Genetic Programming (RBGP, eRBGP) designed by us. We breed programs in these representations for three well-known example problems in distributed systems: election algorithms, distributed mutual exclusion at a critical section, and the distributed computation of the greatest common divisor of a set of numbers. Synthesizing distributed programs the evolutionary way does not necessarily lead to the envisaged results. In a detailed analysis, we discuss the problematic features which make this form of Genetic Programming particularly hard. The two Rule-based Genetic Programming approaches have been developed especially in order to mitigate these difficulties. In our experiments, at least one of them (eRBGP) turned out to be a very efficient approach and, in most cases, was superior to the other representations.
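For illustration only, the evolutionary loop described in this abstract can be sketched roughly as follows. The callbacks (random_program, fitness, mutate, crossover) are hypothetical placeholders for a program representation and a simulation-based objective function; they are not the representations (SGP, eSGP, LGP, Fraglets, RBGP, eRBGP) or the network simulator developed in the thesis.

```python
import random

def evolve(random_program, fitness, mutate, crossover,
           pop_size=50, generations=100):
    """Generic evolutionary loop: the caller-supplied callbacks define the
    program representation and the (simulation-based) objective function."""
    population = [random_program() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[:pop_size // 2]              # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            children.append(mutate(crossover(a, b)))  # variation operators
        population = parents + children
    return max(population, key=fitness)

# Toy usage with bit strings standing in for distributed programs.
best = evolve(
    random_program=lambda: [random.randint(0, 1) for _ in range(16)],
    fitness=sum,
    mutate=lambda p: [b ^ (random.random() < 0.1) for b in p],
    crossover=lambda a, b: a[:8] + b[8:],
)
print(best)
```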
Abstract:
In this thesis, algorithms for investigating the equivariant Tamagawa number conjecture of Burns and Flach are developed. First, algorithms are given with which the local fundamental class, the global fundamental class, and Tate's canonical class can be computed. Among other things, this enables computations in Brauer groups of number field extensions. These algorithms are then applied to the Tamagawa number conjecture. The epsilon constant conjecture can thereby be proved for all Galois extensions L|K for which L can be embedded into a Galois extension E|Q of degree at most 15. For the Tamagawa number conjecture at the point 1, an algorithm is given that can numerically verify the conjecture for a given example L|Q. In the special case that all characters are rational or abelian, this algorithm can even prove the conjecture for L|Q.
Abstract:
This dissertation investigates methods for optimal multi-robot task allocation (MRTA) for the inspection of industrial plants. MRTA comprises the allocation and scheduling of tasks for a group of robots under operational constraints, with the goal of minimizing the total mission cost. Thanks to increasing technical progress and falling technology costs, interest in mobile robots for industrial use has grown strongly in recent years. Many works concentrate on mobility problems such as self-localization and mapping, but only few investigate optimal task allocation. Since a good task allocation enables more efficient planning (e.g., lower costs, shorter execution times), the goal of this work is to develop solution methods for the search/optimization problem arising from inspection missions with single-robot and two-robot tasks. A novel hybrid genetic algorithm is presented which combines a subpopulation-based genetic algorithm for global optimization with local search heuristics. To accelerate this algorithm, local search operators are applied to the fittest individuals of a generation. The presented algorithm does not merely distribute the tasks and fix their schedule; it also forms temporary robot coalitions for two-robot tasks, which gives rise to spatial and temporal constraints. Four alternative encoding strategies are designed for the presented algorithm. Subtask-based encoding: all possible solutions are covered, but the search space is very large. Task-based encoding: two ways of assigning two-robot tasks were implemented to increase the efficiency of the algorithm. Grouping-based encoding: temporal constraints for grouping tasks are introduced in order to obtain good solutions within a small number of generations; two implementation variants are presented. Decomposition-based encoding: three geometric decompositions were designed which exploit information about the spatial arrangement in order to solve problems whose inspection areas have rectangular geometries. Simulation studies examine the performance of the different hybrid genetic algorithms. The inspection of the tank farms of an oil refinery by a group of homogeneous inspection robots was chosen as the application case. The simulations show that encoding strategies based on geometric decomposition can find a better solution within a small number of generations than the other strategies investigated. This work deals with single-robot and two-robot tasks, i.e., tasks that can either be carried out by a single mobile robot or require the cooperation of two robots. An extension of the developed algorithm to handle tasks that require more than two robots is possible, but would considerably increase the complexity of the optimization problem.
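A minimal sketch of the memetic idea described here, applying a local search only to the fittest individuals of a generation, might look as follows. It uses a single population rather than the subpopulation-based scheme of the dissertation, and all operators (fitness, mutate, crossover, local_search) are problem-specific placeholders.

```python
import random

def hybrid_generation(population, fitness, mutate, crossover, local_search,
                      elite_k=5):
    """One generation of a hybrid genetic algorithm: standard variation for
    most individuals, plus a local-search refinement of the `elite_k` fittest
    individuals. All callbacks stand in for the problem-specific operators."""
    ranked = sorted(population, key=fitness, reverse=True)
    elites = [local_search(ind) for ind in ranked[:elite_k]]   # memetic step
    offspring = []
    while len(elites) + len(offspring) < len(population):
        a, b = random.sample(ranked[:len(ranked) // 2], 2)     # mating pool
        offspring.append(mutate(crossover(a, b)))
    return elites + offspring
```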
Abstract:
This thesis develops a model for the topological structure of situations. In this model, the topological structure of space is altered by the presence or absence of boundaries, such as those at the edges of objects. This allows the intuitive meaning of topological concepts such as region connectivity, function continuity, and preservation of topological structure to be modeled using the standard mathematical definitions. The thesis shows that these concepts are important in a wide range of artificial intelligence problems, including low-level vision, high-level vision, natural language semantics, and high-level reasoning.
Abstract:
The work described in this thesis began as an inquiry into the nature and use of optimization programs based on "genetic algorithms." That inquiry led, eventually, to three powerful heuristics that are broadly applicable in gradient-ascent programs: First, remember the locations of local maxima and restart the optimization program at a place distant from previously located local maxima. Second, adjust the size of probing steps to suit the local nature of the terrain, shrinking when probes do poorly and growing when probes do well. And third, keep track of the directions of recent successes, so as to probe preferentially in the direction of most rapid ascent. These algorithms lie at the core of a novel optimization program that illustrates the power to be had from deploying them together. The efficacy of this program is demonstrated on several test problems selected from a variety of fields, including De Jong's famous test-problem suite, the traveling salesman problem, the problem of coordinate registration for image guided surgery, the energy minimization problem for determining the shape of organic molecules, and the problem of assessing the structure of sedimentary deposits using seismic data.
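A rough sketch of how the three heuristics could be combined is given below: restarts far away from previously found maxima, an adaptive step size, and a memory of the last successful direction. The restart sampling, step-size factors, and momentum weight are illustrative assumptions, not the parameters of the program described in the thesis.

```python
import random

def heuristic_ascent(f, dim, rounds=10, steps=200, lo=-5.0, hi=5.0):
    """Toy hill climber combining the three heuristics from the abstract."""
    known_maxima, best = [], None

    def restart_point():
        # Restart heuristic: sample a few random points and keep the one
        # farthest from every local maximum located so far.
        candidates = [[random.uniform(lo, hi) for _ in range(dim)]
                      for _ in range(20)]
        if not known_maxima:
            return candidates[0]
        def dist(p):
            return min(sum((a - b) ** 2 for a, b in zip(p, m))
                       for m in known_maxima)
        return max(candidates, key=dist)

    for _ in range(rounds):
        x, step, momentum = restart_point(), 1.0, [0.0] * dim
        for _ in range(steps):
            direction = [random.gauss(0.0, 1.0) + 0.5 * m for m in momentum]
            probe = [xi + step * di for xi, di in zip(x, direction)]
            if f(probe) > f(x):
                x, momentum = probe, direction   # remember the direction
                step *= 1.3                      # grow after a good probe
            else:
                step *= 0.5                      # shrink after a bad probe
                momentum = [0.0] * dim
        known_maxima.append(x)
        if best is None or f(x) > f(best):
            best = x
    return best

# Toy usage: maximize a smooth bump whose peak is at (1, -2).
print(heuristic_ascent(lambda p: -(p[0] - 1) ** 2 - (p[1] + 2) ** 2, dim=2))
```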
Abstract:
Recent developments in the area of reinforcement learning have yielded a number of new algorithms for the prediction and control of Markovian environments. These algorithms, including the TD(lambda) algorithm of Sutton (1988) and the Q-learning algorithm of Watkins (1989), can be motivated heuristically as approximations to dynamic programming (DP). In this paper we provide a rigorous proof of convergence of these DP-based learning algorithms by relating them to the powerful techniques of stochastic approximation theory via a new convergence theorem. The theorem establishes a general class of convergent algorithms to which both TD(lambda) and Q-learning belong.
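For reference, the tabular Q-learning update covered by such convergence results can be sketched in a few lines. The environment interface (reset, step, actions) is an assumed placeholder for a finite Markovian environment and is not taken from the paper.

```python
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning: Q(s,a) <- Q(s,a) + alpha * (target - Q(s,a)),
    where target = r + gamma * max_a' Q(s',a') for non-terminal s'.
    `env` is assumed to expose reset() -> state,
    step(action) -> (next_state, reward, done), and a list `actions`."""
    Q = defaultdict(float)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # Epsilon-greedy action selection.
            if random.random() < epsilon:
                a = random.choice(env.actions)
            else:
                a = max(env.actions, key=lambda act: Q[(s, act)])
            s2, r, done = env.step(a)
            target = r if done else r + gamma * max(Q[(s2, act)]
                                                    for act in env.actions)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q
```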
Abstract:
Modeling and predicting co-occurrences of events is a fundamental problem of unsupervised learning. In this contribution we develop a statistical framework for analyzing co-occurrence data in a general setting where elementary observations are joint occurrences of pairs of abstract objects from two finite sets. The main challenge for statistical models in this context is to overcome the inherent data sparseness and to estimate the probabilities for pairs which were rarely observed or even unobserved in a given sample set. Moreover, it is often of considerable interest to extract grouping structure or to find a hierarchical data organization. A novel family of mixture models is proposed which explain the observed data by a finite number of shared aspects or clusters. This provides a common framework for statistical inference and structure discovery and also includes several recently proposed models as special cases. Adopting the maximum likelihood principle, EM algorithms are derived to fit the model parameters. We develop improved versions of EM which largely avoid overfitting problems and overcome the inherent locality of EM-based optimization. Among the broad variety of possible applications, e.g., in information retrieval, natural language processing, data mining, and computer vision, we have chosen document retrieval, the statistical analysis of noun/adjective co-occurrence and the unsupervised segmentation of textured images to test and evaluate the proposed algorithms.
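A minimal sketch of plain EM for such an aspect (co-occurrence mixture) model, P(x, y) = sum_z P(z) P(x|z) P(y|z), is given below. It is the textbook-style update, not the improved EM variants developed in the thesis, and the random initialization is an assumption.

```python
import numpy as np

def aspect_model_em(counts, n_aspects, iters=50, seed=0):
    """Plain EM for the aspect model; `counts` is an (X, Y) matrix of
    co-occurrence counts of objects from two finite sets."""
    rng = np.random.default_rng(seed)
    X, Y = counts.shape
    pz = np.full(n_aspects, 1.0 / n_aspects)
    px_z = rng.random((n_aspects, X)); px_z /= px_z.sum(axis=1, keepdims=True)
    py_z = rng.random((n_aspects, Y)); py_z /= py_z.sum(axis=1, keepdims=True)
    for _ in range(iters):
        # E-step: responsibilities P(z | x, y), shape (Z, X, Y).
        joint = pz[:, None, None] * px_z[:, :, None] * py_z[:, None, :]
        resp = joint / joint.sum(axis=0, keepdims=True)
        # M-step: re-estimate the mixture from expected counts.
        weighted = resp * counts[None, :, :]
        pz = weighted.sum(axis=(1, 2)); pz /= pz.sum()
        px_z = weighted.sum(axis=2); px_z /= px_z.sum(axis=1, keepdims=True)
        py_z = weighted.sum(axis=1); py_z /= py_z.sum(axis=1, keepdims=True)
    return pz, px_z, py_z

# Toy usage on random counts with 3 aspects.
demo_counts = np.random.default_rng(1).integers(0, 5, size=(20, 30))
pz, px_z, py_z = aspect_model_em(demo_counts, n_aspects=3)
print(pz)
```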
Abstract:
This paper introduces a novel methodology aimed at minimizing the probability of network failure and the failure impact (in terms of QoS degradation) while optimizing resource consumption. A detailed study of MPLS recovery techniques and their GMPLS extensions is also presented. In this scenario, some features for reducing the failure impact while offering minimum failure probabilities are also analyzed. Novel two-step routing algorithms using this methodology are proposed. Results show that these methods offer high protection levels with optimal resource consumption.
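One schematic reading of "two-step routing" is sketched below: first filter candidate paths by their end-to-end failure probability, then choose the cheapest survivor. The brute-force path enumeration and the edge attributes are illustrative assumptions, not the paper's MPLS/GMPLS algorithms.

```python
def simple_paths(graph, src, dst, path=None):
    """Enumerate all simple paths in a small directed graph given as
    {node: {neighbor: (failure_prob, resource_cost)}}."""
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in graph.get(src, {}):
        if nxt not in path:
            yield from simple_paths(graph, nxt, dst, path)

def two_step_route(graph, src, dst, max_failure=0.05):
    """Step 1: keep only paths whose end-to-end failure probability is
    below `max_failure`; step 2: return the cheapest surviving path."""
    def failure(p):
        ok = 1.0
        for a, b in zip(p, p[1:]):
            ok *= 1.0 - graph[a][b][0]
        return 1.0 - ok
    def cost(p):
        return sum(graph[a][b][1] for a, b in zip(p, p[1:]))
    survivors = [p for p in simple_paths(graph, src, dst)
                 if failure(p) <= max_failure]
    return min(survivors, key=cost) if survivors else None

# Toy topology: edge -> (failure probability, resource cost).
g = {"s": {"a": (0.01, 1), "b": (0.10, 1)},
     "a": {"t": (0.01, 2)},
     "b": {"t": (0.01, 1)}}
print(two_step_route(g, "s", "t"))  # ['s', 'a', 't'] survives the filter
```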
Abstract:
IP-based networks still do not offer the degree of reliability required by new multimedia services; achieving such reliability will be crucial to the success or failure of the new Internet generation. Most existing schemes for QoS routing do not take into consideration parameters concerning the quality of protection, such as packet loss or restoration time. In this paper, we define a new paradigm for developing protection strategies for building reliable MPLS networks, based on what we have called the network protection degree (NPD). The NPD consists of an a priori evaluation, the failure sensibility degree (FSD), which provides the failure probability, and an a posteriori evaluation, the failure impact degree (FID), which determines the impact on the network in case of failure. Having mathematically formulated these components, we point out the most relevant ones. Experimental results demonstrate the benefits of the NPD when used to enhance some current QoS routing algorithms to offer a certain degree of protection.
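A small sketch of how FSD and FID might be combined into a single NPD score follows; the weighted sum is an assumed stand-in, since the abstract does not give the exact formulation, and the sample values are made up.

```python
def network_protection_degree(fsd, fid, w_prob=0.5, w_impact=0.5):
    """Combine the a priori failure sensibility degree (FSD) and the
    a posteriori failure impact degree (FID) into a single score.
    The weighted sum is an assumed stand-in for the paper's formulation;
    lower values indicate a better-protected path."""
    return w_prob * fsd + w_impact * fid

# Toy comparison of two candidate LSPs (illustrative values only).
print(network_protection_degree(fsd=0.02, fid=0.30))
print(network_protection_degree(fsd=0.10, fid=0.05))  # better protected here
```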
Abstract:
This paper proposes a multicast implementation based on adaptive routing with anticipated calculation. Three different cost measures for a point-to-multipoint connection can be considered: bandwidth cost, connection establishment cost, and switching cost. The method, based on pre-evaluated routing tables, makes it possible to reduce bandwidth cost and connection establishment cost individually.