912 results for "Simulated annealing algorithms"


Relevance: 20.00%

Publisher:

Abstract:

Magnetism and magnetic materials have been playing a lead role in improving the quality of life. They are increasingly being used in a wide variety of applications ranging from compasses to modern technological devices. Metallic glasses occupy an important position among magnetic materials. They are important from both a scientific and an application point of view, since they represent an amorphous form of condensed matter with significant deviation from thermodynamic equilibrium. Metallic glasses with good soft magnetic properties are widely used in tape recorder heads, cores of high-power transformers and magnetic shields. Superconducting metallic glasses are being used to produce high magnetic fields and magnetic levitation effects. Upon heat treatment, they undergo structural relaxation leading to subtle rearrangements of the constituent atoms. This leads to densification of the amorphous phase and subsequent nanocrystallisation. The short-range structural relaxation phenomenon gives rise to significant variations in physical, mechanical and magnetic properties. Amorphous magnetic alloys of Co-Fe exhibit excellent soft magnetic properties, which make them promising candidates for applications as transformer cores, sensors and actuators. With the advent of microminiaturization and nanotechnology, thin-film forms of these alloys are sought after as soft underlayers for perpendicular recording media. The thin-film forms of these alloys can also be used for the fabrication of magnetic microelectromechanical systems (magnetic MEMS). In bulk, the alloys are drawn in the form of ribbons, often by melt spinning. Their main constituents are Co, Fe, Ni, Si, Mo and B; Mo acts as a grain growth inhibitor, while Si and B facilitate the amorphous nature of the alloy structure. The ferromagnetic phases in the alloy composition, such as Co-Fe and Fe-Ni, determine the soft magnetic properties.
The grain correlation length, a measure of the grain size, often determines the soft magnetic properties of these alloys. Amorphous alloys can be restructured into their nanocrystalline counterparts by different techniques. The structure of a nanocrystalline material consists of nanosized ferromagnetic crystallites embedded in an amorphous matrix. When the amorphous phase is ferromagnetic, it facilitates exchange coupling between the nanocrystallites. This exchange coupling results in the vanishing of magnetocrystalline anisotropy, which improves the soft magnetic properties. From a fundamental perspective, the exchange correlation length and the grain size are the deciding factors that determine the magnetic properties of these nanocrystalline materials. In thin films, surfaces and interfaces predominantly decide the bulk properties, and hence tailoring the surface roughness and morphology of the film can result in modified magnetic properties. Surface modifications can be achieved by thermal annealing at various temperatures. Ion irradiation is an alternative tool for modifying surface/structural properties. The surface evolution of a thin film under swift heavy ion (SHI) irradiation is the outcome of different competing mechanisms: sputtering induced by SHI, followed by a surface roughening process, and a material-transport-induced smoothening process. The impingement of ions at different fluences is bound to produce systematic microstructural changes in the alloy, and this can effectively be used for tailoring magnetic parameters, namely the coercivity, saturation magnetization, magnetic permeability and remanence of these materials. Swift heavy ion irradiation is a novel and ingenious tool for surface modification which eventually leads to changes in both the bulk and the surface magnetic properties. SHI has been widely used as a method for the creation of latent tracks in thin films.
The bombardment with SHI modifies surfaces or interfaces or creates defects, which induce strain in the film. These changes have a profound influence on the magnetic anisotropy and the magnetisation of the specimen. Thus, inducing structural and morphological changes by thermal annealing and swift heavy ion irradiation, which in turn induce changes in the magnetic properties of these alloys, is one of the motivations of this study. Multiferroics and magnetoelectrics are a class of functional materials with wide application potential and are of great interest to materials scientists and engineers. Magnetoelectric materials combine magnetic and ferroelectric properties in a single specimen. The dielectric properties of such materials can be controlled by the application of an external magnetic field, and the magnetic properties by an electric field. Composites with magnetic and piezo/ferroelectric individual phases are found to have a strong magnetoelectric (ME) response at room temperature and hence are preferred to single-phase multiferroic materials. Current research in this class of materials is directed towards optimization of the ME coupling by tailoring the piezoelectric and magnetostrictive properties of the two individual components of ME composites. The magnetoelectric coupling constant (MECC, α_ME) is the parameter that decides the extent of interdependence of the magnetic and electric response of the composite structure. Extensive investigations have been carried out on bulk composites exhibiting giant ME coupling. These materials are fabricated either by gluing the individual components to each other or by mixing the magnetic material into a piezoelectric matrix.
The most extensively investigated material combinations use Lead Zirconate Titanate (PZT) or Lead Magnesium Niobate-Lead Titanate (PMN-PT) as the piezoelectric phase and Terfenol-D as the magnetostrictive phase, and the coupling is measured in different configurations such as transverse, longitudinal and in-plane longitudinal. The fabrication of a lead-free multiferroic composite with a strong ME response is the need of the hour from a device application point of view. A multilayer structure is expected to be far superior to bulk composites in terms of ME coupling, since the piezoelectric (PE) layer can easily be poled electrically to enhance the piezoelectricity and hence the ME effect. The giant magnetostriction reported in Co-Fe thin films makes them an ideal candidate for the ferromagnetic component, while BaTiO3, a well-known ferroelectric material with good piezoelectric properties, serves as the ferroelectric component. The multilayer structure BaTiO3-CoFe-BaTiO3 is an ideal system for understanding the fundamental physics underlying the ME coupling mechanism. A giant magnetoelectric coupling coefficient is anticipated for these BaTiO3-CoFe-BaTiO3 multilayer structures, which makes them ideal candidates for cantilever applications in magnetic MEMS/NEMS devices. SrTiO3 is an incipient ferroelectric material which remains paraelectric down to 0 K in its pure, unstressed form. Recently, a few studies have shown that ferroelectricity can be induced by the application of stress or by chemical/isotopic substitution. The search for room-temperature magnetoelectric coupling in SrTiO3-CoFe-SrTiO3 multilayer structures is of fundamental interest. Yet another motivation of the present work is to fabricate multilayer structures consisting of CoFe/BaTiO3 and CoFe/SrTiO3 for possibly giant ME coupling coefficient (MECC) values. These structures are lead-free and hence promising candidates for MEMS applications.
Elucidating the mechanism of the giant MECC is also part of the objective of this investigation.

Relevance: 20.00%

Publisher:

Abstract:

We develop several algorithms for computations in Galois extensions of p-adic fields. Our algorithms are based on existing algorithms for number fields and are exact in the sense that we do not need to consider approximations to p-adic numbers. As an application we describe an algorithmic approach to prove or disprove various conjectures for local and global epsilon constants.

Relevance: 20.00%

Publisher:

Abstract:

Data mining is the summarization of information from large amounts of raw data. It is one of the key technologies in many areas of economy, science, administration and the internet. In this report we introduce an approach for utilizing evolutionary algorithms to breed fuzzy classifier systems. This approach was exercised, as part of a structured procedure, by the students Achler, Göb and Voigtmann as a contribution to the 2006 Data-Mining-Cup contest, yielding encouragingly positive results.

Relevance: 20.00%

Publisher:

Abstract:

The TRIM.SP program, which is based on the binary collision approximation, was modified to handle not only repulsive interaction potentials but also potentials with an attractive part. Sputtering yields, average depths and reflection coefficients calculated with four different potentials are compared: three purely repulsive potentials (Molière, Kr-C and ZBL) and an ab initio pair potential calculated specifically for the bombardment of silicon by silicon. The general trends in the calculated results are similar for all potentials applied, but differences between the repulsive potentials and the ab initio potential occur for the reflection coefficients and for the sputtering yield at large angles of incidence.
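
The distinction between a purely repulsive screened-Coulomb potential and a potential with an attractive part can be illustrated with a short sketch. The Molière screening coefficients below are the standard ones; the Lennard-Jones form is only a generic stand-in for a potential with an attractive well, not the ab initio Si-Si potential used in the paper, and all units are schematic.

```python
import math

def moliere(r, z1, z2, a):
    # Standard Moliere approximation to the Thomas-Fermi screening function.
    chi = (0.35 * math.exp(-0.3 * r / a)
           + 0.55 * math.exp(-1.2 * r / a)
           + 0.10 * math.exp(-6.0 * r / a))
    return z1 * z2 * chi / r          # screened Coulomb: purely repulsive

def lennard_jones(r, eps, sigma):
    # Generic pair potential with an attractive well (illustrative stand-in).
    x = (sigma / r) ** 6
    return 4.0 * eps * (x * x - x)    # minimum of depth -eps at r = 2^(1/6) * sigma
```

The repulsive potential is positive and monotonically decreasing, while the second form changes sign and has a well; it is this attractive part that mainly affects the reflection coefficients and grazing-incidence sputtering yields discussed above.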

Relevance: 20.00%

Publisher:

Abstract:

Distributed systems are one of the most vital components of the economy. The most prominent example is probably the internet, a constituent element of our knowledge society. During recent years, the number of novel network types has steadily increased. Amongst others, sensor networks, distributed systems composed of tiny computational devices with scarce resources, have emerged. The further development and heterogeneous connection of such systems imposes new requirements on the software development process. Mobile and wireless networks, for instance, have to organize themselves autonomously and must be able to react to changes in the environment and to failing nodes alike. Researching new approaches for the design of distributed algorithms may lead to methods with which these requirements can be met efficiently. In this thesis, one such method is developed, tested, and discussed with respect to its practical utility. Our new design approach for distributed algorithms is based on Genetic Programming, a member of the family of evolutionary algorithms. Evolutionary algorithms are metaheuristic optimization methods which copy principles from natural evolution. They use a population of solution candidates which they try to refine step by step in order to attain optimal values for predefined objective functions. The synthesis of an algorithm with our approach starts with an analysis step in which the wanted global behavior of the distributed system is specified. From this specification, objective functions are derived which steer a Genetic Programming process where the solution candidates are distributed programs. The objective functions rate how closely these programs approximate the goal behavior in multiple randomized network simulations. The evolutionary process selects, step by step, the most promising solution candidates and modifies and combines them with mutation and crossover operators.
This way, a description of the global behavior of a distributed system is translated automatically into programs which, if executed locally on the nodes of the system, exhibit this behavior. In our work, we test six different ways of representing distributed programs, comprising adaptations and extensions of well-known Genetic Programming methods (SGP, eSGP, and LGP), one bio-inspired approach (Fraglets), and two new program representations called Rule-based Genetic Programming (RBGP, eRBGP) designed by us. We breed programs in these representations for three well-known example problems in distributed systems: election algorithms, distributed mutual exclusion at a critical section, and the distributed computation of the greatest common divisor of a set of numbers. Synthesizing distributed programs the evolutionary way does not necessarily lead to the envisaged results. In a detailed analysis, we discuss the problematic features which make this form of Genetic Programming particularly hard. The two Rule-based Genetic Programming approaches were developed especially in order to mitigate these difficulties. In our experiments, at least one of them (eRBGP) turned out to be a very efficient approach and, in most cases, was superior to the other representations.
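
The select-modify-combine cycle described above can be sketched generically. This is a toy genetic algorithm over bit strings with a caller-supplied fitness function, not the distributed-program representations of the thesis; all names and parameters are illustrative.

```python
import random

def evolve(fitness, genome_len=20, pop_size=30, generations=60, seed=3):
    """Minimal evolutionary loop: rate candidates with an objective
    function, keep the most promising ones, and produce the next
    generation with crossover and mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)      # rate by objective function
        parents = pop[:pop_size // 2]            # select the most promising half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)   # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(genome_len)] ^= 1  # point mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

With `fitness=sum` (the classic OneMax toy objective) the loop quickly breeds an all-ones genome; in the thesis the candidates are instead programs and the objective functions come from randomized network simulations.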

Relevance: 20.00%

Publisher:

Abstract:

In this thesis, algorithms for investigating the equivariant Tamagawa number conjecture of Burns and Flach are developed. First, algorithms are given with which the local fundamental class, the global fundamental class and Tate's canonical class can be computed. Among other things, this makes computations in Brauer groups of number field extensions possible. These algorithms are then applied to the Tamagawa number conjecture. As a result, the epsilon constant conjecture can be proven for all Galois extensions L|K for which L can be embedded in a Galois extension E|Q of degree at most 15. For the Tamagawa number conjecture at the point 1, an algorithm is given which can numerically verify the conjecture for a given example L|Q. In the special case that all characters are rational or abelian, this algorithm can even prove the conjecture for L|Q.

Relevance: 20.00%

Publisher:

Abstract:

In this dissertation, methods for optimal task allocation in multi-robot systems (Multi-Robot Task Allocation, MRTA) for the inspection of industrial plants are investigated. MRTA comprises the allocation and scheduling of tasks for a group of robots under operational constraints, with the goal of minimizing the overall mission cost. Thanks to increasing technical progress and falling technology costs, interest in mobile robots for industrial use has risen sharply in recent years. Many works concentrate on mobility problems such as self-localization and mapping, but only a few investigate optimal task allocation. Since a good task allocation enables more efficient planning (e.g. lower costs, shorter execution times), the goal of this work is to develop solution methods for the search/optimization problem arising from inspection missions with single- and two-robot tasks. A novel hybrid genetic algorithm is presented, which combines a subpopulation-based genetic algorithm for global optimization with local search heuristics. To accelerate this algorithm, local search operators are applied to the fittest individuals of each generation. The presented algorithm not only allocates the tasks and fixes their schedule; it also forms temporary robot coalitions for two-robot tasks, which gives rise to spatial and temporal constraints. Four alternative encoding strategies are designed for the presented algorithm. Subtask-based encoding: all possible solutions are covered, but the search space is very large. Task-based encoding: two ways of assigning two-robot tasks were implemented in order to increase the efficiency of the algorithm.
Grouping-based encoding: temporal constraints for grouping tasks are introduced in order to obtain good solutions within a small number of generations; two implementation variants are presented. Decomposition-based encoding: three geometric decompositions were designed which exploit information about the spatial layout in order to solve problems whose inspection areas have rectangular geometries. Simulation studies examine the performance of the different hybrid genetic algorithms. As the application case, the inspection of the tank farms of an oil refinery by a group of homogeneous inspection robots was chosen. The simulations show that encoding strategies based on geometric decomposition can find a better solution within a small number of generations than the other strategies investigated. This work deals with single- and two-robot tasks, which can be carried out by a single mobile robot or require the cooperation of two robots, respectively. An extension of the developed algorithm to handle tasks requiring more than two robots is possible, but would considerably increase the complexity of the optimization problem.

Relevance: 20.00%

Publisher:

Abstract:

This thesis develops a model for the topological structure of situations. In this model, the topological structure of space is altered by the presence or absence of boundaries, such as those at the edges of objects. This allows the intuitive meaning of topological concepts such as region connectivity, function continuity, and preservation of topological structure to be modeled using the standard mathematical definitions. The thesis shows that these concepts are important in a wide range of artificial intelligence problems, including low-level vision, high-level vision, natural language semantics, and high-level reasoning.

Relevance: 20.00%

Publisher:

Abstract:

The work described in this thesis began as an inquiry into the nature and use of optimization programs based on "genetic algorithms." That inquiry led, eventually, to three powerful heuristics that are broadly applicable in gradient-ascent programs: First, remember the locations of local maxima and restart the optimization program at a place distant from previously located local maxima. Second, adjust the size of probing steps to suit the local nature of the terrain, shrinking when probes do poorly and growing when probes do well. And third, keep track of the directions of recent successes, so as to probe preferentially in the direction of most rapid ascent. These algorithms lie at the core of a novel optimization program that illustrates the power to be had from deploying them together. The efficacy of this program is demonstrated on several test problems selected from a variety of fields, including De Jong's famous test-problem suite, the traveling salesman problem, the problem of coordinate registration for image guided surgery, the energy minimization problem for determining the shape of organic molecules, and the problem of assessing the structure of sedimentary deposits using seismic data.
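
A minimal sketch of the three heuristics on a one-dimensional objective. The restart range, adaptation factors, and exclusion radius are illustrative choices, not the thesis's actual program.

```python
import random

def adaptive_hill_climb(f, x0, steps=2000, seed=0):
    """Gradient-ascent-style probing with the three heuristics above:
    restart away from known maxima, adapt the probing step size,
    and remember the direction of recent successes."""
    rng = random.Random(seed)
    known_maxima = []
    x = best_x = x0
    best_f = f(x0)
    step = 1.0
    direction = 0.0                       # running memory of successful moves
    for _ in range(steps):
        probe = x + direction + rng.gauss(0.0, step)
        if f(probe) > f(x):
            direction = 0.5 * direction + 0.5 * (probe - x)
            x = probe
            step *= 1.2                   # grow steps when probes do well
        else:
            step *= 0.9                   # shrink when probes do poorly
            direction *= 0.5
        if step < 1e-6:                   # converged: record maximum, restart
            known_maxima.append(x)
            x = rng.uniform(-10.0, 10.0)
            while any(abs(x - m) < 1.0 for m in known_maxima):
                x = rng.uniform(-10.0, 10.0)   # stay distant from known maxima
            step, direction = 1.0, 0.0
        if f(x) > best_f:
            best_x, best_f = x, f(x)
    return best_x, best_f
```

On a smooth unimodal test function such as a downward parabola, the step-size adaptation concentrates probes near the maximum, while the restart rule is what lets the same loop escape local maxima on multimodal terrain.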

Relevance: 20.00%

Publisher:

Abstract:

One objective of artificial intelligence is to model the behavior of an intelligent agent interacting with its environment. The environment's transformations can be modeled as a Markov chain, whose state is partially observable to the agent and affected by its actions; such processes are known as partially observable Markov decision processes (POMDPs). While the environment's dynamics are assumed to obey certain rules, the agent does not know them and must learn. In this dissertation we focus on the agent's adaptation as captured by the reinforcement learning framework. This means learning a policy---a mapping of observations into actions---based on feedback from the environment. The learning can be viewed as browsing a set of policies while evaluating them by trial through interaction with the environment. The set of policies is constrained by the architecture of the agent's controller. POMDPs require a controller to have a memory. We investigate controllers with memory, including controllers with external memory, finite state controllers and distributed controllers for multi-agent systems. For these various controllers we work out the details of the algorithms which learn by ascending the gradient of expected cumulative reinforcement. Building on statistical learning theory and experiment design theory, a policy evaluation algorithm is developed for the case of experience re-use. We address the question of sufficient experience for uniform convergence of policy evaluation and obtain sample complexity bounds for various estimators. Finally, we demonstrate the performance of the proposed algorithms on several domains, the most complex of which is simulated adaptive packet routing in a telecommunication network.
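
The idea of ascending the gradient of expected cumulative reinforcement can be sketched in the simplest memoryless setting: a softmax policy over bandit actions updated with the score-function (REINFORCE-style) estimator. This is an illustrative reduction, far simpler than the controllers with memory studied in the dissertation; all parameters are assumptions.

```python
import math
import random

def policy_gradient_bandit(rewards, episodes=3000, lr=0.1, seed=0):
    """Learn a softmax policy over actions by stochastic gradient
    ascent on expected reward; `rewards` is a list of callables,
    one reward sampler per action."""
    rng = random.Random(seed)
    theta = [0.0] * len(rewards)          # policy parameters
    baseline = 0.0                        # variance-reducing baseline
    for _ in range(episodes):
        z = [math.exp(t) for t in theta]
        total = sum(z)
        probs = [zi / total for zi in z]
        a = rng.choices(range(len(rewards)), probs)[0]
        r = rewards[a]()
        baseline += 0.05 * (r - baseline)
        for i in range(len(theta)):
            # grad of log pi(a) w.r.t. theta[i] is 1[i == a] - probs[i]
            grad = (1.0 if i == a else 0.0) - probs[i]
            theta[i] += lr * (r - baseline) * grad
    return probs
```

The same gradient estimate generalizes to the finite-state and distributed controllers above by treating the controller's action (and memory-update) probabilities as the policy being differentiated.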

Relevance: 20.00%

Publisher:

Abstract:

Recent developments in the area of reinforcement learning have yielded a number of new algorithms for the prediction and control of Markovian environments. These algorithms, including the TD(lambda) algorithm of Sutton (1988) and the Q-learning algorithm of Watkins (1989), can be motivated heuristically as approximations to dynamic programming (DP). In this paper we provide a rigorous proof of convergence of these DP-based learning algorithms by relating them to the powerful techniques of stochastic approximation theory via a new convergence theorem. The theorem establishes a general class of convergent algorithms to which both TD(lambda) and Q-learning belong.
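
Watkins' Q-learning update can be sketched as follows, with the decaying step size that the stochastic-approximation conditions (step sizes summing to infinity, squared step sizes summing to a finite value) require. This is a generic tabular implementation, not code from the paper.

```python
import random

def q_learning(transition, n_states, n_actions, episodes=500,
               gamma=0.9, alpha0=0.5, seed=1):
    """Tabular Q-learning. `transition(s, a, rng)` returns
    (next_state, reward, done) for the environment being learned."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    visits = [[0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            a = rng.randrange(n_actions)           # exploratory behaviour policy
            s2, r, done = transition(s, a, rng)
            visits[s][a] += 1
            alpha = alpha0 / visits[s][a]          # decaying learning rate
            target = r if done else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])  # Watkins' update
            s = s2
    return Q
```

The convergence theorem discussed above applies because each update is a noisy contraction toward the dynamic-programming fixed point, with the per-pair step sizes satisfying the stochastic-approximation conditions.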

Relevance: 20.00%

Publisher:

Abstract:

Bibliography: p. 22-24.

Relevance: 20.00%

Publisher:

Abstract:

In this paper a novel methodology is introduced, aimed at minimizing the probability of network failure and the failure impact (in terms of QoS degradation) while optimizing resource consumption. A detailed study of MPLS recovery techniques and their GMPLS extensions is also presented. In this scenario, features for reducing the failure impact while simultaneously offering minimum failure probabilities are also analyzed. Novel two-step routing algorithms using this methodology are proposed. Results show that these methods offer high protection levels with optimal resource consumption.
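
A two-step routing scheme in this spirit can be sketched as follows. This is a hypothetical illustration, not the paper's algorithm: step one discards links whose failure probability exceeds a bound, and step two runs a shortest-path search on the surviving topology to minimize resource cost.

```python
import heapq

def two_step_route(nodes, links, src, dst, max_fail=0.05):
    """links: dict (u, v) -> (cost, failure_probability), undirected.
    Returns the cheapest sufficiently reliable path, or None."""
    adj = {n: [] for n in nodes}
    for (u, v), (cost, p_fail) in links.items():
        if p_fail <= max_fail:               # step 1: failure-probability filter
            adj[u].append((v, cost))
            adj[v].append((u, cost))
    dist = {n: float("inf") for n in nodes}  # step 2: Dijkstra on what remains
    dist[src] = 0.0
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, cost in adj[u]:
            if d + cost < dist[v]:
                dist[v] = d + cost
                prev[v] = u
                heapq.heappush(heap, (dist[v], v))
    if dst not in prev and src != dst:
        return None                          # no sufficiently reliable path
    path, n = [dst], dst
    while n != src:
        n = prev[n]
        path.append(n)
    return path[::-1]
```

Tightening `max_fail` trades resource consumption for protection: a cheap but failure-prone link is excluded in step one even when it would shorten the step-two path.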

Relevance: 20.00%

Publisher:

Abstract:

IP-based networks still do not offer the degree of reliability required by new multimedia services, and achieving such reliability will be crucial to the success or failure of the new Internet generation. Most existing schemes for QoS routing do not take into consideration parameters concerning the quality of protection, such as packet loss or restoration time. In this paper, we define a new paradigm for developing protection strategies for building reliable MPLS networks, based on what we have called the network protection degree (NPD). The NPD consists of an a priori evaluation, the failure sensibility degree (FSD), which provides the failure probability, and an a posteriori evaluation, the failure impact degree (FID), which determines the impact on the network in case of failure. Having mathematically formulated these components, we point out the most relevant ones. Experimental results demonstrate the benefits of using the NPD to enhance some current QoS routing algorithms in order to offer a certain degree of protection.

Relevance: 20.00%

Publisher:

Abstract:

Reinforcement learning (RL) is a very suitable technique for robot learning, as it can learn in unknown environments with real-time computation. The main difficulties in adapting classic RL algorithms to robotic systems are the generalization problem and the correct observation of the Markovian state. This paper attempts to solve the generalization problem by proposing the semi-online neural-Q_learning algorithm (SONQL). The algorithm uses the classic Q_learning technique with two modifications. First, a neural network (NN) approximates the Q_function, allowing the use of continuous states and actions. Second, a database of the most representative learning samples accelerates and stabilizes the convergence. The term semi-online refers to the fact that the algorithm uses not only the current learning sample but also past ones. Nevertheless, the algorithm is able to learn in real time while the robot is interacting with the environment. The paper shows simulated results with the "mountain-car" benchmark and also real results with an underwater robot in a target-following behavior.
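
The semi-online idea, replaying a bounded database of representative past samples alongside the current one, can be sketched as follows. A linear function approximator stands in for the paper's neural network, and the database, batch size, and learning rate are illustrative assumptions, not SONQL's actual parameters.

```python
import random

class SemiOnlineQ:
    """Q-updates are replayed from a small database of past samples,
    not only from the current one (the 'semi-online' idea); a linear
    approximator over features stands in for the neural network."""
    def __init__(self, n_actions, n_features, gamma=0.95, lr=0.05,
                 db_size=100, seed=0):
        self.w = [[0.0] * n_features for _ in range(n_actions)]
        self.db = []                       # database of learning samples
        self.gamma, self.lr, self.db_size = gamma, lr, db_size
        self.n_actions = n_actions
        self.rng = random.Random(seed)

    def q(self, x, a):
        return sum(wi * xi for wi, xi in zip(self.w[a], x))

    def observe(self, x, a, r, x2, done):
        self.db.append((x, a, r, x2, done))
        if len(self.db) > self.db_size:    # keep a bounded sample set
            self.db.pop(self.rng.randrange(len(self.db)))
        batch = self.rng.sample(self.db, min(8, len(self.db)))
        for sx, sa, sr, sx2, sdone in batch:
            best_next = max(self.q(sx2, b) for b in range(self.n_actions))
            target = sr if sdone else sr + self.gamma * best_next
            err = target - self.q(sx, sa)
            for i, xi in enumerate(sx):    # gradient step on replayed sample
                self.w[sa][i] += self.lr * err * xi
```

Replaying the stored samples is what stabilizes convergence of the approximated Q_function: each real-time interaction triggers several updates drawn from the database rather than a single one from the current sample.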