896 results for Exact Algorithms


Relevance: 20.00%

Abstract:

Extensive use of the Internet, coupled with the tremendous growth in e-commerce and m-commerce, has created a huge demand for information security. The Secure Sockets Layer (SSL) protocol is the most widely used security protocol on the Internet that meets this demand. It provides protection against eavesdropping, tampering and forgery. The cryptographic algorithms RC4 and HMAC have been used for achieving security services like confidentiality and authentication in SSL. But recent attacks against RC4 and HMAC have raised questions about the confidence in these algorithms. Hence two novel cryptographic algorithms, MAJE4 and MACJER-320, have been proposed as substitutes for them. The focus of this work is to demonstrate the performance of these new algorithms and suggest them as dependable alternatives to satisfy the need for security services in SSL. The performance evaluation has been carried out by practical implementation.
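As context for the authentication service mentioned above, here is a minimal sketch of HMAC-based message authentication using Python's standard hmac module. The key and record are made-up values, and the sketch illustrates generic HMAC usage rather than the proposed MACJER-320.

```python
import hashlib
import hmac

# Hypothetical shared secret, e.g. derived during the SSL/TLS handshake.
key = b"session-secret"
record = b"GET /account HTTP/1.1"

# Sender computes an authentication tag over the record. SSL historically
# used HMAC-MD5 / HMAC-SHA1; SHA-256 is used here purely for illustration.
tag = hmac.new(key, record, hashlib.sha256).digest()

# Receiver recomputes the tag and compares in constant time; any tampering
# with the record changes the tag and the check fails.
expected = hmac.new(key, record, hashlib.sha256).digest()
assert hmac.compare_digest(tag, expected)
```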

Relevance: 20.00%

Abstract:

An overview of known spatial clustering algorithms. The space of interest can be the two-dimensional abstraction of the surface of the earth, a man-made space like the layout of a VLSI design, a volume containing a model of the human brain, or another 3D space representing the arrangement of chains of protein molecules. The data consist of geometric information and can be either discrete or continuous. The explicit location and extension of spatial objects define implicit relations of spatial neighborhood (such as topological, distance and direction relations), which are used by spatial data mining algorithms. Therefore, spatial data mining algorithms are required for spatial characterization and spatial trend analysis. Spatial data mining, or knowledge discovery in spatial databases, differs from regular data mining analogously to the differences between non-spatial and spatial data. The attributes of a spatial object stored in a database may be affected by the attributes of the spatial neighbors of that object. In addition, spatial location, and implicit information about the location of an object, may be exactly the information that can be extracted through spatial data mining.
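To make the distance relation above concrete, here is a minimal sketch of a distance-based neighborhood query over hypothetical 2D objects; the objects dictionary, the radius and the neighbors function are illustrative only.

```python
from math import dist

# Hypothetical 2D spatial objects: identifier -> location.
objects = {"a": (0.0, 0.0), "b": (1.0, 0.5), "c": (5.0, 5.0)}

def neighbors(obj_id, radius):
    """Objects within `radius` of obj_id: a simple distance-based
    neighborhood relation between spatial objects."""
    center = objects[obj_id]
    return [other for other, loc in objects.items()
            if other != obj_id and dist(center, loc) <= radius]

print(neighbors("a", 2.0))  # ['b']; 'c' lies outside the radius
```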

Relevance: 20.00%

Abstract:

The study of variable stars is an important topic of modern astrophysics. Since the advent of powerful telescopes and high-resolution CCDs, variable star data have been accumulating on the order of petabytes. This huge amount of data requires many automated methods as well as human experts. This thesis is devoted to the analysis of variable stars' astronomical time series data and hence belongs to the interdisciplinary topic of Astrostatistics. For an observer on earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various reasons. In some cases, the variation is due to internal thermonuclear processes, and such stars are generally known as intrinsic variables; in other cases, it is due to external processes such as eclipses or rotation, and such stars are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospherical stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as a light curve. If the time series data is folded on a period, the plot of apparent magnitude against phase is known as a phased light curve. The unique shape of the phased light curve is a characteristic of each type of variable star. One way to identify the type of a variable star and to classify it is to have an expert visually inspect the phased light curve. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into different stages such as observation, data reduction, data analysis, modeling and classification. Modeling of variable stars helps to determine short-term and long-term behaviour, to construct theoretical models (e.g. the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties like mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters like period, amplitude and phase, as well as some other derived parameters. Of these, the period is the most important, since wrong periods can lead to sparse light curves and misleading information. Time series analysis is a method of applying mathematical and statistical tests to data in order to quantify the variation, understand the nature of time-varying phenomena, gain physical understanding of the system and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of big gaps. This is due to the daily varying daylight and weather conditions for ground-based observations, while observations from space may suffer from the impact of cosmic ray particles.
Many large-scale astronomical surveys such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS provide variable star time series data, even though their primary intention is not variable star observation. The Center for Astrostatistics, Pennsylvania State University, was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis. There exist many period search algorithms for astronomical time series analysis, which can be classified into parametric (assuming some underlying distribution for the data) and non-parametric (not assuming any statistical model, like a Gaussian) methods. Many of the parametric methods are based on variations of the discrete Fourier transform, such as the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and Significant Spectrum (SigSpec) by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of the methods can be automated, none of the methods stated above can fully recover the true periods. Wrong detection of the period can have several causes, such as power leakage to other frequencies, which is due to the finite total interval, finite sampling interval and finite amount of data. Another problem is aliasing, which is due to the influence of regular sampling. Spurious periods also appear due to long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence, obtaining the exact period of a variable star from its time series data is still a difficult problem for huge databases subjected to automation. As Matthew Templeton, AAVSO, states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state, "The processing of huge amounts of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification". It will be beneficial for the variable star astronomical community if basic parameters such as period, amplitude and phase are obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories of four popular period search methods are studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry in the "General Catalogue of Variable Stars" or other databases like the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
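The folding step defined in the abstract is simple to state in code. Below is a minimal sketch of folding a light curve on a trial period to obtain phases; the fold function, the epoch and the sample observation values are illustrative, not taken from the thesis.

```python
import numpy as np

def fold(times, period, epoch=0.0):
    """Fold observation times on a trial period; returns phases in [0, 1)."""
    return ((np.asarray(times) - epoch) / period) % 1.0

# Hypothetical unevenly spaced observation times (days) and magnitudes.
t = np.array([0.0, 0.7, 1.9, 3.4, 5.2, 8.8])
mag = np.array([12.1, 12.4, 12.0, 12.5, 12.1, 12.4])

phase = fold(t, period=1.2)
order = np.argsort(phase)  # sorting by phase yields the phased light curve
print(np.column_stack((phase[order], mag[order])))
```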

Relevance: 20.00%

Abstract:

Data mining means summarizing information from large amounts of raw data. It is one of the key technologies in many areas of the economy, science, administration and the Internet. In this report we introduce an approach for utilizing evolutionary algorithms to breed fuzzy classifier systems. This approach was exercised as part of a structured procedure by the students Achler, Göb and Voigtmann as a contribution to the 2006 Data-Mining-Cup contest, yielding encouragingly positive results.

Relevance: 20.00%

Abstract:

We investigate, for very general cases, the multiplet and fine structure splitting of muon-electron atoms arising from the coupling of the electron and muon angular momenta, including the effect of the Breit operator plus the electron state-dependent screening. Although many conditions have to be fulfilled simultaneously to observe these effects, it should be possible to measure them in the 6h-5g muonic transition in the Sn region.

Relevance: 20.00%

Abstract:

In this report, we discuss the application of global optimization and Evolutionary Computation to distributed systems. To this end, we selected and classified many publications, giving an insight into the wide variety of optimization problems which arise in distributed systems. Some interesting approaches from different areas are discussed in greater detail with the use of illustrative examples.

Relevance: 20.00%

Abstract:

The problem of the relevance and usefulness of extracted association rules is of primary importance because, in the majority of cases, real-life databases lead to several thousand association rules with high confidence, among which are many redundancies. Using the closure of the Galois connection, we define two new bases for association rules whose union is a generating set for all valid association rules with support and confidence. These bases are characterized using frequent closed itemsets and their generators; they consist of the non-redundant exact and approximate association rules having minimal antecedents and maximal consequences, i.e. the most relevant association rules. Algorithms for extracting these bases are presented, and results of experiments carried out on real-life databases show that the proposed bases are useful and that their generation is not time consuming.
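For readers new to the terminology, the support and confidence of a rule can be computed directly from a transaction database. The following is a small illustrative sketch with a made-up database; it shows the two measures themselves, not the paper's extraction algorithms.

```python
# Hypothetical transaction database: each transaction is a set of items.
transactions = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]

def support(itemset):
    """Fraction of transactions containing the whole itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """Conditional frequency of the consequent given the antecedent."""
    return support(antecedent | consequent) / support(antecedent)

# Rule {a} -> {b}: support of {a, b} and confidence P(b | a).
print(support({"a", "b"}))       # 0.6
print(confidence({"a"}, {"b"}))  # 0.75
```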

Relevance: 20.00%

Abstract:

Distributed systems are one of the most vital components of the economy. The most prominent example is probably the Internet, a constituent element of our knowledge society. In recent years, the number of novel network types has steadily increased. Amongst others, sensor networks, distributed systems composed of tiny computational devices with scarce resources, have emerged. The further development and heterogeneous connection of such systems imposes new requirements on the software development process. Mobile and wireless networks, for instance, have to organize themselves autonomously and must be able to react to changes in the environment and to failing nodes alike. Researching new approaches for the design of distributed algorithms may lead to methods with which these requirements can be met efficiently. In this thesis, one such method is developed, tested, and discussed with respect to its practical utility.

Our new design approach for distributed algorithms is based on Genetic Programming, a member of the family of evolutionary algorithms. Evolutionary algorithms are metaheuristic optimization methods which copy principles from natural evolution. They use a population of solution candidates which they try to refine step by step in order to attain optimal values for predefined objective functions. The synthesis of an algorithm with our approach starts with an analysis step in which the desired global behavior of the distributed system is specified. From this specification, objective functions are derived which steer a Genetic Programming process where the solution candidates are distributed programs. The objective functions rate how closely these programs approximate the goal behavior in multiple randomized network simulations. Step by step, the evolutionary process selects the most promising solution candidates and modifies and combines them with mutation and crossover operators. This way, a description of the global behavior of a distributed system is translated automatically into programs which, if executed locally on the nodes of the system, exhibit this behavior.

In our work, we test six different ways of representing distributed programs, comprising adaptations and extensions of well-known Genetic Programming methods (SGP, eSGP, and LGP), one bio-inspired approach (Fraglets), and two new program representations called Rule-based Genetic Programming (RBGP, eRBGP) designed by us. We breed programs in these representations for three well-known example problems in distributed systems: election algorithms, distributed mutual exclusion at a critical section, and the distributed computation of the greatest common divisor of a set of numbers. Synthesizing distributed programs the evolutionary way does not necessarily lead to the envisaged results. In a detailed analysis, we discuss the problematic features which make this form of Genetic Programming particularly hard. The two Rule-based Genetic Programming approaches were developed especially in order to mitigate these difficulties. In our experiments, at least one of them (eRBGP) turned out to be a very efficient approach and, in most cases, was superior to the other representations.
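The evolutionary loop sketched in this abstract follows the standard pattern of evaluate, select, vary. Below is a toy, self-contained illustration of that loop in Python; the list-of-integers "program" representation, the simulate scoring function and all parameters are invented stand-ins, not the thesis's actual framework.

```python
import random

# A "program" is a list of integers; the hypothetical simulator scores how
# closely it matches a desired global target, with per-run noise standing in
# for randomized network simulations.
TARGET = [3, 1, 4, 1, 5]

def simulate(program, seed):
    rng = random.Random(seed)  # one randomized simulation run
    noise = rng.uniform(-0.1, 0.1)
    return -sum(abs(p - t) for p, t in zip(program, TARGET)) + noise

def fitness(program):
    """Objective value averaged over several randomized runs."""
    return sum(simulate(program, s) for s in range(5)) / 5

def mutate(program):
    i = random.randrange(len(program))
    return program[:i] + [program[i] + random.choice([-1, 1])] + program[i + 1:]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 9) for _ in range(5)] for _ in range(20)]
for _ in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                       # select the most promising
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]                 # vary: crossover + mutation
    population = parents + children
print(max(population, key=fitness))  # best evolved "program", near TARGET
```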

Relevance: 20.00%

Abstract:

In this thesis, algorithms for investigating the equivariant Tamagawa number conjecture of Burns and Flach are developed. First, algorithms are given with which the local fundamental class, the global fundamental class, and Tate's canonical class can be computed. Among other things, this makes computations in Brauer groups of number field extensions possible. These algorithms are then applied to the Tamagawa number conjecture. In this way, the epsilon constant conjecture can be proven for all Galois extensions L|K for which L can be embedded in a Galois extension E|Q of degree at most 15. For the Tamagawa number conjecture at the point 1, an algorithm is given that can numerically verify the conjecture for a given example L|Q. In the special case that all characters are rational or abelian, this algorithm can even prove the conjecture for L|Q.

Relevance: 20.00%

Abstract:

This dissertation investigates methods for optimal multi-robot task allocation (MRTA) for the inspection of industrial plants. MRTA comprises the allocation and scheduling of tasks for a group of robots under operational constraints, with the goal of minimizing the total deployment costs. Thanks to increasing technical progress and falling technology costs, interest in mobile robots for industrial use has risen strongly in recent years. Many works concentrate on mobility problems such as self-localization and mapping, but only few investigate optimal task allocation. Since a good task allocation enables more efficient planning (e.g. lower costs, shorter execution time), the goal of this work is to develop solution methods for the search/optimization problem arising from inspection missions with single-robot and two-robot tasks. A novel hybrid genetic algorithm is presented which combines a subpopulation-based genetic algorithm for global optimization with local search heuristics. To accelerate this algorithm, local search operators are applied to the fittest individuals of each generation. The presented algorithm does not merely allocate the tasks and fix their schedule; it also forms temporary robot coalitions for two-robot tasks, which gives rise to spatial and temporal constraints. Four alternative encoding strategies are designed for the presented algorithm. Subtask-based encoding covers all possible solutions, but the search space is very large. For task-based encoding, two ways of assigning two-robot tasks were implemented to increase the efficiency of the algorithm. For grouping-based encoding, temporal constraints for grouping tasks are introduced to obtain good solutions within a small number of generations; two implementation variants are presented. For decomposition-based encoding, three geometric decompositions were designed that exploit information about the spatial arrangement to solve problems whose inspection areas have rectangular geometries. Simulation studies examine the performance of the different hybrid genetic algorithms, with the inspection of the tank farms of an oil refinery by a group of homogeneous inspection robots chosen as the use case. The simulations show that encoding strategies based on geometric decomposition can find a better solution within a small number of generations than the other strategies examined. This work deals with single-robot and two-robot tasks, which can be accomplished either by a single mobile robot or require the cooperation of two robots. An extension of the developed algorithm to handle tasks requiring more than two robots is possible, but would significantly increase the complexity of the optimization problem.
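One distinctive element here is the hybrid (memetic) step of applying local search to the fittest individuals of each generation. The following is a small illustrative sketch of that step on a toy permutation encoding; the cost function, swap neighborhood and all parameters are invented for illustration and do not reproduce the dissertation's algorithm.

```python
import random

def local_search(individual, cost, tries=20):
    """Hill-climb by swapping two task positions while the cost improves."""
    best = individual[:]
    for _ in range(tries):
        i, j = random.sample(range(len(best)), 2)
        candidate = best[:]
        candidate[i], candidate[j] = candidate[j], candidate[i]
        if cost(candidate) < cost(best):
            best = candidate
    return best

# Toy cost: tasks should end up in ascending order (stand-in for tour length).
cost = lambda perm: sum(abs(t - i) for i, t in enumerate(perm))

population = [random.sample(range(8), 8) for _ in range(12)]
population.sort(key=cost)
elite = [local_search(ind, cost) for ind in population[:3]]  # refine the fittest
population = elite + population[3:]
print(population[0], cost(population[0]))
```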

Relevance: 20.00%

Abstract:

This thesis develops a model for the topological structure of situations. In this model, the topological structure of space is altered by the presence or absence of boundaries, such as those at the edges of objects. This allows the intuitive meaning of topological concepts such as region connectivity, function continuity, and preservation of topological structure to be modeled using the standard mathematical definitions. The thesis shows that these concepts are important in a wide range of artificial intelligence problems, including low-level vision, high-level vision, natural language semantics, and high-level reasoning.

Relevance: 20.00%

Abstract:

The work described in this thesis began as an inquiry into the nature and use of optimization programs based on "genetic algorithms." That inquiry led, eventually, to three powerful heuristics that are broadly applicable in gradient-ascent programs: First, remember the locations of local maxima and restart the optimization program at a place distant from previously located local maxima. Second, adjust the size of probing steps to suit the local nature of the terrain, shrinking when probes do poorly and growing when probes do well. And third, keep track of the directions of recent successes, so as to probe preferentially in the direction of most rapid ascent. These algorithms lie at the core of a novel optimization program that illustrates the power to be had from deploying them together. The efficacy of this program is demonstrated on several test problems selected from a variety of fields, including De Jong's famous test-problem suite, the traveling salesman problem, the problem of coordinate registration for image guided surgery, the energy minimization problem for determining the shape of organic molecules, and the problem of assessing the structure of sedimentary deposits using seismic data.
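The three heuristics combine naturally in a simple hill climber. Here is a toy, self-contained sketch on a one-dimensional objective; the objective f, the growth/shrink factors and the restart exclusion radius are all illustrative choices, not the program described in the thesis.

```python
import random

def f(x):
    return -(x - 3.0) ** 2  # toy objective with a single maximum at x = 3

def climb(start, step=1.0, momentum=0.0):
    x = start
    for _ in range(200):
        probe = x + momentum + random.uniform(-step, step)
        if f(probe) > f(x):
            momentum = 0.5 * (probe - x)  # remember the successful direction
            x, step = probe, step * 1.2   # grow steps while probes do well
        else:
            momentum *= 0.5
            step *= 0.8                   # shrink steps when probes do poorly
    return x

maxima = []
for _ in range(5):
    start = random.uniform(-10.0, 10.0)
    for _ in range(100):                  # restart away from known maxima
        if all(abs(start - m) >= 2.0 for m in maxima):
            break
        start = random.uniform(-10.0, 10.0)
    maxima.append(climb(start))
print(max(maxima, key=f))  # best located maximum, near x = 3
```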

Relevance: 20.00%

Abstract:

We present a framework for learning in hidden Markov models with distributed state representations. Within this framework, we derive a learning algorithm based on the Expectation-Maximization (EM) procedure for maximum likelihood estimation. Analogous to the standard Baum-Welch update rules, the M-step of our algorithm is exact and can be solved analytically. However, due to the combinatorial nature of the hidden state representation, the exact E-step is intractable. A simple and tractable mean field approximation is derived. Empirical results on a set of problems suggest that both the mean field approximation and Gibbs sampling are viable alternatives to the computationally expensive exact algorithm.
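For readers unfamiliar with the term, a mean field approximation in this setting replaces the intractable joint posterior over the distributed hidden state variables with a fully factorized variational distribution; schematically (our notation, not necessarily the paper's):

```latex
Q\bigl(\{S_t^{(m)}\}\bigr) \;=\; \prod_{t=1}^{T}\prod_{m=1}^{M} q_t^{(m)}\bigl(S_t^{(m)}\bigr)
```

where S_t^(m) is the m-th state variable at time t, and the factors q_t^(m) are chosen to minimize the KL divergence to the true posterior, making the E-step tractable.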

Relevance: 20.00%

Abstract:

Recent developments in the area of reinforcement learning have yielded a number of new algorithms for the prediction and control of Markovian environments. These algorithms, including the TD(lambda) algorithm of Sutton (1988) and the Q-learning algorithm of Watkins (1989), can be motivated heuristically as approximations to dynamic programming (DP). In this paper we provide a rigorous proof of convergence of these DP-based learning algorithms by relating them to the powerful techniques of stochastic approximation theory via a new convergence theorem. The theorem establishes a general class of convergent algorithms to which both TD(lambda) and Q-learning belong.
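As a concrete reference point for the algorithms whose convergence is proved, here is a minimal tabular Q-learning sketch implementing the Watkins update rule; the two-state toy environment, the learning rate and all other specifics are illustrative assumptions.

```python
from collections import defaultdict
import random

# Tabular Q-learning update analyzed in the paper:
#   Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
alpha, gamma, actions = 0.1, 0.9, [0, 1]
Q = defaultdict(float)

def step(state, action):
    """Toy environment: action 1 moves to state 1, which pays reward 1."""
    next_state = action
    reward = 1.0 if next_state == 1 else 0.0
    return next_state, reward

state = 0
for _ in range(1000):
    action = random.choice(actions)          # exploratory behavior policy
    next_state, reward = step(state, action)
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state

print(Q[(0, 1)], Q[(0, 0)])  # action 1 should look better from state 0
```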