10 results for InterNet (Computation nets)
at Universitätsbibliothek Kassel, Universität Kassel, Germany
Abstract:
We show that the locally free class group of an order in a semisimple algebra over a number field is isomorphic to a certain ray class group. This description is then used to present an algorithm that computes the locally free class group. The algorithm is implemented in MAGMA for the case where the algebra is a group ring over the rational numbers.
Abstract:
In this paper we derive an identity for the Fourier coefficients of a differentiable function f(t) in terms of the Fourier coefficients of its derivative f'(t). This yields an algorithm for computing the Fourier coefficients of f(t) whenever the Fourier coefficients of f'(t) are known, and vice versa. Furthermore, this generates an iterative scheme for N-times differentiable functions, complementing the direct computation of Fourier coefficients via the defining integrals, which can also be treated automatically in certain cases.
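For a 2π-periodic function written as f(t) = a₀/2 + Σ(aₙ cos nt + bₙ sin nt), termwise differentiation gives the standard relation αₙ = n·bₙ and βₙ = −n·aₙ between the coefficients (αₙ, βₙ) of f' and those of f; a₀ must come from the mean of f, which f' cannot determine. A minimal numerical sketch of this standard identity (the function names and the worked example are ours, not taken from the paper):

```python
import math

def fourier_coeffs(g, n_max, samples=4096):
    """Approximate Fourier coefficients a_n, b_n of a 2*pi-periodic g."""
    a = [0.0] * (n_max + 1)
    b = [0.0] * (n_max + 1)
    h = 2 * math.pi / samples
    for k in range(samples):
        t = -math.pi + k * h
        v = g(t)
        for n in range(n_max + 1):
            a[n] += v * math.cos(n * t) * h / math.pi
            b[n] += v * math.sin(n * t) * h / math.pi
    return a, b

def coeffs_from_derivative(alpha, beta, mean):
    """Recover a_n, b_n of f from the coefficients alpha_n, beta_n of f'.

    Uses a_n = -beta_n / n and b_n = alpha_n / n for n >= 1; the mean of f
    supplies a_0, which is lost under differentiation.
    """
    n_max = len(alpha) - 1
    a = [2 * mean] + [-beta[n] / n for n in range(1, n_max + 1)]
    b = [0.0] + [alpha[n] / n for n in range(1, n_max + 1)]
    return a, b

# Example: f(t) = sin t + cos 2t, so f'(t) = cos t - 2 sin 2t
fprime = lambda t: math.cos(t) - 2 * math.sin(2 * t)
alpha, beta = fourier_coeffs(fprime, 3)
a, b = coeffs_from_derivative(alpha, beta, mean=0.0)
# b[1] ~ 1 (the sin t term of f) and a[2] ~ 1 (the cos 2t term of f)
```

Since f' here is a trigonometric polynomial, the rectangle-rule quadrature on a periodic interval recovers the coefficients essentially to machine precision.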
Abstract:
Data mining means summarizing information from large amounts of raw data. It is one of the key technologies in many areas of the economy, science, administration, and the Internet. In this report we introduce an approach that uses evolutionary algorithms to breed fuzzy classifier systems. This approach was applied, as part of a structured procedure, by the students Achler, Göb, and Voigtmann as their contribution to the 2006 Data-Mining-Cup contest, yielding encouragingly positive results.
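The breeding idea can be sketched in miniature: a genetic algorithm tunes the parameters of a single fuzzy rule against labeled data. Everything below (the toy data, the sigmoid membership function, and all parameters) is invented for illustration and is not the students' actual DMC system:

```python
import math
import random

random.seed(0)

# Invented toy data: the label is 1 exactly when the feature exceeds 0.6
data = [(i / 100, 1 if i / 100 > 0.6 else 0) for i in range(100)]

def classify(center, slope, x):
    # Fuzzy "high" membership via a sigmoid; crisp prediction at mu > 0.5
    mu = 1.0 / (1.0 + math.exp(-slope * (x - center)))
    return 1 if mu > 0.5 else 0

def fitness(ind):
    center, slope = ind
    return sum(1 for x, y in data if classify(center, slope, x) == y) / len(data)

# Tiny generational GA over the rule parameters (center, slope)
pop = [(random.random(), random.uniform(1, 20)) for _ in range(20)]
for gen in range(40):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                      # elitist truncation selection
    children = []
    for _ in range(10):
        (c1, k1), (c2, k2) = random.sample(parents, 2)
        c = 0.5 * (c1 + c2) + random.gauss(0, 0.05)   # crossover + mutation
        k = 0.5 * (k1 + k2) + random.gauss(0, 1.0)
        children.append((c, k))
    pop = parents + children

best = max(pop, key=fitness)  # evolved rule parameters, accuracy near 1.0
```

Because the parents survive each generation unchanged, the best fitness is monotonically non-decreasing; on this separable toy problem the evolved rule center converges toward the true decision boundary at 0.6.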
Abstract:
A program is presented for the construction of relativistic symmetry-adapted molecular basis functions. It is applicable to 36 finite double point groups. The algorithm, based on the projection operator method, automatically generates linearly independent basis sets. Time reversal invariance is included in the program, leading to additional selection rules in the non-relativistic limit.
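The projection operator method at the core of such programs can be illustrated on a deliberately tiny, non-relativistic example: a two-element group acting on two equivalent basis functions by permutation. The group, characters, and names below are our invention for illustration only; the actual program handles the 36 finite double point groups:

```python
# Two-element group {E, S}: S swaps the equivalent basis functions f1, f2
E = [[1, 0], [0, 1]]
S = [[0, 1], [1, 0]]
# Character table of the two one-dimensional irreps (symmetric / antisymmetric)
characters = {"sym": {"E": 1, "S": 1}, "anti": {"E": 1, "S": -1}}

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_scale(c, A):
    return [[c * a for a in row] for row in A]

def projector(irrep):
    # P = (dim / |G|) * sum_R chi(R) * D(R), with dim = 1 and |G| = 2
    chi = characters[irrep]
    return mat_scale(0.5, mat_add(mat_scale(chi["E"], E),
                                  mat_scale(chi["S"], S)))

def apply(P, v):
    return [sum(p * x for p, x in zip(row, v)) for row in P]

# Project the raw basis vector f1 = (1, 0) onto each irrep
sym = apply(projector("sym"), [1, 0])    # proportional to f1 + f2
anti = apply(projector("anti"), [1, 0])  # proportional to f1 - f2
```

Collecting the linearly independent projections of all raw basis vectors yields the symmetry-adapted basis; projectors are idempotent, so applying one twice changes nothing.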
Abstract:
We present a new algorithm called TITANIC for computing concept lattices. It is based on data mining techniques for computing frequent itemsets. The algorithm is experimentally evaluated and compared with B. Ganter's Next-Closure algorithm.
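TITANIC itself proceeds via key sets and support counts, but what it ultimately computes, like Next-Closure, are the closed attribute sets (concept intents) of a formal context. The closure operator can be sketched on a toy context with brute-force enumeration (the context and all names are invented for illustration; this is not the TITANIC algorithm itself):

```python
from itertools import combinations

# Tiny invented formal context: objects mapped to their attribute sets
context = {
    "g1": {"a", "b"},
    "g2": {"a", "c"},
    "g3": {"a", "b", "c"},
}
attributes = {"a", "b", "c"}

def extent(intent):
    # All objects that have every attribute in the intent
    return {g for g, attrs in context.items() if intent <= attrs}

def closure(intent):
    # Double-prime operator: attributes common to all objects in the extent
    objs = extent(intent)
    if not objs:
        return set(attributes)
    common = set(attributes)
    for g in objs:
        common &= context[g]
    return common

def support(intent):
    # The itemset-mining notion TITANIC exploits: relative extent size
    return len(extent(intent)) / len(context)

# Brute-force enumeration: the closed attribute sets are the concept intents
intents = set()
for r in range(len(attributes) + 1):
    for combo in combinations(sorted(attributes), r):
        intents.add(frozenset(closure(set(combo))))
```

For this context the concept intents are {a}, {a,b}, {a,c}, and {a,b,c}; TITANIC's contribution is to find exactly these closed sets far more efficiently by reusing support counts from frequent-itemset mining instead of enumerating all subsets.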
Abstract:
Distributed systems are among the most vital components of the economy. The most prominent example is probably the Internet, a constituent element of our knowledge society. In recent years, the number of novel network types has steadily increased. Among others, sensor networks, distributed systems composed of tiny computational devices with scarce resources, have emerged. The further development and heterogeneous interconnection of such systems imposes new requirements on the software development process. Mobile and wireless networks, for instance, have to organize themselves autonomously and must be able to react to changes in the environment and to failing nodes alike. Researching new approaches to the design of distributed algorithms may lead to methods with which these requirements can be met efficiently. In this thesis, one such method is developed, tested, and discussed with respect to its practical utility. Our new design approach for distributed algorithms is based on Genetic Programming, a member of the family of evolutionary algorithms. Evolutionary algorithms are metaheuristic optimization methods that copy principles from natural evolution. They use a population of solution candidates which they refine step by step in order to attain optimal values for predefined objective functions. The synthesis of an algorithm with our approach starts with an analysis step in which the desired global behavior of the distributed system is specified. From this specification, objective functions are derived which steer a Genetic Programming process whose solution candidates are distributed programs. The objective functions rate how closely these programs approximate the goal behavior in multiple randomized network simulations. Step by step, the evolutionary process selects the most promising solution candidates and modifies and combines them with mutation and crossover operators.
This way, a description of the global behavior of a distributed system is translated automatically into programs which, if executed locally on the nodes of the system, exhibit this behavior. In our work, we test six different ways of representing distributed programs, comprising adaptations and extensions of well-known Genetic Programming methods (SGP, eSGP, and LGP), one bio-inspired approach (Fraglets), and two new program representations designed by us, called Rule-based Genetic Programming (RBGP, eRBGP). We breed programs in these representations for three well-known example problems in distributed systems: election algorithms, distributed mutual exclusion at a critical section, and the distributed computation of the greatest common divisor of a set of numbers. Synthesizing distributed programs the evolutionary way does not necessarily lead to the envisaged results. In a detailed analysis, we discuss the problematic features which make this form of Genetic Programming particularly hard. The two Rule-based Genetic Programming approaches were developed especially to mitigate these difficulties. In our experiments, at least one of them (eRBGP) turned out to be a very efficient approach and, in most cases, was superior to the other representations.
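The evolutionary loop described above (selection, crossover, mutation, objective functions rating candidates) can be sketched in miniature. The toy below is a generic tree-based GP fitting a simple arithmetic target, not any of the thesis's six representations and without its network simulations; all names and parameters are our own:

```python
import random

random.seed(1)

# Function set for a generic tree-based GP sketch
OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}

def rand_tree(depth):
    # Grow a random expression tree over {x, 0, 1, 2} and OPS
    if depth == 0 or random.random() < 0.3:
        return "x" if random.random() < 0.5 else random.randint(0, 2)
    op = random.choice(sorted(OPS))
    return (op, rand_tree(depth - 1), rand_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def error(tree):
    # Objective function: squared error against the target x*x + x
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-3, 4))

def mutate(tree):
    # Replace a random subtree with a freshly grown one
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return rand_tree(2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

def pick_subtree(t):
    # Random subtree of the donor, chosen by geometric descent
    if isinstance(t, tuple) and random.random() < 0.6:
        return pick_subtree(t[1] if random.random() < 0.5 else t[2])
    return t

def crossover(a, b):
    # Subtree crossover: graft a random subtree of b at a random point of a
    if isinstance(a, tuple) and random.random() < 0.7:
        op, left, right = a
        if random.random() < 0.5:
            return (op, crossover(left, b), right)
        return (op, left, crossover(right, b))
    return pick_subtree(b)

pop = [rand_tree(3) for _ in range(60)]
init_best = min(error(t) for t in pop)
for gen in range(30):
    pop.sort(key=error)
    elite = pop[:20]                      # elitist truncation selection
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(40)]
best_err = min(error(t) for t in pop)
```

With elitism the best error never worsens across generations; the thesis's setting differs chiefly in that each candidate is a distributed program whose fitness comes from randomized network simulations rather than a cheap arithmetic objective.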
Abstract:
KAAD (Katholischer Akademischer Ausländer-Dienst)
Abstract:
National Socialism, and with it the Holocaust, is considered the best-researched period of German history. Countless reports and documents attest to the genocide of the European Jews and thus permit a precise and detailed picture of the events. Despite this excellent source base, Holocaust deniers claim that the Shoah was staged, or that the estimated numbers of victims must be rejected as gross exaggeration. The present study examines how Holocaust deniers argue and which manipulation techniques they use to falsify historical facts. Its focus lies on propagandistic texts on the Internet, currently the most frequently used distribution channel for Holocaust-denying propaganda. To highlight current tendencies and to work out breaks and continuities, recent Internet publications are compared with print media from the 1970s and 1980s. The analysis makes clear that Holocaust-denying argumentation patterns have changed with the "digital revolution" and that the scene's protagonists are adjusting to new target audiences. While early print media were published primarily for a limited circle of the like-minded, Holocaust deniers today have identified the entirety of Internet users as their target audience. Against this background, the tactics of obfuscation and deception are changing, as is the habitus of the texts. Whereas the authors of earlier publications often argued offensively and radically, they now concentrate on more moderate argumentation patterns that aim to trivialize and minimize the Shoah. Such forms of propaganda are more compatible with the political mainstream because they are less conspiracy-theoretical in design and can better conceal their antisemitic motive.
Radical Holocaust denial, which claims that the entire body of scholarly knowledge about the Shoah is a fabrication, is found less often on the Internet. More common is a "pinprick tactic" that picks out individual details, calls them into question, or purports to refute them. These attacks are not meant to take effect on their own; rather, taken together they are intended to suggest that the factual basis of the Holocaust is indeed open to question.
Abstract:
Despite its young history, Computer Science Education has seen a number of "revolutions". As a veteran in the field, the author reflects on the many changes he has seen in computing and its teaching. The intent of this personal collection is to point out that most revolutions came unforeseen and that many of the new learning initiatives, despite high financial input, ultimately failed. The author then considers the current revolution (MOOCs, inverted lectures, peer instruction, game design) and, based on the lessons learned earlier, argues why video recording is so successful. Given that this is the decade in which we lost print (papers, printed books, book shops, libraries), the author then conjectures that the impact of the Internet will make this revolution different from previous ones in that most of the changes are irreversible. As a consequence, he warns against storming ahead blindly and suggests conserving, while it is still possible, valuable components of what might soon be called the antebellum age of education.