932 results for efficient algorithm
Abstract:
In this paper, an improved technique for evolving wavelet coefficients, refined for the compression and reconstruction of fingerprint images, is presented. The FBI fingerprint compression standard [1, 2] uses the CDF 9/7 wavelet filter coefficients. The lifting scheme is an efficient way to represent classical wavelets with fewer filter coefficients [3, 4]. Here, a Genetic Algorithm (GA) is used to evolve better lifting filter coefficients for the CDF 9/7 wavelet in order to compress and reconstruct fingerprint images with better quality. Since the lifting filter coefficients are few in number compared to the corresponding classical wavelet filter coefficients, they can be evolved at a faster rate using the GA. A better reconstructed image quality in terms of Peak Signal-to-Noise Ratio (PSNR) is achieved with the best lifting filter coefficients evolved for a compression ratio of 16:1. These evolved coefficients also perform well at other compression ratios.
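A minimal, self-contained sketch of the approach the abstract describes, assuming the standard four lifting-step coefficients plus scaling factor of the CDF 9/7 factorization as the chromosome; the 1-D test signal, quantisation step, and GA settings below are illustrative choices, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
# Lifting factorization of CDF 9/7: alpha, beta, gamma, delta, scale K
CDF97 = np.array([-1.586134342, -0.052980118, 0.882911076, 0.443506852, 1.149604398])

def lift(x, c):
    """One level of the forward lifting transform with coefficients c."""
    a, b, g, d, k = c
    s, t = x[0::2].copy(), x[1::2].copy()     # even / odd split
    t += a * (s + np.roll(s, -1))             # predict 1 (periodic boundary)
    s += b * (t + np.roll(t, 1))              # update 1
    t += g * (s + np.roll(s, -1))             # predict 2
    s += d * (t + np.roll(t, 1))              # update 2
    return s * k, t / k

def unlift(s, t, c):
    """Exact inverse: undo the lifting steps in reverse order."""
    a, b, g, d, k = c
    s, t = s / k, t * k
    s -= d * (t + np.roll(t, 1))
    t -= g * (s + np.roll(s, -1))
    s -= b * (t + np.roll(t, 1))
    t -= a * (s + np.roll(s, -1))
    x = np.empty(2 * s.size)
    x[0::2], x[1::2] = s, t
    return x

def fitness(c, x, step=8.0):
    """PSNR after coarse quantisation of the detail band (stands in for coding)."""
    s, t = lift(x, c)
    t = np.round(t / step) * step             # throw away detail precision
    err = np.mean((x - unlift(s, t, c)) ** 2)
    return 10 * np.log10(255.0 ** 2 / max(err, 1e-12))

x = np.cumsum(rng.normal(size=256))           # a smooth stand-in "image line"
x = 255 * (x - x.min()) / np.ptp(x)
pop = CDF97 + 0.05 * rng.normal(size=(40, 5))             # start near CDF 9/7
for gen in range(100):
    scores = np.array([fitness(c, x) for c in pop])
    scores = np.nan_to_num(scores, nan=-1e9, neginf=-1e9)
    elite = pop[np.argsort(scores)[-10:]]                 # keep the best 10
    kids = elite[rng.integers(10, size=30)] + 0.01 * rng.normal(size=(30, 5))
    pop = np.vstack([elite, kids])                        # elitism + Gaussian mutation
print("best evolved PSNR:", max(fitness(c, x) for c in pop))
```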
Abstract:
Considerable research effort has been devoted to predicting the exon regions of genes. The binary indicator (BI), electron-ion interaction pseudopotential (EIIP), and filter methods are some of the existing approaches, all of which exploit the period-three behavior of exon regions. Although the method suggested in this paper is similar to the above-mentioned methods, it introduces a set of sequences for mapping the nucleotides, selected by applying a genetic algorithm, and is found to be more promising.
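The shared period-three property can be sketched as follows: map nucleotides to numbers and measure the DFT power at one third of the window length. The EIIP values below are the published ones and the window length is an assumed choice; the paper's contribution, evolving the mapping sequences with a GA, is not reproduced here.

```python
import numpy as np

def period3_power(dna, mapping, N=351):
    """Sliding-window DFT power at one third of the window length (k = N/3)."""
    x = np.array([mapping[b] for b in dna], dtype=float)
    e = np.exp(-2j * np.pi * (N // 3) * np.arange(N) / N)   # DFT kernel at k = N/3
    return np.array([abs(np.dot(x[i:i + N], e)) ** 2
                     for i in range(0, len(x) - N + 1, 3)])

# Published EIIP values; the BI method instead uses four 0/1 indicator
# sequences, and the paper evolves such mapping values with a GA.
eiip = {"A": 0.1260, "C": 0.1340, "G": 0.0806, "T": 0.1335}
dna = "".join(np.random.default_rng(1).choice(list("ACGT"), size=2000))
print(period3_power(dna, eiip)[:5])   # peaks would flag putative exon windows
```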
Abstract:
Combinational digital circuits can be evolved automatically using Genetic Algorithms (GA). Until recently, this technique used linear chromosomes and one-dimensional crossover and mutation operators. In this paper, a new method for representing combinational digital circuits as two-dimensional (2D) chromosomes, together with suitable 2D crossover and mutation techniques, is proposed. With this method, the convergence speed of the GA can be increased significantly compared to conventional methods. Moreover, the 2D representation and crossover operation provide the designer with better visualization of the evolved circuits. In addition, a technique to display the evolved circuits automatically has been developed with the help of MATLAB.
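A hedged sketch of what a 2D chromosome and its operators could look like, assuming a rectangular gate-array encoding (each cell holds a gate type plus two input links to the previous column) and a rectangle-swap crossover; the paper's exact encoding and operators may differ:

```python
import numpy as np

rng = np.random.default_rng(0)
ROWS, COLS, GATES = 4, 5, 4          # gate types: e.g. AND, OR, XOR, NAND

def random_chromosome():
    """Each cell: (gate type, input-1 row, input-2 row in the previous column)."""
    return rng.integers(0, max(GATES, ROWS), size=(ROWS, COLS, 3))

def crossover_2d(p1, p2):
    """Swap a random rectangular sub-array between two parents."""
    r0, r1 = sorted(rng.integers(0, ROWS + 1, size=2))
    c0, c1 = sorted(rng.integers(0, COLS + 1, size=2))
    k1, k2 = p1.copy(), p2.copy()
    k1[r0:r1, c0:c1], k2[r0:r1, c0:c1] = p2[r0:r1, c0:c1], p1[r0:r1, c0:c1]
    return k1, k2

def mutate_2d(ch, rate=0.05):
    """Point mutation: re-randomise a few cells chosen uniformly over the grid."""
    mask = rng.random((ROWS, COLS)) < rate
    ch = ch.copy()
    ch[mask] = rng.integers(0, max(GATES, ROWS), size=(int(mask.sum()), 3))
    return ch

a, b = random_chromosome(), random_chromosome()
child1, child2 = crossover_2d(a, b)
print(child1.shape, mutate_2d(child1).shape)   # (4, 5, 3) each
```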
Abstract:
This paper presents a new approach to the design of combinational digital circuits with multiplexers using evolutionary techniques, with a Genetic Algorithm (GA) as the optimization tool. Several circuits are synthesized with this method and compared against two existing design techniques: the standard implementation of logic functions using multiplexers, and an implementation based on Shannon's decomposition using a GA. With the proposed method, the complexity of the circuit and the associated delay can be reduced significantly.
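Shannon's expansion f = x'·f(x=0) + x·f(x=1) maps directly onto a 2:1 multiplexer with x on the select line, which is the decomposition technique the comparison refers to. A small sketch realizing a truth table as a MUX tree (the GA layer that searches for cheaper realizations is omitted):

```python
def mux_tree(table, names):
    """Realise a truth table (tuple of 0/1, length 2^n) as a nested MUX expression."""
    if len(table) == 1:
        return str(table[0])
    half = len(table) // 2
    lo = mux_tree(table[:half], names[1:])   # cofactor with names[0] = 0
    hi = mux_tree(table[half:], names[1:])   # cofactor with names[0] = 1
    if lo == hi:                             # both branches equal: no MUX needed
        return lo
    return f"MUX({names[0]}; {lo}, {hi})"

# XOR of three variables as an example (a is the most significant input)
table = tuple((a ^ b ^ c) for a in (0, 1) for b in (0, 1) for c in (0, 1))
print(mux_tree(table, ["a", "b", "c"]))
```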
Abstract:
Bank switching in embedded processors with a partitioned memory architecture results in both code-size and run-time overhead. This work presents an algorithm, and its application, that assists the compiler in eliminating redundant bank-switching code and in deciding the optimum data allocation to banked memory. A relation matrix, formed from the memory-bank state transition corresponding to each bank-selection instruction, is used to detect redundant code. Data allocation to memory is done by considering all possible permutations of memory banks and combinations of data; the compiler output corresponding to each data-mapping scheme is subjected to a static machine-code analysis which identifies the one with the minimum number of bank-switching instructions. Even though the method is compiler-independent, the algorithm utilizes certain architectural features of the target processor. A prototype based on the PIC 16F87X microcontrollers is described. The method scales well to larger numbers of memory banks and to other architectures, so that high-performance compilers can integrate this technique for efficient code generation. The technique is illustrated with an example.
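A simplified sketch of the redundancy-detection idea: walk straight-line code tracking which bank is currently selected, and drop any bank-select that re-selects it. The instruction names are illustrative; the paper's relation-matrix formulation and the data-allocation search are not reproduced here.

```python
def strip_redundant_banksel(code):
    """Remove bank-selects that re-select the currently live bank."""
    current = None                      # bank state unknown at block entry
    out = []
    for op, arg in code:
        if op == "BANKSEL":
            if arg == current:          # re-selecting the live bank: redundant
                continue
            current = arg
        out.append((op, arg))
    return out

block = [("BANKSEL", 1), ("MOVF", "x"), ("BANKSEL", 1),   # second select is redundant
         ("ADDWF", "y"), ("BANKSEL", 2), ("MOVWF", "z")]
print(strip_redundant_banksel(block))
```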
Abstract:
Presently, different audio watermarking methods are available, most of them inclined towards copyright protection and copy protection. This is the key motive behind the notion of developing a speaker verification scheme that guarantees non-repudiation services, and this thesis is its outcome. The research presented in this thesis scrutinizes the field of audio watermarking, and the outcome is a speaker verification scheme that is proficient in addressing issues allied to non-repudiation to a great extent. This work aimed at developing novel audio watermarking schemes utilizing the fundamental ideas of the Fast Fourier Transform (FFT) or the Fast Walsh-Hadamard Transform (FWHT). The Mel-Frequency Cepstral Coefficients (MFCC), the best parametric representation of acoustic signals, along with a few other key acoustic characteristics, are employed in crafting the new schemes. The audio watermark created is entirely dependent on the acoustic features; hence it is named FeatureMark and is crucial in this work. In any watermarking scheme, the quality of the extracted watermark depends exclusively on the pre-processing stage, and in this work framing and windowing techniques are involved. The theme of non-repudiation carries immense significance in the audio watermarking schemes proposed in this work. Modification of the signal spectrum is achieved in a variety of ways by selecting appropriate FFT/FWHT coefficients, and the watermarking schemes were evaluated for their imperceptibility, robustness and capacity characteristics. The proposed schemes are unequivocally effective in maintaining the sound quality, in retrieving the embedded FeatureMark, and in terms of the capacity to hold the mark bits. The robust nature of these marking schemes is achieved with the help of synchronization codes: a Barker code with the FFT-based FeatureMarking scheme and a Walsh code with the FWHT-based FeatureMarking scheme. Another important feature of this scheme is the employment of an encryption scheme in the preparation of the FeatureMark, which scrambles the signal features and thus keeps them unrevealed. A comparative study with existing watermarking schemes, together with the experiments evaluating imperceptibility, robustness and capacity, guarantees that the proposed schemes can be regarded as efficient audio watermarking schemes. The four new digital audio watermarking algorithms are remarkable in terms of their performance, thereby opening further opportunities for research.
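As a point of reference for the spectral-modification step, here is a hedged sketch of frame-wise FFT watermark embedding using plain quantisation-index modulation; the bin choice, step size, and frame length are assumptions, and the thesis' MFCC-derived FeatureMark, synchronization codes, and encryption are not reproduced:

```python
import numpy as np

FRAME, BIN, STEP = 1024, 60, 0.8      # frame length, marked bin, QIM step (assumed)

def embed(signal, bits):
    """Per frame, force one FFT magnitude to an even/odd multiple of STEP."""
    x = signal.astype(float).copy()
    for i, bit in enumerate(bits):
        X = np.fft.rfft(x[i * FRAME:(i + 1) * FRAME])
        q = 2 * np.round(np.abs(X[BIN]) / (2 * STEP)) + bit    # parity carries the bit
        X[BIN] = q * STEP * np.exp(1j * np.angle(X[BIN]))      # keep the phase
        x[i * FRAME:(i + 1) * FRAME] = np.fft.irfft(X, FRAME)
    return x

def extract(signal, n_bits):
    """Read the parity of the quantised magnitude back out."""
    return [int(np.round(np.abs(np.fft.rfft(
        signal[i * FRAME:(i + 1) * FRAME])[BIN]) / STEP)) % 2
        for i in range(n_bits)]

rng = np.random.default_rng(2)
audio = rng.normal(scale=0.1, size=FRAME * 8)
mark = [1, 0, 1, 1, 0, 0, 1, 0]                # in the thesis, bits come from MFCCs
print(extract(embed(audio, mark), 8) == mark)  # True
```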
Efficient phosphorus application strategies for increased crop production in sub-Saharan West Africa
Abstract:
Comparable data from the range of environments found in sub-Saharan West Africa are lacking for drawing more general conclusions about the relative merits of locally available rock phosphate (RockP) in alleviating phosphorus (P) constraints to crop growth. To fill this gap, a multi-factorial field experiment was conducted over 4 years at eight locations in Niger, Burkina Faso and Togo, which ranged in annual rainfall from 510 to 1300 mm. The crops grown were pearl millet (Pennisetum glaucum L.), sorghum (Sorghum bicolor (L.) Moench) and maize (Zea mays L.), either continuously or in rotation with cowpea (Vigna unguiculata Walp.) and groundnut (Arachis hypogaea L.). Crops were subjected to six P fertiliser treatments comprising RockP and soluble P at different rates, combined with 0 and 60 kg N ha^-1. For legumes, time-trend analyses showed P-induced total dry matter (TDM) increases between 28 and 72% only with groundnut. Similarly, rotation-induced increases in cereal TDM compared to cereal monoculture were observed only with groundnut. For cereals, at the same rate of application, RockP was comparable to single superphosphate (SSP) only at two millet sites with topsoil pH-KCl <4.2 and average annual rainfall >600 mm. Across the eight sites, NPK placement at 0.4 g P per hill raised average cereal yields by between 26 and 220%. This was confirmed in 119 on-farm trials, revealing P placement as a promising strategy to overcome P deficiency, the regionally most growth-limiting nutrient constraint for cereals.
Abstract:
Land use is a crucial link between human activities and the natural environment and one of the main driving forces of global environmental change. Large parts of the terrestrial land surface are used for agriculture, forestry, settlements and infrastructure. Given the importance of land use, it is essential to understand the multitude of influential factors and the resulting land-use patterns. Land-use models provide an essential methodology for studying and quantifying such interactions; their application makes it possible to analyze the complex structure of linkages and feedbacks and to determine the relevance of individual driving forces. Modeling land use and land-use change has a long tradition; on the regional scale in particular, a variety of models for different regions and research questions has been created. Modeling capabilities grow with steady advances in computer technology, driven on the one hand by increasing computing power and on the other by new methods in software development, e.g. object- and component-oriented architectures. In this thesis, SITE (Simulation of Terrestrial Environments), a novel framework for integrated regional land-use modeling, is introduced and discussed. Particular features of SITE are its notably extended capability to integrate models and its strict separation of application and implementation, which together enable the efficient development, testing and use of integrated land-use models. On the system side, SITE provides generic data structures (grids, grid cells, attributes etc.) and manages them. By means of a scripting language (Python) that has been extended with language features specific to land-use modeling, these data structures can be used and manipulated by modeling applications; the scripting-language interpreter is embedded in SITE. Sub-models can be integrated via the scripting language or through a generic interface provided by SITE. Furthermore, functionalities important for land-use modeling, such as model calibration, model tests and analysis support for simulation results, have been integrated into the generic framework. During the implementation of SITE, particular emphasis was placed on expandability, maintainability and usability. Along with the modeling framework, a land-use model for analyzing the stability of tropical rainforest margins was developed in the context of the collaborative research project STORMA (SFB 552). In a research area in Central Sulawesi, Indonesia, socio-environmental impacts of land-use changes were examined. SITE was used to simulate land-use dynamics over the historical period from 1981 to 2002; analogously, a scenario that did not consider migration in the population dynamics was analyzed. For the calculation of crop yields and trace-gas emissions, the DAYCENT agro-ecosystem model was integrated. This case study showed that land-use changes in the Indonesian research area were mainly characterized by the expansion of agricultural areas at the expense of natural forest. For this reason, the situation had to be interpreted as unsustainable, even though the increased agricultural use implied economic improvements and higher incomes for farmers. Due to the importance of model calibration, it was explicitly addressed in the SITE architecture through the introduction of a specific component.
The calibration functionality can be used by all SITE applications and enables largely automated model calibration. Calibration in SITE is understood as a process that finds an optimal, or at least adequate, solution for a set of arbitrarily selectable model parameters with respect to an objective function. In SITE, an objective function is typically a map-comparison algorithm capable of comparing a simulation result to a reference map; several map-optimization and map-comparison methodologies are available and can be combined. The STORMA land-use model was calibrated using a genetic algorithm for optimization and the figure-of-merit map-comparison measure as the objective function. The calibration covered the period from 1981 to 2002, for which the respective reference land-use maps were compiled. It could be shown that an efficient, automated model calibration with SITE is possible; nevertheless, the selection of the calibration parameters required detailed knowledge of the underlying land-use model and cannot be automated. In another case study, decreases in crop yields and the resulting losses in income from coffee cultivation were analyzed and quantified under the assumption of four different deforestation scenarios. For this task, an empirical model describing the dependence of bee pollination, and the resulting coffee fruit set, on the distance to the closest natural forest was integrated. Land-use simulations showed that, depending on the magnitude and location of ongoing forest conversion, pollination services are expected to decline continuously. This results in a reduction of coffee yields of up to 18% and a loss of net revenues per hectare of up to 14%. However, the study also showed that ecological and economic values can be preserved if patches of natural vegetation are conserved in the agricultural landscape.
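A compact sketch of the figure-of-merit map-comparison measure used as the calibration objective: hits over hits plus misses, false alarms, and wrongly predicted change classes, computed on cells that change relative to the initial map. The integer-grid layout of the land-use maps is an assumption.

```python
import numpy as np

def figure_of_merit(initial, reference, simulated):
    """Figure of merit for land-change maps (integer class grids)."""
    obs = reference != initial          # observed change
    sim = simulated != initial          # simulated change
    hits = np.sum(obs & sim & (reference == simulated))
    misses = np.sum(obs & ~sim)
    false_alarms = np.sum(sim & ~obs)
    wrong_class = np.sum(obs & sim & (reference != simulated))
    return hits / float(hits + misses + false_alarms + wrong_class)

init = np.zeros((4, 4), int)
ref = init.copy(); ref[0, :2] = 1       # two cells really changed
sim = init.copy(); sim[0, :3] = 1       # simulation changed three cells
print(figure_of_merit(init, ref, sim))  # 2 hits, 1 false alarm -> 2/3
```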
Abstract:
Distributed systems are one of the most vital components of the economy; the most prominent example is probably the Internet, a constituent element of our knowledge society. In recent years, the number of novel network types has steadily increased. Amongst others, sensor networks, distributed systems composed of tiny computational devices with scarce resources, have emerged. The further development and heterogeneous interconnection of such systems imposes new requirements on the software development process. Mobile and wireless networks, for instance, have to organize themselves autonomously and must be able to react both to changes in the environment and to failing nodes. Researching new approaches to the design of distributed algorithms may lead to methods with which these requirements can be met efficiently. In this thesis, one such method is developed, tested, and discussed with respect to its practical utility. Our new design approach for distributed algorithms is based on Genetic Programming, a member of the family of evolutionary algorithms. Evolutionary algorithms are metaheuristic optimization methods that copy principles from natural evolution: they use a population of solution candidates which they try to refine step by step in order to attain optimal values for predefined objective functions. The synthesis of an algorithm with our approach starts with an analysis step in which the desired global behavior of the distributed system is specified. From this specification, objective functions are derived which steer a Genetic Programming process whose solution candidates are distributed programs. The objective functions rate how closely these programs approximate the goal behavior in multiple randomized network simulations; the evolutionary process step by step selects the most promising solution candidates and modifies and combines them with mutation and crossover operators. This way, a description of the global behavior of a distributed system is translated automatically into programs which, if executed locally on the nodes of the system, exhibit this behavior. In our work, we test six different ways of representing distributed programs, comprising adaptations and extensions of well-known Genetic Programming methods (SGP, eSGP, and LGP), one bio-inspired approach (Fraglets), and two new program representations designed by us, called Rule-based Genetic Programming (RBGP, eRBGP). We breed programs in these representations for three well-known example problems in distributed systems: election algorithms, distributed mutual exclusion at a critical section, and the distributed computation of the greatest common divisor of a set of numbers. Synthesizing distributed programs the evolutionary way does not necessarily lead to the envisaged results; in a detailed analysis, we discuss the problematic features which make this form of Genetic Programming particularly hard. The two Rule-based Genetic Programming approaches were developed especially to mitigate these difficulties. In our experiments, at least one of them (eRBGP) turned out to be a very efficient approach and was, in most cases, superior to the other representations.
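A schematic sketch of the evaluation loop described above, with all names and the toy "exactly one leader" objective chosen purely for illustration: a candidate local rule is executed on every node of several randomized simulations and scored against the specified global behavior.

```python
import random

def simulate(program, n_nodes, steps=50, seed=0):
    """Run a candidate local rule on every node of one randomized network."""
    rng = random.Random(seed)
    state = [{"id": rng.random(), "leader": False} for _ in range(n_nodes)]
    for _ in range(steps):
        for node in state:
            neighbour = rng.choice(state)     # random gossip partner
            program(node, neighbour)          # the local rule under evolution
    return state

def fitness(program, trials=5):
    """Average over randomized networks; 0 is perfect (one leader per run)."""
    err = 0
    for t in range(trials):
        state = simulate(program, n_nodes=10, seed=t)
        err += abs(sum(n["leader"] for n in state) - 1)
    return err / trials

def candidate(node, neighbour):                       # hand-written stand-in for
    node["leader"] = node["id"] >= neighbour["id"]    # an evolved rule program

print(fitness(candidate))
```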
Abstract:
q-analysis is a special discretization of analysis on a lattice that forms a geometric progression; it finds broad application in quantum physics in particular, but is also of great importance in the theory of q-orthogonal polynomials and special functions. The mathematical objects from the q-world under consideration usually have a rather complicated structure, so it is natural to treat them with computer algebra systems. This dissertation presents algorithms for q-holonomic functions and q-hypergeometric series. All algorithms are implemented in the Maple package qFPS, which is an integral part of this work. After the foundations are laid in the first two chapters, the third chapter presents algorithms that, given a q-holonomic function, construct q-holonomic recurrence equations from knowledge of its q-shifts; operations with q-holonomic recurrences are treated as well. The fourth chapter describes efficient methods for finding polynomial, rational and q-hypergeometric solutions of q-holonomic recurrences. The fifth chapter deals with q-hypergeometric power series with respect to special polynomial bases. We formulate a new algorithm that, given a q-holonomic recurrence equation of a q-hypergeometric series with a nontrivial expansion point, determines the corresponding q-holonomic recurrence equation for the coefficients. Furthermore, we give a new algorithm that, conversely, determines a q-holonomic recurrence equation of the series from a q-holonomic recurrence equation for the coefficients, and which is useful for establishing q-holonomic recurrences for certain generalized q-hypergeometric functions. With the formulation of the q-Taylor theorem, we finally have all the ingredients to obtain the main result of this work, the q-analogue of the FPS algorithm. Wolfram Koepf's FPS algorithm from 1992 determines, for a given holonomic function, the corresponding hypergeometric series. We extend the algorithm so that even linear combinations of q-hypergeometric power series can be determined.
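For reference, the standard notions the abstract relies on (textbook definitions, not results specific to the qFPS package): the q-derivative is

```latex
\[
  D_q f(x) = \frac{f(qx) - f(x)}{(q-1)\,x} \qquad (q \neq 1,\ x \neq 0),
\]
a sequence $(a_n)$ is \emph{$q$-holonomic} if it satisfies
\[
  \sum_{k=0}^{d} p_k\!\left(q, q^{\,n}\right) a_{n+k} = 0
\]
with polynomial coefficients $p_k$, and \emph{$q$-hypergeometric} if the ratio
$a_{n+1}/a_n$ is a rational function of $q^{\,n}$.
```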
Abstract:
In algebraic cryptanalysis, modern cryptosystems are represented as polynomial, nonlinear systems of equations. Solving such systems is NP-hard; there is thus no algorithm that solves an arbitrary nonlinear system in polynomial time. Nevertheless, the equation systems generated from modern cryptosystems carry a lot of structure: with suitable modeling they are quadratic and sparse, hence not arbitrary. Special algorithms exist that find a solution of such systems. One example is the ElimLin algorithm, which iteratively simplifies the system with the help of linear equations. Based on this algorithm, the dissertation presents a new solver for quadratic, sparse systems of equations and uses it to attack two symmetric cryptosystems. The techniques for modeling the ciphers are of decisive importance here, so new techniques are developed for representing cryptosystems. The idea for the model comes from cube attacks, which are particularly effective against stream ciphers. The thesis classifies different variants of these attacks and presents possible extensions. The resulting model, moreover, can be successfully extended to block ciphers and to other scenarios; for these extensions the model needs only minor modifications.
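A toy GF(2) model of the ElimLin idea, not the thesis' solver: polynomials are sets of monomials (frozensets of variable indices, working in the Boolean ring where x² = x), Gaussian elimination exposes linear polynomials in the linear span, and those are substituted into the rest; the two steps repeat until nothing changes.

```python
def leading(p):
    """Largest monomial: nonlinear ones are eliminated first."""
    return max(p, key=lambda m: (len(m), sorted(m)))

def gauss(polys):
    """Gaussian elimination over GF(2): addition is symmetric difference."""
    rows, done = [set(p) for p in polys if p], []
    while rows:
        piv = max(rows, key=lambda p: (len(leading(p)), sorted(leading(p))))
        rows.remove(piv)
        lm = leading(piv)
        rows = [r ^ piv if lm in r else r for r in rows]
        rows = [r for r in rows if r]
        done.append(piv)
    return done

def substitute(p, var, rhs):
    """Replace x_var by the linear polynomial rhs (Boolean ring: x^2 = x)."""
    out = set()
    for m in p:
        if var in m:
            for t in rhs:
                out ^= {frozenset((m - {var}) | t)}
        else:
            out ^= {m}
    return out

def elimlin(polys):
    """Learn linear equations x_i = rhs and substitute until a fixed point.
    (Detection of 1 = 0 contradictions is omitted in this sketch.)"""
    polys, learned = [set(p) for p in polys if p], []
    changed = True
    while changed:
        changed = False
        polys = gauss(polys)
        for r in polys:
            lm = leading(r)
            if len(lm) == 1:                  # linear leading term: x_i = rhs
                (i,), rhs = lm, r - {lm}
                learned.append((i, rhs))
                polys = [q for q in (substitute(p, i, rhs)
                                     for p in polys if p is not r) if q]
                changed = True
                break
    return learned, polys

x01, x0, x1, one = (frozenset(s) for s in [(0, 1), (0,), (1,), ()])
# x0*x1 + x1 = 0 and x0 + x1 + 1 = 0: learns x1 = x0 + 1, then x0 = 1
print(elimlin([{x01, x1}, {x0, x1, one}]))
```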
Abstract:
Web services from different partners can be combined into applications that realize a more complex business goal. Such applications, built as Web service compositions, define how interactions between Web services take place in order to implement the business logic. Web service compositions not only have to provide the desired functionality but also have to comply with certain Quality of Service (QoS) levels. Maximizing the users' satisfaction, also reflected as Quality of Experience (QoE), is a primary goal to be achieved in a Service-Oriented Architecture (SOA). Unfortunately, in a dynamic environment like SOA, unforeseen situations might arise, such as services being unavailable or not responding within the desired time frame. In such situations, appropriate actions need to be triggered in order to avoid the violation of QoS and QoE constraints. In this thesis, proper solutions are developed to manage Web services and Web service compositions with regard to QoS and QoE requirements. The Business Process Rules Language (BPRules) was developed to manage Web service compositions when undesired QoS or QoE values are detected. BPRules provides a rich set of management actions that may be triggered to control the service composition and to improve its quality behavior. Regarding the quality properties, BPRules distinguishes between the QoS values as promised by the service providers, the QoE values assigned by end-users, the monitored QoS as measured by our BPR framework, and the predicted QoS and QoE values. BPRules facilitates the specification of certain user groups characterized by different context properties and allows triggering a personalized, context-aware service selection tailored to the specified user groups. In a service market where a multitude of services with the same functionality but different quality values are available, the right services need to be selected for realizing the service composition. We developed new and efficient heuristic algorithms that are applied to choose high-quality services for the composition; BPRules offers the possibility to integrate multiple service-selection algorithms. The selection algorithms are also applicable to non-linear objective functions and constraints. The BPR framework includes new approaches for context-aware service selection and quality-property prediction: we consider the location of users and services as a context dimension for the prediction of response time and throughput. The BPR framework combines all new features and contributions into a comprehensive management solution. Furthermore, it facilitates flexible monitoring of QoS properties without having to modify the description of the service composition. We show how the different modules of the BPR framework work together in order to execute the management rules, and we evaluate how our selection algorithms outperform a genetic algorithm from related research. The evaluation also reveals how context data can be used for a personalized prediction of response time and throughput.
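An illustrative sketch of heuristic service selection; the candidate model (name, response time, cost) and the greedy cheapening pass under an end-to-end response-time budget are assumptions for illustration, not the BPR framework's algorithms:

```python
def select_services(tasks, max_total_rt):
    """tasks: list of candidate lists; candidate = (name, response_time, cost)."""
    # Start from the fastest candidate of every task, then relax to cheaper
    # candidates while the end-to-end response-time budget still holds.
    chosen = [min(c, key=lambda s: s[1]) for c in tasks]
    total_rt = sum(s[1] for s in chosen)
    if total_rt > max_total_rt:
        return None                                    # no feasible composition
    for i, cands in enumerate(tasks):
        for s in sorted(cands, key=lambda s: s[2]):    # cheapest first
            new_total = total_rt - chosen[i][1] + s[1]
            if new_total <= max_total_rt and s[2] < chosen[i][2]:
                total_rt, chosen[i] = new_total, s
                break
    return chosen

tasks = [[("A1", 120, 9), ("A2", 300, 4)],
         [("B1", 200, 7), ("B2", 80, 12)]]
print(select_services(tasks, max_total_rt=400))
# -> [('A2', 300, 4), ('B2', 80, 12)]: total response time 380, total cost 16
```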
Abstract:
Since no physical system can ever be completely isolated from its environment, the study of open quantum systems is pivotal to reliably and accurately controlling complex quantum systems. In practice, the reliability of the control field needs to be confirmed via certification of the target evolution, while accuracy requires the derivation of high-fidelity control schemes in the presence of decoherence. In the first part of this thesis, an algebraic framework is presented that allows one to determine the minimal requirements for the unique characterization of arbitrary unitary gates in open quantum systems, independent of the particular physical implementation of the employed quantum device. To this end, a set of theorems is devised that can be used to assess whether a given set of input states on a quantum channel is sufficient to judge whether a desired unitary gate is realized. This makes it possible to determine the minimal input for such a task, which proves to be, quite remarkably, independent of system size. These results elucidate the fundamental limits regarding certification and tomography of open quantum systems. Combining these insights with state-of-the-art Monte Carlo process-certification techniques permits a significant improvement in scaling when certifying arbitrary unitary gates. This improvement is not restricted to quantum information devices where the basic information carrier is the qubit; it also extends to systems where the fundamental informational entities can be of arbitrary dimensionality, the so-called qudits. The second part of this thesis concerns the impact of these findings from the point of view of Optimal Control Theory (OCT). OCT for quantum systems utilizes concepts from engineering, such as feedback and optimization, to engineer constructive and destructive interferences in order to steer a physical process in a desired direction. It turns out that the aforementioned mathematical findings allow one to derive novel optimization functionals that significantly reduce not only the memory required by numerical control algorithms but also the total CPU time required to obtain a given fidelity for the optimized process. The thesis concludes by discussing two problems of fundamental interest in quantum information processing from the point of view of optimal control: the preparation of pure states and the implementation of unitary gates in open quantum systems. For both cases, specific physical examples are considered: for the former, the vibrational cooling of molecules via optical pumping; for the latter, a superconducting phase-qudit implementation. In particular, it is illustrated how features of the environment can be exploited to reach the desired targets.
Abstract:
It is well known that two given systems of special functions can be identified by specifying a recurrence equation together with a corresponding number of initial values, since from a computer algebra point of view this constitutes a normal form. This raises the interesting research question of identifying function families that are given via their Rodrigues formula. Using the Zeilberger algorithm for holonomic function families, discovered in the 1990s, the Rodrigues formula can be converted algorithmically into a recurrence equation; if the function family is moreover hypergeometric, this can even be done in a run-time-efficient manner. To apply the Zeilberger algorithm at all, the Rodrigues formula must first be converted into a sum. The present work describes this conversion of a Rodrigues formula into the stated normal form completely for the continuous, the discrete, and the q-discrete case. The procedure given in Almkvist and Zeilberger (1990) for the continuous case, where the n-th derivative appearing in the Rodrigues formula is converted into a complex contour integral via Cauchy's integral formula, takes the following form in the discrete case: the n-th power of the forward difference operator is converted into a sum. Generating the recurrence equation from this sum is then straightforward with the discrete Zeilberger algorithm. For the q-case, it is shown how recurrence equations can be obtained from four different q-Rodrigues formulas, where first the n-th power of the respective q-operator is converted into a sum. Three of the four summation formulas were previously unknown; they were found experimentally and proved by induction. The q-Zeilberger algorithm then produces the desired recurrence equation from these sums. In practice, it is advisable to apply the fast Zeilberger algorithm, which outputs recurrence equations for certain sums over hypergeometric terms. The considerations were implemented in Maple based on this version of the algorithm. It is therefore natural that all procedures presented here, which generate recurrence equations from continuous, discrete, and q-discrete Rodrigues formulas, are tested completely on the hypergeometric function families of the classical orthogonal polynomials, the classical discrete orthogonal polynomials, and the q-Hahn class of the Askey-Wilson scheme; the test results are given in tabular form. An important research result is that, with the procedure implemented for the q-case, it could be proved that the Rodrigues formula for the Stieltjes-Wigert polynomials given in the standard reference Koekoek/Lesky/Swarttouw (2010) is not correct; the correct Rodrigues formula was found experimentally and proved with the methods provided. It should be emphasized that, analogously, differential and difference equations, instead of recurrence equations, were also generated for identification. As mentioned, a normal form for a holonomic function family also requires the specification of initial values. For the continuous case, extensive initial-value computations, never before presented in this form in the literature, were carried out.
In the discrete case, the Petkovsek-van-Hoeij algorithm had to be employed for the initial-value computation for the difference equation, in order to determine the hypergeometric solutions of the resulting recurrence equations. The work begins by presenting the fast Zeilberger algorithm in its continuous, discrete, and q-discrete variants, which forms the foundation for the subsequent considerations; due attention is paid to the differences between the q-Zeilberger algorithm and the discrete Zeilberger algorithm. For the practical implementation, reference is made to the Zeilberger implementations in Maple from Koepf (1998/2014). Most of the implemented procedures are documented in the text. Thus a complete package of algorithms is provided with which, for example, collections of formulas for hypergeometric function families whose Rodrigues formulas are known can be verified. At the same time, the describing recurrence equation can in future be generated for hypergeometric function classes not yet investigated, provided their Rodrigues formula is known.
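A concrete instance of the normal-form idea discussed above, using classical facts about the Legendre polynomials: the Rodrigues formula, and the recurrence equation plus initial values that identify the same family.

```latex
\[
  P_n(x) \;=\; \frac{1}{2^n\, n!}\,\frac{d^n}{dx^n}\!\left[(x^2 - 1)^n\right]
\]
\[
  (n+1)\,P_{n+1}(x) \;=\; (2n+1)\,x\,P_n(x) \;-\; n\,P_{n-1}(x),
  \qquad P_0(x) = 1,\quad P_1(x) = x.
\]
```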
Abstract:
We develop an algorithm that computes the gravitational potentials and forces on N point masses interacting in three-dimensional space. The algorithm, based on analytical techniques developed by Rokhlin and Greengard, runs in order N time. In contrast to other fast N-body methods such as tree codes, which only approximate the interaction potentials and forces, this method is exact: it computes the potentials and forces to within any prespecified tolerance, up to machine precision. We present an implementation of the algorithm for a sequential machine. We numerically verify the algorithm and compare its speed with that of an O(N^2) direct force computation. We also describe a parallel version of the algorithm that runs on the Connection Machine in order O(log N) time. We compare experimental results with those of the sequential implementation and discuss how to minimize communication overhead on the parallel machine.
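A direct O(N^2) force computation of the kind used as the speed baseline: a plain pairwise-Newtonian sketch, not the paper's multipole algorithm, which obtains the same forces in O(N) via analytic expansions.

```python
import numpy as np

def direct_forces(pos, mass, G=1.0, eps=0.0):
    """Pairwise Newtonian forces on N point masses in 3D; eps softens r -> 0."""
    n = pos.shape[0]
    F = np.zeros_like(pos)
    for i in range(n):
        d = pos - pos[i]                          # vectors to all other bodies
        r2 = np.einsum("ij,ij->i", d, d) + eps ** 2
        r2[i] = 1.0                               # avoid division by zero at j = i
        w = G * mass[i] * mass / r2 ** 1.5
        w[i] = 0.0                                # no self-interaction
        F[i] = (w[:, None] * d).sum(axis=0)       # F_i = G m_i sum_j m_j d_ij / |d_ij|^3
    return F

rng = np.random.default_rng(3)
pos, mass = rng.normal(size=(100, 3)), rng.uniform(0.5, 1.5, size=100)
F = direct_forces(pos, mass)
print(np.abs(F.sum(axis=0)).max())    # ~0: internal forces cancel (Newton's third law)
```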