969 results for Computer generated works


Relevance: 30.00%

Publisher:

Abstract:

Please consult the paper edition of this thesis. It is available on the 5th Floor of the Library at Call Number: Z 9999 E38 D56 1992.

Relevance: 30.00%

Publisher:

Abstract:

This thesis is dedicated to the computer-algebraic treatment of group symmetries and to the study of the symmetry properties of molecules and clusters. The Maple package Bethe, created to extract and manipulate group-theoretical data and to simplify symmetry applications, is introduced. First, the advantages of using Bethe to generate group-theoretical data are demonstrated. In the current version, the data of 72 frequently applied point groups are available, together with the data for all of the corresponding double groups. The emphasis of this work is placed on the applications of this package in the physics of molecules and clusters. Apart from the analysis of the spectral activity of molecules with point-group symmetry, it is demonstrated how Bethe can be used to understand field splitting in crystals or to construct the corresponding wave functions. Several examples are worked out to display some of the present features of the Bethe program. While not all details can be shown explicitly, these examples clearly demonstrate the great potential of computer-algebraic techniques for studying the symmetry properties of molecules and clusters. Special attention is paid in this thesis to the flexibility of the Bethe package, which makes it possible to implement further applications of symmetry. Such extensions are quite feasible, because some of the most complicated steps of possible future applications are already realized within Bethe. For instance, the vibrational coordinates in terms of internal displacement vectors for Wilson's method, the same coordinates in terms of Cartesian displacement vectors, and the Clebsch-Gordan coefficients for the Jahn-Teller problem are all generated in the present version of the program. For the Jahn-Teller problem, moreover, the use of a computer-algebraic tool seems even unavoidable, because this problem demands analytical access to the adiabatic potential and therefore cannot be handled by a numerical algorithm. The capabilities of the Bethe package are not exhausted by the applications mentioned in this thesis, however. There are various directions in which the Bethe program could be developed in the future: apart from (i) the study of the magnetic properties of materials and (ii) optical transitions, (iii) vibronic spectroscopy and many other topics are of interest. Implementing these applications in the package would make Bethe a much more powerful tool.
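
For context, a central step behind such symmetry applications is the reduction of a reducible representation into its irreducible components via the character orthogonality relation, n_i = (1/|G|) Σ_g χ(g) χ_i(g)*. The following Python sketch (not part of the Maple package Bethe) illustrates this step for the C2v point group, using the textbook character table and the 3N Cartesian representation of a C2v molecule such as H2O.

```python
# Minimal sketch: reduce a reducible representation into irreducibles using
# the character orthogonality relation
#     n_i = (1/|G|) * sum_g  N(g) * chi(g) * chi_i(g)
# (characters of C2v are real, so no complex conjugation is needed).
# Example: C2v (order 4, classes E, C2, sigma_v(xz), sigma_v'(yz)) and the
# 3N Cartesian representation of H2O lying in the xz plane.

char_table = {
    "A1": [1,  1,  1,  1],
    "A2": [1,  1, -1, -1],
    "B1": [1, -1,  1, -1],
    "B2": [1, -1, -1,  1],
}
class_sizes = [1, 1, 1, 1]
order = sum(class_sizes)

# Characters of the reducible 3N representation of H2O (N = 3 atoms)
gamma_3N = [9, -1, 3, 1]

def reduce_representation(gamma):
    """Return the multiplicity of each irreducible representation."""
    mult = {}
    for irrep, chars in char_table.items():
        n = sum(k * g * c for k, g, c in zip(class_sizes, gamma, chars)) / order
        mult[irrep] = round(n)
    return mult

print(reduce_representation(gamma_3N))
# -> {'A1': 3, 'A2': 1, 'B1': 3, 'B2': 2}, i.e. Gamma_3N = 3A1 + A2 + 3B1 + 2B2
```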

Relevance: 30.00%

Publisher:

Abstract:

It is well known that two given systems of special functions can be identified by specifying a recurrence equation and a corresponding number of initial values, since from the computer-algebra point of view this constitutes a normal form. This raises the interesting research question of identifying function systems that are given by their Rodrigues formula. Using Zeilberger's algorithm for holonomic function families, discovered in the 1990s, the Rodrigues formula can be converted algorithmically into a recurrence equation; if the function family is moreover hypergeometric, this can even be done in an efficient running time. In order to apply Zeilberger's algorithm at all, the Rodrigues formula must first be converted into a sum. The present work describes this conversion of a Rodrigues formula into the above normal form completely for the continuous, the discrete, and the q-discrete case. The procedure given in Almkvist and Zeilberger (1990) for the continuous case, where the n-th derivative appearing in the Rodrigues formula is converted into a complex contour integral via Cauchy's integral formula, takes the following form in the discrete case: the n-th power of the forward difference operator is converted into a sum. Generating the recurrence equation from this sum is then straightforward with the discrete Zeilberger algorithm. For the q-case it is shown how recurrence equations can be obtained from four different q-Rodrigues formulas, where first the n-th power of the respective q-operator is converted into a sum. Three of the four summation formulas were previously unknown; they were found experimentally and proved by induction. The q-Zeilberger algorithm then produces the desired recurrence equation from these sums. In practice it is advisable to apply the fast Zeilberger algorithm, which outputs recurrence equations for definite sums over hypergeometric terms. The present considerations were implemented in Maple based on this version of the algorithm. Accordingly, all procedures presented here, which generate recurrence equations from continuous, discrete, and q-discrete Rodrigues formulas, were tested exhaustively on the hypergeometric function families of the classical orthogonal polynomials, the classical discrete orthogonal polynomials, and the q-Hahn class of the Askey-Wilson scheme. The test results are given in tabular form. A significant research result is that, with the procedure implemented in the q-case for generating a recurrence equation from the Rodrigues formula, it could be proved that the Rodrigues formula for the Stieltjes-Wigert polynomials given in the standard reference Koekoek/Lesky/Swarttouw (2010) is not correct. The correct Rodrigues formula was found experimentally and proved with the methods provided. It should be emphasized that, analogously, differential and difference equations instead of recurrence equations were generated for identification purposes. As stated, a normal form for a holonomic function family includes the specification of initial values. For the continuous case, extensive initial-value computations, never before presented in this form in the literature, were carried out.
In the discrete case, the Petkovsek-van Hoeij algorithm had to be employed for the initial-value computation for the difference equation, in order to determine the hypergeometric solutions of the resulting recurrence equations. The work begins by presenting the fast Zeilberger algorithm in its continuous, discrete, and q-discrete variants, which forms the foundation of the subsequent considerations; the differences between the q-Zeilberger algorithm and the discrete Zeilberger algorithm are discussed in due detail. For the practical implementation, the Zeilberger implementations in Maple from Koepf (1998/2014) are used. Most of the implemented procedures are documented in the text. Thus a complete package of algorithms is provided with which, for example, collections of formulas for hypergeometric function families whose Rodrigues formulas are known can be verified. At the same time, for hypergeometric function classes not yet investigated, the describing recurrence equation can be generated in the future whenever the Rodrigues formula is known.
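
As an illustration of the normal form the thesis works with (recurrence equation plus initial values), the following sketch uses SymPy rather than the Maple implementation described above: it generates the Legendre polynomials from their Rodrigues formula, P_n(x) = (1/(2^n n!)) d^n/dx^n (x^2 - 1)^n, and verifies the three-term recurrence (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x) that, together with the initial values P_0 = 1 and P_1 = x, identifies the family.

```python
# Minimal sketch (continuous case): the Rodrigues formula of the Legendre
# polynomials and the three-term recurrence that serves as a normal form.
import sympy as sp

x = sp.Symbol('x')

def legendre_rodrigues(n):
    """P_n(x) via the Rodrigues formula."""
    return sp.diff((x**2 - 1)**n, x, n) / (2**n * sp.factorial(n))

# Check the recurrence equation for several n
for n in range(1, 6):
    lhs = (n + 1) * legendre_rodrigues(n + 1)
    rhs = (2*n + 1) * x * legendre_rodrigues(n) - n * legendre_rodrigues(n - 1)
    assert sp.simplify(lhs - rhs) == 0
print("Three-term recurrence verified for n = 1..5")
```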

Relevance: 30.00%

Publisher:

Abstract:

Tuesday 22nd April 2014
Speaker(s): Sue Sentance
Organiser: Leslie Carr
Time: 22/04/2014 15:00-16:00
Location: B32/3077
File size: 698 MB

Abstract
Until recently, "computing" education in English schools mainly focused on developing general digital literacy and Microsoft Office skills. As of this September, a new curriculum comes into effect that places a strong emphasis on computation and programming. This change has generated some controversy in the news media (4-year-olds being forced to learn coding! boss of the government's coding education initiative cannot code, shock horror!) and also some concern in the teaching profession (how can we possibly teach programming when none of the teachers knows how to program?). Dr Sue Sentance will explain the work of Computing At School, a part of the BCS Academy, in galvanising universities to help teachers learn programming and other computing skills. Come along and find out about the new English Computing Revolution:
- How will your children and your schools be affected?
- How will our University intake change? How will our degrees have to change?
- What is happening to the national perception of Computer Science?

Relevance: 30.00%

Publisher:

Abstract:

In order to gain a better understanding of online conceptual collaborative design processes, this paper investigates how student designers make use of a shared virtual synchronous environment when engaged in conceptual design. The software enables users to talk to each other and share sketches when they are remotely located. The paper describes a novel methodology for observing and analysing collaborative design processes by adapting the concepts of grounded theory. Rather than concentrating on narrow aspects of the final artefacts, the analysis generates emerging "themes" that provide a broader picture of the collaborative design process and its context. Findings on the themes of "grounding (mutual understanding)" and "support for creativity" complement findings from other research, while important themes associated with "near-synchrony" have not been emphasised in other research. From the study, a series of design recommendations is made for the development of tools to support online computer-supported collaborative work in design using a shared virtual environment.

Relevance: 30.00%

Publisher:

Abstract:

Long-distance dispersal (LDD) plays an important role in many population processes like colonization, range expansion, and epidemics. LDD of small particles like fungal spores is often a result of turbulent wind dispersal and is best described by functions with power-law behavior in the tails ("fat tailed"). The influence of fat-tailed LDD on population genetic structure is reported in this article. In computer simulations, the population structure generated by power-law dispersal with exponents in the range of -2 to -1, in distinct contrast to that generated by exponential dispersal, has a fractal structure. As the power-law exponent becomes smaller, the distribution of individual genotypes becomes more self-similar at different scales. Common statistics like G_ST are not well suited to summarizing differences between the population genetic structures. Instead, fractal and self-similarity statistics demonstrated differences in structure arising from fat-tailed and exponential dispersal. When dispersal is fat tailed, a log-log plot of the Simpson index against distance between subpopulations has an approximately constant gradient over a large range of spatial scales. The fractal dimension D_2 is linearly inversely related to the power-law exponent, with a slope of approximately -2. In a large simulation arena, fat-tailed LDD allows colonization of the entire space by all genotypes, whereas exponentially bounded dispersal eventually confines all descendants of a single clonal lineage to a relatively small area.
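
To make the contrast concrete, the following sketch samples dispersal distances from the two kernel types compared in the article: an exponentially bounded kernel and a power-law ("fat-tailed") kernel drawn by inverse-transform sampling. The exponent, minimum distance, and sample size are illustrative choices, not parameters from the paper.

```python
# Minimal sketch of the two dispersal kernels contrasted in the study.
import numpy as np

rng = np.random.default_rng(42)

def exponential_distances(n, mean=1.0):
    """Dispersal distances with an exponentially bounded tail."""
    return rng.exponential(scale=mean, size=n)

def powerlaw_distances(n, exponent=-1.5, d_min=0.1):
    """Dispersal distances with pdf ~ d**exponent for d >= d_min
    (requires exponent < -1 for the density to be normalizable)."""
    a = -exponent - 1.0                  # Pareto shape parameter
    u = rng.random(n)
    return d_min * (1.0 - u) ** (-1.0 / a)   # inverse-transform sampling

exp_d = exponential_distances(100_000)
pl_d = powerlaw_distances(100_000)
# The fat tail shows up in the extreme quantiles:
print("99.9% quantile, exponential:", np.quantile(exp_d, 0.999))
print("99.9% quantile, power law:  ", np.quantile(pl_d, 0.999))
```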

Relevance: 30.00%

Publisher:

Abstract:

We present a method of simulating both the avalanche and surge components of pyroclastic flows generated by lava collapsing from a growing Pelean dome. This is used to successfully model the pyroclastic flows generated on 12 May 1996 by the Soufriere Hills volcano, Montserrat. In simulating the avalanche component we use a simple 3-fold parameterisation of flow acceleration for which we choose values using an inverse method. The surge component is simulated by a 1D hydraulic balance of sedimentation of clasts and entrainment of air away from the avalanche source. We show how multiple simulations based on uncertainty of the starting conditions and parameters, specifically location and size (mass flux), could be used to map hazard zones.
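
The paper's three-fold parameterisation is not reproduced in the abstract, so as a generic illustration of how an avalanche component can be integrated along a slope, the following sketch uses a simple sliding-block model with basal friction, a common simplification in this literature; all parameter values are hypothetical.

```python
# Generic sliding-block illustration (not the authors' parameterisation):
# a block released onto a uniform slope decelerates when the basal friction
# coefficient mu exceeds tan(theta).  Explicit Euler integration of
#     dv/dt = g * (sin(theta) - mu * cos(theta)).
# Slope angle, friction, initial speed and time step are hypothetical.
import math

g = 9.81                       # gravity, m/s^2
theta = math.radians(20.0)     # slope angle (hypothetical)
mu = 0.45                      # friction; mu > tan(theta), so the block stops
dt = 0.1                       # time step, s

v, s, t = 10.0, 0.0, 0.0       # initial speed (m/s), runout (m), time (s)
while v > 0.0:
    a = g * (math.sin(theta) - mu * math.cos(theta))
    v += a * dt
    s += max(v, 0.0) * dt
    t += dt

print(f"runout distance {s:.1f} m reached after {t:.1f} s")
```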

Relevance: 30.00%

Publisher:

Abstract:

A brain-computer music interface (BCMI) is developed to allow continuous modification of the tempo of dynamically generated music. Six out of seven participants are able to control the BCMI with significant accuracy, and their performance is observed to increase over time.

Relevance: 30.00%

Publisher:

Abstract:

Texture is one of the most important visual attributes for image analysis and has been widely used in image analysis and pattern recognition. A partially self-avoiding deterministic walk has recently been proposed as an approach to texture analysis, with promising results. This approach uses walkers (called tourists) to exploit the gray-scale contexts of the image at several levels. Here, we present an approach to generating graphs from the trajectories produced by the tourist walks. The generated graphs embody important characteristics related to tourist transitivity in the image. Computed from these graphs, statistical measures of position (mean degree) and dispersion (entropy of two vertices with the same degree) are used as texture descriptors. A comparison with traditional texture analysis methods illustrates the high performance of this novel approach.
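
A minimal sketch of the trajectory generator is given below: a deterministic tourist walk on a grayscale image that always moves to the neighbouring pixel with the most similar gray level among those not visited in the last mu steps. The memory size, the random test image, and the fixed step budget (in place of attractor detection) are simplifying assumptions.

```python
# Minimal sketch of a partially self-avoiding deterministic ("tourist") walk
# on a grayscale image; consecutive positions in the returned path can serve
# as edges of the tourist graph described in the abstract.
from collections import deque
import numpy as np

def tourist_walk(img, start, mu=2, max_steps=50):
    """Walk to the 8-neighbour with the most similar gray level,
    avoiding pixels visited in the last mu steps."""
    h, w = img.shape
    memory = deque(maxlen=mu)
    pos = start
    path = [pos]
    for _ in range(max_steps):
        y, x = pos
        candidates = [
            (y + dy, x + dx)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0)
            and 0 <= y + dy < h and 0 <= x + dx < w
            and (y + dy, x + dx) not in memory
        ]
        if not candidates:
            break
        # deterministic rule: minimal absolute gray-level difference
        nxt = min(candidates, key=lambda p: abs(int(img[p]) - int(img[pos])))
        memory.append(pos)
        pos = nxt
        path.append(pos)
    return path

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
trajectory = tourist_walk(image, start=(8, 8), mu=2)
print(f"trajectory of {len(trajectory)} pixels")
```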

Relevance: 30.00%

Publisher:

Abstract:

Complex networks have been increasingly used in text analysis, including in connection with natural language processing tools, as important text features appear to be captured by the topology and dynamics of the networks. Following previous works that apply complex networks concepts to text quality measurement, summary evaluation, and author characterization, we now focus on machine translation (MT). In this paper we assess the possible representation of texts as complex networks to evaluate cross-linguistic issues inherent in manual and machine translation. We show that translations of different quality generated by MT tools can be distinguished from their manual counterparts by means of metrics such as in-degree (ID) and out-degree (OD), clustering coefficient (CC), and shortest paths (SP). For instance, we demonstrate that the average OD in networks of automatic translations consistently exceeds the values obtained for manual ones, and that the CC values of source texts are not preserved for manual translations, but are for good automatic translations. This probably reflects the text rearrangements humans perform during manual translation. We envisage that such findings could lead to better MT tools and automatic evaluation metrics.
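
As a sketch of the representation involved, the following code builds directed word-adjacency networks (consecutive words linked) for two toy texts and computes the average out-degree and clustering coefficient mentioned above; the sentences are placeholders, not data from the paper.

```python
# Minimal sketch of the text-as-network representation: a directed
# word-adjacency graph, from which degree and clustering statistics follow.
import networkx as nx

def adjacency_network(text):
    """Build a directed word-adjacency network from running text."""
    words = text.lower().split()
    g = nx.DiGraph()
    g.add_edges_from(zip(words, words[1:]))
    return g

source = "the cat sat on the mat and the cat slept"
translation = "the cat sat on the mat and slept on the mat"

for label, text in [("source", source), ("translation", translation)]:
    g = adjacency_network(text)
    avg_out = sum(d for _, d in g.out_degree()) / g.number_of_nodes()
    cc = nx.average_clustering(g.to_undirected())
    print(f"{label}: avg out-degree = {avg_out:.2f}, clustering = {cc:.2f}")
```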

Relevance: 30.00%

Publisher:

Abstract:

Nursing school graduates are under pressure to pass the RN-NCLEX Exam on the first attempt, since New York State monitors the results and uses them to evaluate the schools' nursing programs. Since the RN-NCLEX Exam is a standardized test, we sought a method to make our students better test takers. The use of online computer-adaptive testing has raised our students' standardized test scores at the end of the nursing course.

Relevance: 30.00%

Publisher:

Abstract:

The rapid development of data transfer through the internet has made it easier to send data accurately and quickly to a destination. There are many transmission media for transferring data, such as e-mail; at the same time, it may be easy to modify and misuse valuable information through hacking. In order to transfer data securely to the destination without any modification, there are approaches such as cryptography and steganography. This paper deals with image steganography as well as with different security issues, and gives a general overview of cryptography, steganography, and digital watermarking approaches. The problem of copyright violation of multimedia data has increased due to the enormous growth of computer networks, which provide fast and error-free transmission of any unauthorized duplicate and possibly manipulated copy of multimedia information. In order to be effective for copyright protection, a digital watermark must be robust, i.e., difficult to remove from the object in which it is embedded, despite a variety of possible attacks. To send the message safely and securely, we use watermarking: invisible watermarking embeds the message using the LSB (least significant bit) steganographic technique. The standard LSB technique embeds the message in every pixel; our contribution in the proposed watermarking scheme is to embed the message, guided by a hint, only in the image edges. Even if an attacker knows that the system uses the LSB technique, the correct message cannot be recovered. To make the system robust and secure, we add a cryptographic algorithm, the Vigenère square, so that the message is transmitted as ciphertext, an added advantage of the proposed system. The standard Vigenère square algorithm works with either lower-case or upper-case letters only; the proposed cryptographic algorithm extends the Vigenère square with numbers, so that the key can combine characters and numbers. With these modifications to the existing algorithm, and by combining cryptography and steganography, we develop a secure and strong watermarking method. The performance of this watermarking scheme has been analyzed by evaluating the robustness of the algorithm with the PSNR (peak signal-to-noise ratio) and MSE (mean square error) against the quality of the image for a large amount of data. The proposed encryption achieves a high PSNR of 89 dB with a small MSE of 0.0017. The proposed watermarking system therefore appears secure and robust for hiding information in any digital system, because it combines the properties of both steganography and cryptography.
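
A minimal sketch of the two building blocks is given below: a Vigenère cipher extended from letters to the 36-symbol alphabet A-Z plus 0-9, and LSB embedding of the ciphertext bits into image pixels. For brevity the bits are written into the first pixels in row order; the proposed scheme instead restricts embedding to edge pixels, which would additionally require an edge detector to build the pixel mask.

```python
# Minimal sketch: Vigenere cipher over A-Z0-9 plus LSB embedding of the
# ciphertext bits.  Edge-restricted embedding (as in the proposed scheme)
# is omitted; bits go into the first pixels in row order.
import numpy as np

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

def vigenere36(text, key, decrypt=False):
    """Vigenere over A-Z0-9; the key may mix letters and digits."""
    sign = -1 if decrypt else 1
    out = []
    for i, ch in enumerate(text):
        p = ALPHABET.index(ch)
        k = ALPHABET.index(key[i % len(key)])
        out.append(ALPHABET[(p + sign * k) % len(ALPHABET)])
    return "".join(out)

def embed_lsb(img, bits):
    """Write one bit into the LSB of each of the first len(bits) pixels."""
    flat = img.flatten()                 # copy of the cover image
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b
    return flat.reshape(img.shape)

def extract_lsb(img, nbits):
    return [int(p) & 1 for p in img.flatten()[:nbits]]

message = vigenere36("MEET AT 9".replace(" ", ""), key="K3Y")
bits = [int(b) for ch in message for b in format(ord(ch), "08b")]

cover = np.random.default_rng(1).integers(0, 256, (32, 32), dtype=np.uint8)
stego = embed_lsb(cover, bits)

# Round trip: extract the bits, regroup into bytes, decrypt
recovered = "".join(chr(int("".join(map(str, byte)), 2))
                    for byte in zip(*[iter(extract_lsb(stego, len(bits)))]*8))
print(vigenere36(recovered, key="K3Y", decrypt=True))  # -> MEETAT9
```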

Relevance: 30.00%

Publisher:

Abstract:

An unfolding method for linear intercept distributions and section area distributions was implemented for structures with spherical grains. Although the unfolding routine depends on the grain shape, structures with spheroidal grains can also be treated by this routine; grains of non-spheroidal shape can be treated only as an approximation. A software tool was developed in two parts: the first part calculates the probability matrix, and the second part uses this matrix and minimizes the chi-square. The results are presented with any number of size classes, as required. The probability matrix was determined by means of linear intercept and section area distributions created by computer simulation; using curve fitting, the probability matrix for spheres of any size could be determined. Two kinds of tests were carried out to prove the efficiency of the technique. The theoretical tests represent ideal cases, and the software was able to recover the proposed grain size distribution exactly. In the second test, a structure was simulated in the computer and images of its slices were used to produce the corresponding linear intercept and section area distributions, which were then unfolded; this test is a closer simulation of reality. The results show deviations from the real size distribution, caused by statistical fluctuation. The unfolding of the linear intercept distribution works perfectly, but the unfolding of the section area distribution does not, due to a failure in the chi-square minimization: the minimization method uses a matrix inversion routine, and the matrix generated by this procedure cannot be inverted. Another minimization method must be used.
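
The unfolding step itself is a linear inverse problem: the measured intercept (or section area) histogram m is related to the true size histogram t by the probability matrix P, m ≈ P·t. The sketch below, on synthetic data, shows how such a system can be solved with non-negative least squares, one candidate for the alternative minimization method the abstract calls for, since it avoids inverting an ill-conditioned matrix.

```python
# Minimal sketch of the unfolding step as a linear inverse problem m ≈ P t.
# P and t below are synthetic placeholders; non-negative least squares
# (NNLS) avoids the explicit matrix inversion that fails in the abstract's
# chi-square minimization.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(7)

n_classes = 8
# Synthetic lower-triangular probability matrix: a grain of size class j
# can only produce intercepts/sections in classes i <= j.
P = np.tril(rng.random((n_classes, n_classes)))
P /= P.sum(axis=0)                      # each column sums to 1

t_true = rng.random(n_classes)          # "true" grain size histogram
m = P @ t_true                          # ideal measured histogram
m_noisy = m + rng.normal(0, 1e-3, n_classes)  # statistical fluctuation

t_est, residual = nnls(P, m_noisy)      # constrained fit, no inversion
print("true:     ", np.round(t_true, 3))
print("unfolded: ", np.round(t_est, 3))
print("residual: ", residual)
```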