880 results for Assembler language (Computer program language)


Relevance: 50.00%

Abstract:

A new language concept for high-level distributed programming is proposed. Programs are organised as a collection of concurrently executing processes. Some of these processes, referred to as liaison processes, have a monitor-like structure and contain ports which may be invoked by other processes for the purposes of synchronisation and communication. Synchronisation is achieved by conditional activation of ports and also through port control constructs which may directly specify the execution ordering of ports. These constructs implement a path-expression-like mechanism for synchronisation and are also equipped with options to provide conditional, non-deterministic and priority ordering of ports. The usefulness and expressive power of the proposed concepts are illustrated through solutions of several representative programming problems. Some implementation issues are also considered.
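
The proposed language itself is not implemented here; purely as a conceptual sketch, the following Python fragment (class and port names are invented) mimics a liaison process whose two ports are conditionally activated and forced into an alternating execution order, in the spirit of the path-expression-like constructs described above.

```python
import threading

class BufferLiaison:
    """Toy liaison process: a one-slot buffer whose two ports, put and get,
    may only execute in strict alternation (put; get)*."""

    def __init__(self):
        self._cond = threading.Condition()
        self._slot = None
        self._full = False            # state used for conditional port activation

    def put(self, item):              # port: may proceed only while the slot is empty
        with self._cond:
            while self._full:
                self._cond.wait()
            self._slot, self._full = item, True
            self._cond.notify_all()   # may enable the other port

    def get(self):                    # port: may proceed only while the slot is full
        with self._cond:
            while not self._full:
                self._cond.wait()
            self._full = False
            self._cond.notify_all()
            return self._slot

# Client processes invoke the ports concurrently for communication and synchronisation.
buf = BufferLiaison()
threading.Thread(target=buf.put, args=(42,)).start()
print(buf.get())                      # prints 42
```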

Relevance: 50.00%

Abstract:

The method of structured programming or program development using a top-down, stepwise refinement technique provides a systematic approach for the development of programs of considerable complexity. The aim of this paper is to present the philosophy of structured programming through a case study of a nonnumeric programming task. The problem of converting a well-formed formula in first-order logic into prenex normal form is considered. The program has been coded in the programming language PASCAL and implemented on a DEC-10 system. The program has about 500 lines of code and comprises 11 procedures.
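
The 500-line Pascal program itself is not shown in the abstract; purely as an illustration of the transformation it performs, here is a minimal Python sketch (all class names are invented) that pulls quantifiers to the front of a formula. It assumes negations have already been pushed inwards and bound variables have been renamed apart, two refinement steps the full program must also handle.

```python
from dataclasses import dataclass

@dataclass
class Forall: var: str; body: object
@dataclass
class Exists: var: str; body: object
@dataclass
class And: left: object; right: object
@dataclass
class Or: left: object; right: object
# Atomic formulas are represented as plain strings, e.g. "P(x)".

def prenex(f):
    """Return (prefix, matrix): a list of quantifiers and a quantifier-free core."""
    if isinstance(f, (Forall, Exists)):
        prefix, matrix = prenex(f.body)
        return [(type(f), f.var)] + prefix, matrix
    if isinstance(f, (And, Or)):
        lp, lm = prenex(f.left)
        rp, rm = prenex(f.right)
        return lp + rp, type(f)(lm, rm)
    return [], f                              # atomic formula

def rebuild(prefix, matrix):
    for quant, var in reversed(prefix):       # innermost quantifier is applied first
        matrix = quant(var, matrix)
    return matrix

# forall x (P(x) and exists y Q(y))  ==>  forall x exists y (P(x) and Q(y))
wff = Forall("x", And("P(x)", Exists("y", "Q(y)")))
print(rebuild(*prenex(wff)))
```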

Relevance: 50.00%

Abstract:

Accurate supersymmetric spectra are required to confront data from direct and indirect searches for supersymmetry. SuSeFLAV is a numerical tool capable of computing supersymmetric spectra precisely for various supersymmetry-breaking scenarios, applicable even in the presence of flavor violation. The program solves the MSSM RGEs with complete 3 x 3 flavor mixing at the 2-loop level, includes one-loop finite threshold corrections to all MSSM parameters, and incorporates the radiative electroweak symmetry breaking conditions. The program also incorporates the Type-I seesaw mechanism with three massive right-handed neutrinos at user-defined mass scales and mixing. It also computes branching ratios of flavor-violating processes such as l_j -> l_i gamma, l_j -> 3 l_i, b -> s gamma, and supersymmetric contributions to flavor-conserving quantities such as (g_mu - 2). A large choice of executables suitable for the various operations of the program is provided.
Program summary
Program title: SuSeFLAV
Catalogue identifier: AEOD_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOD_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public License
No. of lines in distributed program, including test data, etc.: 76552
No. of bytes in distributed program, including test data, etc.: 582787
Distribution format: tar.gz
Programming language: Fortran 95
Computer: Personal Computer, Work-Station
Operating system: Linux, Unix
Classification: 11.6
Nature of problem: Determination of the masses and mixing of supersymmetric particles within the context of the MSSM with conserved R-parity, with and without the presence of the Type-I seesaw. Inter-generational mixing is considered while calculating the mass spectrum. Supersymmetry-breaking parameters are taken as inputs at a high scale specified by the mechanism of supersymmetry breaking. RG equations including full inter-generational mixing are then used to evolve these parameters down to the electroweak breaking scale. The low-energy supersymmetric spectrum is calculated at the scale where successful radiative electroweak symmetry breaking occurs. At the weak scale, standard model fermion masses and gauge couplings are determined including the supersymmetric radiative corrections. Once the spectrum is computed, the program proceeds to compute various lepton-flavor-violating observables (e.g., BR(mu -> e gamma), BR(tau -> mu gamma), etc.) at the weak scale.
Solution method: Two-loop RGEs with full 3 x 3 flavor mixing for all supersymmetry-breaking parameters are used to compute the low-energy supersymmetric mass spectrum. An adaptive step-size Runge-Kutta method is used to solve the RGEs numerically between the high scale and the electroweak breaking scale. An iterative procedure is employed to obtain a consistent radiative electroweak symmetry breaking condition. The masses of the supersymmetric particles are computed at 1-loop order. The third-generation SM particles and the gauge couplings are evaluated at 1-loop order including supersymmetric corrections. A further iteration of the full program is employed so that the SM masses and couplings are consistent with the supersymmetric particle spectrum.
Additional comments: Several executables are presented for the user.
Running time: 0.2 s on an Intel(R) Core(TM) i5 CPU 650 at 3.20 GHz.
(c) 2012 Elsevier B.V. All rights reserved.
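
SuSeFLAV itself is written in Fortran 95 and evolves the full MSSM parameter set; purely to illustrate the stated solution method (adaptive step-size Runge-Kutta integration between two scales), the Python sketch below integrates a toy one-loop running of a single gauge coupling, dg/dt = b g^3 / (16 pi^2) with t = ln Q, using SciPy's adaptive RK45. The beta coefficient, scales, and boundary value are invented for illustration and are not the program's inputs.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rge(t, g, b=33.0 / 5.0):
    # Toy one-loop running of a single coupling, dg/dt = b*g^3/(16*pi^2), t = ln(Q).
    # The real code evolves the full set of MSSM parameters with 3x3 flavor mixing.
    return b * g**3 / (16.0 * np.pi**2)

t_high = np.log(2.0e16)   # illustrative high (GUT-like) scale in GeV
t_ew   = np.log(91.19)    # illustrative electroweak-breaking scale (~m_Z) in GeV

# Adaptive step-size Runge-Kutta (RK45) from the high scale down to the EW scale.
sol = solve_ivp(rge, (t_high, t_ew), y0=[0.72], method="RK45", rtol=1e-8)
print("coupling at the EW scale:", sol.y[0, -1])
```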

Relevance: 50.00%

Abstract:

Identifying translations from comparable corpora is a well-known problem with several applications, e.g. dictionary creation in resource-scarce languages. The scarcity of high-quality corpora, especially in Indian languages, makes this problem hard: state-of-the-art techniques achieve a mean reciprocal rank (MRR) of 0.66 for English-Italian, but a mere 0.187 for Telugu-Kannada. Comparable corpora do exist for many Indian languages paired with other "auxiliary" languages. We observe that a word and its translation have many topically related words in common in the auxiliary language. To model this, we define the notion of a translingual theme, a set of topically related words from auxiliary language corpora, and present a probabilistic framework for translation induction. Extensive experiments on 35 comparable corpora using English and French as auxiliary languages show that this approach can yield dramatic improvements in performance (e.g. MRR improves by 124% to 0.419 for Telugu-Kannada). A user study on WikiTSu, a system for cross-lingual Wikipedia title suggestion that uses our approach, shows a 20% improvement in the quality of the titles suggested.
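
The abstract does not spell out the probabilistic framework; as a deliberately simplified sketch of the core intuition (a word and its translation share topically related auxiliary-language words), the Python snippet below ranks candidate translations by the overlap of their auxiliary-language "theme" sets and computes mean reciprocal rank. The data, the theme sets, and the Jaccard scoring are invented stand-ins, not the paper's actual model.

```python
def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_candidates(source_theme, candidate_themes):
    """Rank target-language candidates by overlap of their auxiliary-language
    theme (set of topically related auxiliary words) with the source word's theme."""
    return sorted(candidate_themes,
                  key=lambda c: jaccard(source_theme, candidate_themes[c]),
                  reverse=True)

def mean_reciprocal_rank(rankings, gold):
    """MRR over queries: average of 1/rank of the correct translation."""
    return sum(1.0 / (ranking.index(gold[q]) + 1)
               for q, ranking in rankings.items()) / len(rankings)

# Tiny invented example: themes are sets of English (auxiliary-language) words.
source_theme = {"river", "water", "bank", "flood"}          # theme of a source word
candidates = {
    "kannada_word_A": {"river", "water", "bridge"},         # correct translation
    "kannada_word_B": {"election", "vote"},
}
ranking = rank_candidates(source_theme, candidates)
print(ranking)                                               # A ranked above B
print(mean_reciprocal_rank({"q1": ranking}, {"q1": "kannada_word_A"}))   # 1.0
```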

Relevance: 50.00%

Abstract:

Graph algorithms have been shown to possess enough parallelism to keep several computing resources busy, even hundreds of cores on a GPU. Unfortunately, tuning their implementation for efficient execution on a particular hardware configuration of heterogeneous systems consisting of multicore CPUs and GPUs is challenging, time consuming, and error prone. To address these issues, we propose a domain-specific language (DSL), Falcon, for implementing graph algorithms that (i) abstracts the hardware, (ii) provides constructs to write explicitly parallel programs at a higher level, and (iii) can work with general algorithms that may change the graph structure (morph algorithms). We illustrate the use of our DSL to implement local computation algorithms (which do not change the graph structure) and morph algorithms such as Delaunay mesh refinement, survey propagation, and dynamic SSSP on GPUs and multicore CPUs. Using a set of benchmark graphs, we show that the generated code performs close to state-of-the-art hand-tuned implementations.
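
Falcon's syntax is not shown in the abstract; to give a concrete idea of the local-computation algorithms the DSL targets, here is a plain, sequential Python sketch of Bellman-Ford-style SSSP over an edge list. The edge-relaxation loop is the part a Falcon program would express as an explicitly parallel construct over edges; this version is only an illustration, not Falcon code.

```python
import math

def sssp(num_nodes, edges, source):
    """Single-source shortest paths by repeated edge relaxation (Bellman-Ford).
    edges is a list of (u, v, weight) triples; nodes are numbered 0..num_nodes-1."""
    dist = [math.inf] * num_nodes
    dist[source] = 0.0
    for _ in range(num_nodes - 1):          # at most n-1 rounds of relaxation
        changed = False
        for u, v, w in edges:               # in a DSL like Falcon this edge loop
            if dist[u] + w < dist[v]:       # would be an explicitly parallel operation
                dist[v] = dist[u] + w
                changed = True
        if not changed:                     # fixed point reached early
            break
    return dist

edges = [(0, 1, 4.0), (0, 2, 1.0), (2, 1, 2.0), (1, 3, 5.0)]
print(sssp(4, edges, source=0))             # [0.0, 3.0, 1.0, 8.0]
```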

Relevance: 50.00%

Abstract:

Existing devices for communicating information to computers are bulky, slow to use, or unreliable. Dasher is a new interface incorporating language modelling and driven by continuous two-dimensional gestures, e.g. a mouse, touchscreen, or eye-tracker. Tests have shown that this device can be used to enter text at a rate of up to 34 words per minute, compared with typical ten-finger keyboard typing of 40-60 words per minute. Although the interface is slower than a conventional keyboard, it is small and simple, and could be used on personal data assistants and by motion-impaired computer users.
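
Dasher's real implementation combines continuous zooming with a full language model; the fragment below (the probabilities are invented) sketches only the underlying idea, which is closely related to arithmetic coding: the display interval is divided among the possible next characters in proportion to their model probabilities, so likely continuations become larger, easier-to-hit targets.

```python
def layout(probs, lo=0.0, hi=1.0):
    """Divide the interval [lo, hi) among characters in proportion to their
    probability under the language model; likely letters get bigger boxes."""
    boxes, start = {}, lo
    for ch, p in sorted(probs.items()):
        width = (hi - lo) * p
        boxes[ch] = (start, start + width)
        start += width
    return boxes

# Invented next-character distribution after some prefix, e.g. "th".
probs = {"e": 0.6, "a": 0.2, "i": 0.15, "o": 0.05}
for ch, (a, b) in layout(probs).items():
    print(f"{ch}: [{a:.2f}, {b:.2f})  width {b - a:.2f}")
# Steering the pointer into a box selects that character; the chosen box is then
# re-divided using the model's distribution for the new, longer prefix.
```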

Relevance: 50.00%

Abstract:

In recent decades, major improvements have been made in the field of computer-aided learning, building on advances in computer science and computer systems. Although the field has always lagged somewhat behind, not using the very latest solutions, it has steadily moved forward, taking advantage of innovations as they appear. As long as the advance of computer science does not stop (and it will not, at least in the near future), the systems that build on those improvements will not stop either, because we humans will always need to study, sometimes for pleasure and many other times out of need. Not all attempts in the field of computer-aided learning have gone in the same direction. Most address one or a few of the problems that arise while studying and do not take into account solutions proposed for other problems. The reasons for this vary: sometimes the solutions are simply not compatible; sometimes, because a project is a piece of research, it is useful to isolate the problem; and in commercial products, licences and patents often prevent new projects from building on previous work. The world has moved forward, and this work is an attempt to use some of the options offered by technology, mixing some old ideas with new ones.

Relevance: 50.00%

Abstract:

For sign languages used by deaf communities, linguistic corpora have until recently been unavailable, due to the lack of a writing system and a written culture in these communities, and to the very recent advent of digital video. Recent improvements in video and computer technology have now made larger sign language datasets possible; however, large sign language datasets that are fully machine-readable remain elusive. This is due to two challenges: (1) the inconsistencies that arise when signs are annotated by means of a spoken/written language, and (2) the fact that many parts of signed interaction are not necessarily fully composed of lexical signs (the equivalent of words), consisting instead of constructions that are less conventionalised. As sign language corpus building progresses, the potential for some standards in annotation is beginning to emerge, but before this project there had been no attempt to standardise these practices across corpora, which is required for comparing data cross-linguistically. This project therefore had the following aims: (1) to develop annotation standards for glosses (the lexical/word level); (2) to test their reliability and validity; and (3) to improve current software tools so that they support a reliable workflow. Overall, the project aimed not only to set a standard for the whole field of sign language studies throughout the world but also to make significant advances toward two of the world's largest machine-readable datasets for sign languages, specifically the BSL Corpus (British Sign Language, http://bslcorpusproject.org) and the Corpus NGT (Sign Language of the Netherlands, http://www.ru.nl/corpusngt).

Relevance: 50.00%

Abstract:

This paper investigates a method of automatic pronunciation scoring for use in computer-assisted language learning (CALL) systems. The method uses a likelihood-based "Goodness of Pronunciation" (GOP) measure which is extended to include individual thresholds for each phone, based both on averaged native confidence scores and on rejection statistics provided by human judges. Further improvements are obtained by incorporating models of the subject's native language and by augmenting the recognition networks to include expected pronunciation errors. The various GOP measures are assessed using a specially recorded database of non-native speakers which has been annotated to mark phone-level pronunciation errors. Since pronunciation assessment is highly subjective, a set of four performance measures has been designed, each measuring a different aspect of how well computer-derived phone-level scores agree with human scores. These performance measures are used to cross-validate the reference annotations and to assess the basic GOP algorithm and its refinements. The experimental results suggest that a likelihood-based pronunciation scoring metric can achieve usable performance, especially after applying the various enhancements.
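
The paper's exact scoring pipeline is not reproduced in the abstract; the Python snippet below sketches the commonly used form of a likelihood-based GOP score (the duration-normalised difference between the forced-alignment log-likelihood of the expected phone and the log-likelihood of the best unconstrained phone), combined with per-phone rejection thresholds. All log-likelihoods, frame counts, and threshold values here are invented for illustration.

```python
def gop(log_lik_forced, log_lik_best, num_frames):
    """Goodness of Pronunciation for one phone segment: the duration-normalised
    difference between the likelihood of the expected phone (from forced alignment)
    and the likelihood of the best-matching phone from an unconstrained phone loop."""
    return abs(log_lik_forced - log_lik_best) / num_frames

def score_utterance(segments, thresholds):
    """Flag each phone whose GOP score exceeds its individual rejection threshold."""
    results = []
    for phone, ll_forced, ll_best, frames in segments:
        g = gop(ll_forced, ll_best, frames)
        results.append((phone, g, g > thresholds[phone]))   # True = likely mispronounced
    return results

# Invented log-likelihoods, frame counts, and per-phone thresholds.
segments = [("ae", -310.0, -305.0, 12), ("r", -150.0, -120.0, 8)]
thresholds = {"ae": 1.0, "r": 2.0}
for phone, g, rejected in score_utterance(segments, thresholds):
    print(f"{phone}: GOP={g:.2f} rejected={rejected}")
# ae: GOP=0.42 rejected=False
# r:  GOP=3.75 rejected=True
```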