912 results for Language design
Abstract:
Metaphor is a multi-stage programming language extension to an imperative, object-oriented language in the style of C# or Java. This paper discusses some issues we faced when applying multi-stage language design concepts to an imperative base language and run-time environment. The issues range from dealing with pervasive references and open code to garbage collection and implementing cross-stage persistence.
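As background to the staging constructs mentioned above, the following is a minimal, hypothetical Python sketch (not Metaphor's actual C#-style syntax or API) of the three classic multi-stage operations, brackets (quote), escape (splice) and run, together with cross-stage persistence, in which a value from the current stage is carried into generated code.

```python
# Hypothetical illustration only: a toy code-as-string staging model, not
# Metaphor's implementation.  `Code` stands for a bracketed (quoted) fragment,
# `splice` for escape, `run` for executing generated code at the next stage,
# and `lift` for cross-stage persistence of a run-time value.

class Code:
    def __init__(self, source):
        self.source = source          # the deferred program text

    def splice(self):
        return self.source            # escape: embed inside a larger fragment

    def run(self):
        return eval(self.source)      # run: execute at the next stage

def lift(value):
    # Cross-stage persistence: carry a stage-1 value into stage-2 code
    # by embedding a literal for it.
    return Code(repr(value))

def power(n):
    # Generate specialised code for x**n by unrolling at generation time.
    body = "1"
    for _ in range(n):
        body = f"({body}) * x"
    return Code(f"lambda x: {body}")

def scale_by(c):
    # The constant c persists across the stage boundary via lift().
    return Code(f"lambda x: ({lift(c).splice()}) * x")

cube = power(3).run()
triple = scale_by(3).run()
print(cube(5), triple(5))   # 125 15
```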
Abstract:
This paper explores the literature and analyses the different uses and understandings of the word “design” in Portuguese-colonised countries, using Brazil as the main example. It investigates the relationship between the linguistic existence of terms to define and describe “design” as an activity and field, and the roles and perceptions of Design held by society at large. It also addresses the effects that the lack of a proper translation has on the local community from a cultural point of view. The current perception of Design in Portuguese colonies is associated with two main aspects, linguistic and historical, both of which differentiate the countries under consideration from countries with a different background. The changes in the meaning of “design” over the years have had a great impact on people's perceptions of Design. Conversely, the development of Design has also influenced the changes in the meaning of the term, as a result of the legacy of the colonisation period and as a characteristic of the Portuguese language. Design in Portuguese-colonised countries has developed and reached a level of excellence that competes with the most traditional Design cultures in the world. However, this level of Design is enmeshed in an elite belonging to universities and specialised markets, and is therefore not democratised. The ultimate aim of this study is to promote discussion of how to make the discourse surrounding this area more accessible to people from non-English-speaking countries that do not have the word “design” in their local language.
Abstract:
Motivated by accurate average-case analysis, MOdular Quantitative Analysis (MOQA) is being developed at the Centre for Efficiency Oriented Languages (CEOL). In essence, MOQA allows the programmer to determine the average running time of a broad class of programs directly from the code in a (semi-)automated way. The MOQA approach has the property of randomness preservation, which means that applying any operation to a random structure results in an output isomorphic to one or more random structures; this property is the key to systematic timing. Building on the original MOQA research, we discuss the design and implementation of a new domain-specific scripting language based on randomness-preserving operations and random structures. It is designed to facilitate compositional timing by systematically tracking the distributions of inputs and outputs. The labelled partial order (LPO) is the basic data type in the language. The programmer uses built-in MOQA operations together with restricted control-flow statements to write MOQA programs. This MOQA language is formally specified, both syntactically and semantically, in this thesis. A practical language interpreter implementation is provided and discussed. By analysing new algorithms and data-restructuring operations, we demonstrate the wide applicability of the MOQA approach. We also extend MOQA theory to a number of other domains besides average-case analysis, showing the strong connection between MOQA and parallel computing, reversible computing and data-entropy analysis.
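To make the basic data type concrete, here is a minimal sketch under our own assumptions, not the thesis's interpreter, of a labelled partial order (LPO): a set of nodes, a covering relation, and a labelling that must respect the order. The real MOQA operations additionally track how inputs map to random structures; that bookkeeping is omitted here.

```python
# Assumption-level sketch of a labelled partial order (LPO), the basic MOQA
# data type; this is not the thesis's interpreter and omits the tracking of
# input/output distributions that makes MOQA timing compositional.

class LPO:
    def __init__(self, nodes, edges, labels):
        self.nodes = set(nodes)     # e.g. {"a", "b", "c"}
        self.edges = set(edges)     # covering pairs (x, y), meaning x is below y
        self.labels = dict(labels)  # node -> comparable label

    def respects_order(self):
        # A valid labelling never decreases along the order.
        return all(self.labels[x] <= self.labels[y] for x, y in self.edges)

# A three-node "V": a below both b and c, with labels increasing upwards.
v = LPO(nodes="abc", edges=[("a", "b"), ("a", "c")],
        labels={"a": 1, "b": 3, "c": 2})
print(v.respects_order())   # True
```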
Abstract:
Secure Multi-party Computation (MPC) enables a set of parties to collaboratively compute, using cryptographic protocols, a function over their private data in such a way that the participants do not see each other's data; they see only the final output. Typical MPC examples include statistical computations over joint private data, private set intersection, and auctions. While these applications are examples of monolithic MPC, richer MPC applications move between "normal" (i.e., per-party local) and "secure" (i.e., joint, multi-party secure) modes repeatedly, resulting overall in mixed-mode computations. For example, we might use MPC to implement the role of the dealer in a game of mental poker: the game will be divided into rounds of local decision-making (e.g. bidding) and joint interaction (e.g. dealing). Mixed-mode computations are also used to improve performance over monolithic secure computations. Starting with the Fairplay project, several MPC frameworks have been proposed in the last decade to help programmers write MPC applications in a high-level language, while the toolchain manages the low-level details. However, these frameworks are either not expressive enough to allow writing mixed-mode applications or lack formal specification and reasoning capabilities, thereby diminishing the parties' trust in such tools and in the programs written using them. Furthermore, none of the frameworks provides a verified toolchain to run the MPC programs, leaving the potential for security holes that can compromise the privacy of parties' data. This dissertation presents language-based techniques to make MPC more practical and trustworthy. First, it presents the design and implementation of a new MPC domain-specific language, called Wysteria, for writing rich mixed-mode MPC applications. Wysteria provides several benefits over previous languages, including a conceptual single thread of control, generic support for more than two parties, high-level abstractions for secret shares, and a fully formalized type system and operational semantics. Using Wysteria, we have implemented several MPC applications, including, for the first time, a card dealing application. The dissertation next presents Wys*, an embedding of Wysteria in F*, a full-featured verification-oriented programming language. Wys* improves on Wysteria along three lines: (a) It enables programmers to formally verify the correctness and security properties of their programs. As far as we know, Wys* is the first language to provide verification capabilities for MPC programs. (b) It provides a partially verified toolchain to run MPC programs. Finally, (c) it enables MPC programs to use, with no extra effort, standard language constructs from the host language F*, thereby making it more usable and scalable. Lastly, the dissertation develops static analyses that help optimize monolithic MPC programs into mixed-mode MPC programs, while providing privacy guarantees similar to those of the monolithic versions.
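The "secret shares" mentioned above can be illustrated with a short sketch. What follows is a generic additive-secret-sharing toy in Python, not Wysteria or Wys* code, showing how a joint sum can be opened in "secure" mode while each party's input stays local.

```python
# Illustrative sketch only (not Wysteria/Wys*): additive secret sharing over a
# prime field, the basic mechanism behind secret shares in MPC.  Each party
# splits its private input into random shares; only the sum of all shares is
# ever reconstructed, so no individual input is revealed.
import random

P = 2**61 - 1   # prime modulus chosen for the sketch

def share(secret, n_parties):
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# "Normal" (local) mode: each party holds a private bid.
bids = [12, 40, 7]
# "Secure" (joint) mode: parties exchange shares and open only the total.
all_shares = [share(b, len(bids)) for b in bids]
received = [sum(col) % P for col in zip(*all_shares)]   # what each party adds up
print(reconstruct(received))   # 59, without any party seeing another's bid
```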
Abstract:
In this paper we consider the problem of scheduling expression trees on delayed-load architectures. The problem tackled here takes root from the one considered in [Proceedings of the ACM SIGPLAN '91 Conf. on Programming Language Design and Implementation, 1991, p. 256], in which the leaves of the expression trees all refer to memory locations. A generalization of this involves the situation in which the trees may contain register variables, with the registers being used only at the leaves. Solutions to this generalization are given in [ACM Trans. Prog. Lang. Syst. 17 (1995) 740; Microproc. Microprog. 40 (1994) 577]. This paper considers the most general case, in which the registers are reusable. This problem is tackled in [Comput. Lang. 21 (1995) 49], which gives an approximate solution under certain assumptions about the contiguity of the evaluation order. Here we propose an optimal solution (which may involve even a non-contiguous evaluation of the tree). The schedule generated by the algorithm given in this paper is optimal in the sense that it is an interlock-free schedule which uses the minimum number of registers required. An extension to the algorithm incorporates spilling. The problem as stated in this paper is an instruction scheduling problem. However, it could also be rephrased as an operations research problem with a difference in terminology. (C) 2002 Elsevier Science B.V. All rights reserved.
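For readers unfamiliar with the register-minimisation baseline the paper builds on, the Python sketch below computes classic Sethi-Ullman labels, the minimum number of registers needed to evaluate an expression tree without spilling. It is background only and does not reproduce the paper's interlock-free, possibly non-contiguous scheduling algorithm for delayed-load machines.

```python
# Background sketch, not the paper's algorithm: Sethi-Ullman labelling, which
# gives the minimum number of registers needed to evaluate an expression tree.
# The paper's contribution, an optimal interlock-free schedule for delayed-load
# architectures that may evaluate the tree non-contiguously, is not shown here.

class Node:
    def __init__(self, op, left=None, right=None):
        self.op, self.left, self.right = op, left, right

def regs_needed(node):
    if node.left is None and node.right is None:
        return 1                              # leaf: memory operand or register variable
    l, r = regs_needed(node.left), regs_needed(node.right)
    return max(l, r) if l != r else l + 1     # equal needs force one extra register

# (a + b) * (c - d): each operand subtree needs 2 registers, the whole tree 3.
tree = Node("*",
            Node("+", Node("a"), Node("b")),
            Node("-", Node("c"), Node("d")))
print(regs_needed(tree))   # 3
```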
Abstract:
MATLAB is an array language, initially popular for rapid prototyping, but now increasingly used to develop production code for numerical and scientific applications. Typical MATLAB programs have abundant data parallelism. These programs also have control-flow-dominated scalar regions that have an impact on the program's execution time. Today's computer systems have tremendous computing power in the form of traditional CPU cores and throughput-oriented accelerators such as graphics processing units (GPUs). Thus, an approach that maps the control-flow-dominated regions to the CPU and the data-parallel regions to the GPU can significantly improve program performance. In this paper, we present the design and implementation of MEGHA, a compiler that automatically compiles MATLAB programs to enable synergistic execution on heterogeneous processors. Our solution is fully automated and does not require programmer input for identifying data-parallel regions. We propose a set of compiler optimizations tailored for MATLAB. Our compiler identifies data-parallel regions of the program and composes them into kernels. The problem of combining statements into kernels is formulated as a constrained graph-clustering problem. Heuristics are presented to map identified kernels to either the CPU or the GPU so that kernel execution on the CPU and the GPU happens synergistically and the amount of data transfer needed is minimized. In order to ensure the required data movement for dependencies across basic blocks, we propose a data-flow analysis and edge-splitting strategy. Thus our compiler automatically handles composition of kernels, mapping of kernels to the CPU and GPU, scheduling, and insertion of the required data transfers. The proposed compiler was implemented, and experimental evaluation using a set of MATLAB benchmarks shows that our approach achieves a geometric mean speedup of 19.8X for data-parallel benchmarks over native execution of MATLAB.
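The CPU/GPU mapping step can be pictured with a small sketch. The greedy heuristic below is our own simplification for illustration, not MEGHA's actual mapping pass, and the cost fields it reads are assumed inputs.

```python
# Hypothetical sketch (not MEGHA's heuristic): greedily place each kernel on
# the CPU or the GPU by comparing its estimated execution time with the data
# transfers its placement would force.  Kernel costs here are assumed inputs.

def assign_devices(kernels, transfer_cost):
    placement = {}      # array name -> device that currently holds it
    mapping = {}
    for k in kernels:
        moves_gpu = sum(1 for a in k["inputs"] if placement.get(a, "cpu") != "gpu")
        moves_cpu = sum(1 for a in k["inputs"] if placement.get(a, "cpu") != "cpu")
        cost_gpu = k["gpu_time"] + moves_gpu * transfer_cost
        cost_cpu = k["cpu_time"] + moves_cpu * transfer_cost
        device = "gpu" if cost_gpu < cost_cpu else "cpu"
        mapping[k["name"]] = device
        for a in k["inputs"]:
            placement[a] = device      # the kernel's data now lives on that device
    return mapping

kernels = [
    {"name": "k1", "cpu_time": 50, "gpu_time": 5, "inputs": ["A"]},
    {"name": "k2", "cpu_time": 3,  "gpu_time": 4, "inputs": ["A"]},
]
print(assign_devices(kernels, transfer_cost=10))   # {'k1': 'gpu', 'k2': 'gpu'}
```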
Abstract:
Task dataflow languages simplify the specification of parallel programs by dynamically detecting and enforcing dependencies between tasks. These languages are, however, often restricted to a single level of parallelism. This language design is reflected in the runtime system, where a master thread explicitly generates a task graph and worker threads execute ready tasks and wake up their dependents. Such an approach is incompatible with state-of-the-art schedulers such as the Cilk scheduler, which minimize the creation of idle tasks (the work-first principle) and place all task creation and scheduling off the critical path. This paper proposes an extension to the Cilk scheduler in order to reconcile task dependencies with the work-first principle. We discuss the impact of task dependencies on the properties of the Cilk scheduler. Furthermore, we propose a low-overhead ticket-based technique for dependency tracking and enforcement at the object level. Our scheduler also supports renaming of objects in order to increase task-level parallelism. Renaming is implemented using versioned objects, a new type of hyperobject. Experimental evaluation shows that the unified scheduler is as efficient as the Cilk scheduler when tasks have no dependencies. Moreover, the unified scheduler is more efficient than SMPSs, a particular implementation of a task dataflow language.
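The ticket idea can be sketched at object granularity as follows. This is a simplified, assumption-level analogue in Python; the actual runtime is a Cilk extension that distinguishes readers from writers and supports renaming through versioned objects, none of which is modelled here.

```python
# Simplified sketch of ticket-based dependency tracking on objects; not the
# paper's runtime.  Every task takes a ticket on each object it touches and
# becomes ready only when each object is "now serving" that ticket, which
# serialises tasks that share an object.

class TicketedObject:
    def __init__(self, name):
        self.name = name
        self.next_ticket = 0    # tickets handed out so far
        self.serving = 0        # ticket currently allowed to proceed

    def take_ticket(self):
        t = self.next_ticket
        self.next_ticket += 1
        return t

    def release(self):
        self.serving += 1       # wakes up the holder of the next ticket

class Task:
    def __init__(self, name, objects):
        self.name = name
        self.tickets = {obj: obj.take_ticket() for obj in objects}

    def ready(self):
        return all(obj.serving == t for obj, t in self.tickets.items())

    def finish(self):
        for obj in self.tickets:
            obj.release()

a = TicketedObject("a")
t1, t2 = Task("t1", [a]), Task("t2", [a])
print(t1.ready(), t2.ready())   # True False: t2 depends on t1 through `a`
t1.finish()
print(t2.ready())               # True
```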
Abstract:
A diary study tracked the paper documents received by nine UK informants over one month. Informants gave simple ratings of individual documents’ attractiveness and ease of understanding; more detailed reactions to the documents were gathered through informant diaries and follow-up interviews. The detailed reactions extended beyond the feedback gathered through the rating task. Informants showed sensitivity to the content, language, design and circumstances of receipt of documents, with indications that they developed opinions of originating organizations based on their experience of using those organizations’ documents. Documents that failed to provide all the information needed, that failed to make their intentions clear (or obscured their intentions), or that were perceived as mis-targeted received negative comment. Repeated experiences of receiving either well- or poorly-conceived documents strengthened informant reactions to individual originating organizations. The paper concludes with recommendations for the steps document originators, writers and designers need to take to prepare documents that enhance organization-to-consumer communication. We recommend that organizations evaluate and act on consumers’ reactions to their documents, beyond user testing in document development or scorecard ratings in use.
Abstract:
A visual identity is based on a semantic relationship between several signs that make up a coherent system: a bimedia language in which text and image complement each other to create an understandable message. This study addresses the use of non-verbal communication in the corporate visual identity design project, contextualizing the role of the designer as a mediator of the informational corporate message to its audiences.
Abstract:
While logic programming languages offer a great deal of scope for parallelism, there is usually some overhead associated with the execution of goals in parallel because of the work involved in task creation and scheduling. In practice, therefore, the "granularity" of a goal, i.e. an estimate of the work available under it, should be taken into account when deciding whether or not to execute a goal concurrently as a separate task. This paper describes a method for estimating the granularity of a goal at compile time. The runtime overhead associated with our approach is usually quite small, and the performance improvements resulting from the incorporation of grain-size control can be quite good. This is shown by means of experimental results.
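The division of labour between compile time and run time can be sketched as follows; the cost function and threshold below are invented for illustration and are not taken from the paper.

```python
# Sketch under stated assumptions (not the paper's analysis): a granularity
# function derived at compile time, here quadratic in the input size, plus the
# cheap runtime test that decides whether a goal is worth running as a
# separate parallel task.

SPAWN_OVERHEAD = 500    # assumed cost of task creation and scheduling

def grain_size_estimate(n):
    # The kind of cost function a compile-time analysis might derive,
    # e.g. for naive list reversal: quadratic in the list length.
    return n * n

def run_goal(goal, n, spawn, run_locally):
    # The runtime check is deliberately tiny: a single comparison.
    if grain_size_estimate(n) > SPAWN_OVERHEAD:
        return spawn(goal, n)          # enough work to pay for a new task
    return run_locally(goal, n)        # too small: run sequentially

as_parallel = lambda goal, n: "parallel"
as_sequential = lambda goal, n: "sequential"
print(run_goal("nrev", 10, as_parallel, as_sequential))   # sequential (100 <= 500)
print(run_goal("nrev", 40, as_parallel, as_sequential))   # parallel (1600 > 500)
```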
Abstract:
This study sets out to present the stereotypes of men and women in Western society according to gender studies and then to examine whether those stereotypes are reflected in Czech and Spanish phraseology relating to animals, that is, in zoologisms. The analysis is grounded in cognitive-linguistic theories of conceptual metaphor and of conventional figurative language. The conclusions show clear discrimination against both genders in language, with the feminine more affected than the masculine.
Abstract:
The stages of phonetic-phonological change have been described for decades, especially from an articulatory point of view and almost always on the basis of whatever written testimony was available. Recently, however, new theories have emerged which hold that change can be explained through the study of variation and of the phonetic processes found in present-day speech, since both are related to phenomena of hypo- (and hyper-) articulation and, ultimately, of coarticulation. One of these theories is Evolutionary Phonology (Blevins 2004), even though it does not offer a satisfactory explanation for the diffusion of change. In this study, these theories are used to clarify the causes of the evolution of two contexts of yod segunda, /nj/ and /lj/, which led to the phonologisation of /ɲ/ and /ʎ/ in an early stage of the history of Spanish.