119 results for Programmer


Relevance:

10.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

10.00%

Publisher:

Abstract:

Applications for smartphones and tablets, known as apps, are becoming increasingly relevant and attract growing attention from users and, ultimately, from developers. With application stores, services provided by the company that maintains the platform, access to such applications is as simple as, or simpler than, access to web sites, with the advantage of an enhanced user experience focused on the mobile device and the use of native resources such as camera, audio, storage, and integration with other applications. Apps present a great opportunity for independent developers, who can develop an application and make it available to all users of a platform, free of charge or at a usually low cost. Even students can create applications between classes and sell them in stores. Using free or low-cost tools and services, anyone can develop quality applications that can be marketed and reach a large number of users, even in adverse situations in which the application is not the developer's main focus. However, such tools do not seem to be well used: they are either unknown or their purpose is not considered important, and this paper tries to show the real importance of these tools in the rapid development of quality software. The project presents several tools, services, and practices that together make it possible to develop an application for various mobile platforms, independently and with a small team, as demonstrated. This paper does not claim that software development today is easy and simple, but rather that there is currently a large set of tools, for various platforms, that assist and enhance the work of the programmer.

Relevance:

10.00%

Publisher:

Abstract:

Identifying opportunities for software parallelism is a task that takes a great deal of human time, but once certain code patterns for parallelism have been identified, software can accomplish the task quickly. Automating this process therefore brings benefits such as saving time and reducing errors introduced by the programmer [1]. This work aims at developing a software environment that identifies opportunities for parallelism in source code written in the C language and generates a program with the same behavior but a higher degree of parallelism, targeting graphics processors based on the CUDA architecture.
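As a C-only illustration of the kind of pattern such an environment has to recognize (the CUDA code generation step is not shown), the sketch below contrasts a loop whose iterations are independent, and hence a candidate for parallel execution, with one that carries a dependency between iterations; both functions are hypothetical examples, not taken from the work itself:

#include <stddef.h>

/* Candidate for parallelism: each iteration writes a distinct element of c
   and reads only a[i] and b[i], so iterations are independent and could be
   mapped to parallel threads (e.g., one CUDA thread per index i). */
void vector_add(const float *a, const float *b, float *c, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        c[i] = a[i] + b[i];
}

/* Not directly parallelizable: iteration i reads the value produced by
   iteration i - 1 (a loop-carried dependency), so the iterations cannot
   simply run concurrently without restructuring. */
void prefix_sum(const float *a, float *s, size_t n)
{
    s[0] = a[0];
    for (size_t i = 1; i < n; ++i)
        s[i] = s[i - 1] + a[i];
}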

Relevance:

10.00%

Publisher:

Abstract:

Field-Programmable Gate Arrays (FPGAs) are becoming increasingly important in embedded and high-performance computing systems. They allow performance levels close to the ones obtained with Application-Specific Integrated Circuits, while still keeping design and implementation flexibility. However, to program FPGAs efficiently, one needs the expertise of hardware developers in order to master hardware description languages (HDLs) such as VHDL or Verilog. Attempts to furnish a high-level compilation flow (e.g., from C programs) still have to address open issues before broader efficient results can be obtained. Bearing in mind the resources available on an FPGA, we developed LALP (Language for Aggressive Loop Pipelining), a novel language to program FPGA-based accelerators, and its compilation framework, including mapping capabilities. The main ideas behind LALP are to provide a higher abstraction level than HDLs, to exploit the intrinsic parallelism of hardware resources, and to allow the programmer to control execution stages whenever the compiler techniques are unable to generate efficient implementations. Those features are particularly useful to implement loop pipelining, a well-regarded technique used to accelerate computations in several application domains. This paper describes LALP and shows how it can be used to achieve high-performance computing solutions.
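As a rough illustration of what loop pipelining buys (LALP's own syntax is not reproduced here; this is plain C showing the kind of loop typically targeted), consider a multiply-accumulate kernel whose stages can be overlapped across iterations in hardware:

/* A typical loop-pipelining candidate: a multiply-accumulate kernel.
   In a pipelined FPGA implementation, the load, multiply, and accumulate
   stages of consecutive iterations are overlapped, so that once the
   pipeline fills, roughly one result is produced per clock cycle instead
   of one result every few cycles. (Conceptual sketch only.) */
int dot_product(const int *a, const int *b, int n)
{
    int acc = 0;
    for (int i = 0; i < n; ++i) {
        int prod = a[i] * b[i];  /* stage 1: load operands and multiply */
        acc += prod;             /* stage 2: accumulate */
    }
    return acc;
}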

Relevance:

10.00%

Publisher:

Abstract:

The miniaturization race in the hardware industry, aiming at a continuous increase of transistor density on a die, no longer brings corresponding application performance improvements. One of the most promising alternatives is to exploit the heterogeneous nature of common applications in hardware. Supported by reconfigurable computation, which has already proved its efficiency in accelerating data-intensive applications, this concept promises a breakthrough in contemporary technology development. Memory organization in such heterogeneous reconfigurable architectures becomes very critical. Two primary aspects introduce a sophisticated trade-off. On the one hand, a memory subsystem should provide a well-organized distributed data structure and guarantee the required data bandwidth. On the other hand, it should hide the heterogeneous hardware structure from the end user, in order to support feasible high-level programmability of the system. This thesis explores heterogeneous reconfigurable hardware architectures and presents possible solutions to cope with the problem of memory organization and data structure. Using the MORPHEUS heterogeneous platform as an example, the discussion follows the complete design cycle, from decision making and justification to hardware realization. Particular emphasis is placed on the methods to support high system performance, meet application requirements, and provide a user-friendly programmer interface. As a result, the research introduces a complete heterogeneous platform enhanced with a hierarchical memory organization, which copes with its task by separating computation from communication, providing reconfigurable engines with computation and configuration data, and unifying heterogeneous computational devices by means of local storage buffers. It is distinguished from related solutions by its distributed data-flow organization, specifically engineered mechanisms to operate on data in local domains, a particular communication infrastructure based on a Network-on-Chip, and thorough methods to prevent computation and communication stalls. In addition, a novel, advanced technique to accelerate memory access was developed and implemented.

Relevance:

10.00%

Publisher:

Abstract:

This work presents exact, hybrid algorithms for mixed resource allocation and scheduling problems; in general terms, these consist of assigning finite-capacity resources over time to a set of precedence-connected activities. The proposed methods have broad applicability, but are mainly motivated by applications in the field of embedded system design. In particular, high-performance embedded computing has recently witnessed the shift from single-CPU platforms with application-specific accelerators to programmable Multi-Processor Systems-on-Chip (MPSoCs). These allow higher flexibility, real-time performance, and low energy consumption, but the programmer must be able to exploit the platform parallelism effectively. This raises interest in the development of algorithmic techniques to be embedded in CAD tools; in particular, given a specific application and platform, the objective is to perform optimal allocation of hardware resources and to compute an execution schedule. In this regard, since embedded systems tend to run the same set of applications for their entire lifetime, off-line, exact optimization approaches are particularly appealing. Quite surprisingly, the use of exact algorithms has not been well investigated so far; this is in part motivated by the complexity of integrated allocation and scheduling, which sets tough challenges for "pure" combinatorial methods. The use of hybrid CP/OR approaches presents the opportunity to exploit the mutual advantages of different methods, while compensating for their weaknesses. In this work, we first consider an allocation and scheduling problem on the Cell BE processor by Sony, IBM and Toshiba; we propose three different solution methods, leveraging decomposition, cut generation, and heuristic-guided search. Next, we address the allocation and scheduling of so-called Conditional Task Graphs, explicitly accounting for branches whose outcome is not known at design time; we extend the CP scheduling framework to deal effectively with the introduced stochastic elements. Finally, we address allocation and scheduling with uncertain, bounded execution times, via conflict-based tree search; we introduce a simple and flexible time model to take duration variability into account and provide an efficient conflict detection method. The proposed approaches achieve good results on practical-size problems, demonstrating that the use of exact approaches for system design is feasible. Furthermore, the developed techniques bring significant contributions to combinatorial optimization methods.
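For illustration only, the sketch below shows the shape of the input data such problems deal with, activities with durations and precedence edges, and computes a naive as-soon-as-possible schedule that ignores resource capacity; it is not one of the exact hybrid CP/OR methods developed in the work, and the task data is made up:

#include <stdio.h>

/* A minimal sketch of the problem data (not of the hybrid CP/OR methods
   described in the work): activities with durations and precedence edges,
   scheduled as-soon-as-possible while ignoring resource capacity. */

#define N_TASKS 4

int duration[N_TASKS] = {3, 2, 4, 1};

/* pred[i][j] != 0 means task j must finish before task i starts. */
int pred[N_TASKS][N_TASKS] = {
    {0, 0, 0, 0},   /* task 0 has no predecessors */
    {1, 0, 0, 0},   /* task 1 after task 0        */
    {1, 0, 0, 0},   /* task 2 after task 0        */
    {0, 1, 1, 0},   /* task 3 after tasks 1 and 2 */
};

int main(void)
{
    int start[N_TASKS] = {0};

    /* Tasks are assumed to be numbered in a topological order, so a single
       forward pass computes earliest start times from predecessors. */
    for (int i = 0; i < N_TASKS; ++i)
        for (int j = 0; j < i; ++j)
            if (pred[i][j] && start[j] + duration[j] > start[i])
                start[i] = start[j] + duration[j];

    for (int i = 0; i < N_TASKS; ++i)
        printf("task %d: start %d, end %d\n", i, start[i], start[i] + duration[i]);
    return 0;
}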

Relevance:

10.00%

Publisher:

Abstract:

During the last few decades, an unprecedented technological growth has been at the center of embedded systems design, with Moore's Law being the leading factor of this trend. Today, an ever-increasing number of cores can be integrated on the same die, marking the transition from state-of-the-art multi-core chips to the new many-core design paradigm. Despite the extraordinarily high computing power, the complexity of many-core chips opens the door to several challenges. As a result of the increased silicon density of modern Systems-on-a-Chip (SoC), the design space exploration needed to find the best design has exploded, and hardware designers face the problem of a huge design space. Virtual Platforms have always been used to enable hardware-software co-design, but today they face the huge complexity of both hardware and software systems. In this thesis, two different research works on Virtual Platforms are presented: the first is intended for the hardware developer, to easily allow complex cycle-accurate simulations of many-core SoCs. The second exploits the parallel computing power of off-the-shelf General Purpose Graphics Processing Units (GPGPUs), with the goal of increased simulation speed. The term Virtualization can be used in the context of many-core systems not only to refer to the aforementioned hardware emulation tools (Virtual Platforms), but also for two other main purposes: 1) to help the programmer achieve the maximum possible performance of an application, by hiding the complexity of the underlying hardware; 2) to efficiently exploit the highly parallel hardware of many-core chips in environments with multiple active Virtual Machines. This thesis focuses on virtualization techniques with the goal of mitigating, and overcoming when possible, some of the challenges introduced by the many-core design paradigm.

Relevance:

10.00%

Publisher:

Abstract:

In this thesis we present and study an object-oriented language characterized by two different types of objects, passive and active objects, for which we define the operational syntax and semantics. For this language we also define the type system, which is used for type checking and for the extraction of behavioral types, abstract descriptions of the behavior of methods used in deadlock analysis. Programs can manifest deadlock due to programmer errors. To statically identify possible unintended behaviors, we studied and implemented a technique for deadlock analysis based on behavioral types.
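As a generic illustration of the kind of programmer error such deadlock analysis targets (written in C with pthreads, not in the language studied in the thesis), the following program can deadlock because two threads acquire the same two locks in opposite order:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

static void *worker_1(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock_a);   /* holds A ... */
    pthread_mutex_lock(&lock_b);   /* ... then waits for B */
    puts("worker 1 got both locks");
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

static void *worker_2(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock_b);   /* holds B ... */
    pthread_mutex_lock(&lock_a);   /* ... then waits for A: possible deadlock */
    puts("worker 2 got both locks");
    pthread_mutex_unlock(&lock_a);
    pthread_mutex_unlock(&lock_b);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker_1, NULL);
    pthread_create(&t2, NULL, worker_2, NULL);
    pthread_join(t1, NULL);   /* may never return if the interleaving deadlocks */
    pthread_join(t2, NULL);
    return 0;
}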

Relevance:

10.00%

Publisher:

Abstract:

After decades of development in programming languages and programming environments, Smalltalk is still one of the few environments that provide advanced features and is still widely used in industry. However, as Java became prevalent, the ability to call Java code from Smalltalk, and vice versa, becomes important. Traditional approaches to integrating the Java and Smalltalk languages rely on low-level communication between separate Java and Smalltalk virtual machines. We are not aware of any attempt to execute and integrate the Java language directly in the Smalltalk environment. A direct integration allows for very tight and almost seamless integration of the languages and their objects within a single environment. Yet integration and language interoperability impose challenging issues related to method naming conventions, method overloading, exception handling and thread-locking mechanisms. In this paper we describe ways to overcome these challenges and to integrate Java into the Smalltalk environment. Using the techniques described in this paper, the programmer can call Java code from Smalltalk using standard Smalltalk idioms, while the semantics of each language remain preserved. We present STX:LIBJAVA, an implementation of the Java virtual machine within Smalltalk/X, as a validation of our approach.

Relevance:

10.00%

Publisher:

Abstract:

Data visualization is the process of representing data as pictures to support reasoning about the underlying data. For the interpretation to be as easy as possible, we need to be as close as possible to the original data. As most visualization tools have an internal meta-model that differs from the one of the presented data, they usually need to duplicate the original data to conform to their meta-model. This leads to an increase in the resources needed, an increase that is not always justified. In this work we argue for the need for an engine that is as close as possible to the data, and we present our solution of moving the visualization tool to the data, instead of moving the data to the visualization tool. Our solution also emphasizes the necessity of reusing basic blocks to express complex visualizations and of allowing the programmer to script the visualization using his preferred tools, rather than a third-party format. As a validation of the expressiveness of our framework, we show how we express several already published visualizations and describe the pros and cons of the approach.

Relevance:

10.00%

Publisher:

Abstract:

Industrial software systems are large and complex, both in terms of the software entities and their relationships. Consequently, understanding how a software system works requires the ability to pose queries over the design-level entities of the system. Traditionally, this task has been supported by simple tools (e.g., grep) combined with the programmer's intuition and experience. Recently, however, specialized code query technologies have matured to the point where they can be used in industrial situations, providing more intelligent, timely, and efficient responses to developer queries. This working session aims to explore the state of the art in code query technologies, and discover new ways in which these technologies may be useful in program comprehension. The session brings together researchers and practitioners. We survey existing techniques and applications, trying to understand the strengths and weaknesses of the various approaches, and sketch out new frontiers that hold promise.

Relevance:

10.00%

Publisher:

Abstract:

Imprecise manipulation of source code (semi-parsing) is useful for tasks such as robust parsing, error recovery, lexical analysis, and rapid development of parsers for data extraction. An island grammar precisely defines only a subset of a language syntax (islands), while the rest of the syntax (water) is defined imprecisely. Usually, water is defined as the negation of islands. Albeit simple, such a definition of water is naive and impedes composition of islands. When developing an island grammar, sooner or later a programmer has to create water tailored to each individual island. Such an approach is fragile, however, because water can change with any change of a grammar. It is time-consuming, because water is defined manually by a programmer and not automatically. Finally, an island surrounded by water cannot be reused because water has to be defined for every grammar individually. In this paper we propose a new technique of island parsing - bounded seas. Bounded seas are composable, robust, reusable and easy to use because island-specific water is created automatically. We integrated bounded seas into a parser combinator framework as a demonstration of their composability and reusability.
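As a minimal, hand-written illustration of the island/water idea (this is neither bounded seas nor a parser combinator framework), the C sketch below recognizes only one kind of island, decimal integer literals, and skips everything else as water:

#include <ctype.h>
#include <stdio.h>

/* Only the "islands" -- here, decimal integer literals -- are recognised
   precisely, while everything else is treated as "water" and skipped. */
static const char *skip_water(const char *p)
{
    while (*p && !isdigit((unsigned char)*p))
        ++p;                      /* water: consume anything that cannot start an island */
    return p;
}

static const char *parse_island(const char *p, long *value)
{
    *value = 0;
    while (isdigit((unsigned char)*p)) {
        *value = *value * 10 + (*p - '0');   /* island: precise integer syntax */
        ++p;
    }
    return p;
}

int main(void)
{
    const char *input = "width = 640; /* junk ( %% */ height: 480 !";
    const char *p = input;
    long value;

    while (*(p = skip_water(p))) {
        p = parse_island(p, &value);
        printf("island: %ld\n", value);      /* prints 640 and 480 */
    }
    return 0;
}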

Relevance:

10.00%

Publisher:

Abstract:

This project presents the design of a data acquisition and sharing system intended to be installed in a Formula SAE single-seater vehicle. Specifically, it collects the information provided by four infrared temperature sensors that constantly monitor the temperature of the vehicle's wheels. The information, stored in a mass-storage memory, is shared with other devices over a common bus. The sensors used to generate the information are supplied by Melexis; thanks to their internal electronics, all of them can be connected to the same bus simultaneously. The four sensors of the application (one per wheel) are connected through an I2C bus, which also allows additional sensors, or other devices capable of I2C communication, to be added later. Task management is handled by the DSPIC33FJ256GP710-I/PF microcontroller from Microchip. This is a complex microcontroller, so the application uses only part of its potential: the board employs the two I2C modules (one for the SD card and the other for the sensors), the ECAN1 module (for CAN bus communication), the SPI module (to access a Flash memory), four ADCs (for possible measurements), and two interrupt inputs (for possible interaction with the user), in addition to the necessary internal resources. The project covers both the development of a printed circuit board dedicated to the required functionality and its programming through the environment provided by Microchip, using the ICD2 programmer.
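A minimal C sketch of the acquisition loop described above, assuming hypothetical helper names (i2c_read_wheel_temp, sdcard_append_record, can_broadcast) that are not Microchip library calls; the peripheral drivers are replaced by printing stubs so the structure of the loop can be shown and run on its own:

#include <stdint.h>
#include <stdio.h>

#define NUM_WHEELS 4

/* Hypothetical stand-ins for the real peripheral drivers (I2C sensors,
   SD storage, ECAN) on the dsPIC; here they only fake or print data. */
static int16_t i2c_read_wheel_temp(uint8_t sensor_address)
{
    return (int16_t)(250 + sensor_address % 10);    /* fake reading */
}

static void sdcard_append_record(const int16_t *temps, int n)
{
    printf("log:");
    for (int i = 0; i < n; ++i)
        printf(" %d", temps[i]);
    printf("\n");
}

static void can_broadcast(const int16_t *temps, int n)
{
    (void)temps; (void)n;                            /* would push a CAN frame */
}

/* One sensor address per wheel; the addresses are illustrative only. */
static const uint8_t sensor_address[NUM_WHEELS] = {0x5A, 0x5B, 0x5C, 0x5D};

int main(void)
{
    int16_t temps[NUM_WHEELS];

    for (int sample = 0; sample < 3; ++sample) {     /* the real firmware loops forever */
        /* 1. Poll the four IR temperature sensors on the shared I2C bus. */
        for (int wheel = 0; wheel < NUM_WHEELS; ++wheel)
            temps[wheel] = i2c_read_wheel_temp(sensor_address[wheel]);

        /* 2. Log the sample to the storage card and share it over the CAN bus. */
        sdcard_append_record(temps, NUM_WHEELS);
        can_broadcast(temps, NUM_WHEELS);
    }
    return 0;
}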

Relevance:

10.00%

Publisher:

Abstract:

A number of data description languages initially designed as standards for the WWW are currently being used to implement user interfaces to programs. This is done independently of whether such programs are executed on the same host as the one running the user interface itself or on a different one. The advantage of this approach is that it provides a portable, standardized, and easy-to-use solution for the application programmer, and a familiar behavior for the user, typically well versed in the use of WWW browsers. Among the proposed standard description languages, VRML is aimed at representing three-dimensional scenes including hyperlink capabilities. VRML is already used as an import/export format in many 3-D packages and tools, and has been shown effective in displaying complex objects and scenarios. We propose and describe a Prolog library which allows parsing and checking VRML code, transforming it, and writing it out as VRML again. The library converts such code to an internal representation based on first-order terms which can then be arbitrarily manipulated. We also present as an example application the use of this library to implement a novel 3-D visualization for examining and understanding certain aspects of the behavior of CLP(FD) programs.

Relevance:

10.00%

Publisher:

Abstract:

This paper presents and develops a generalized concept of Non-Strict Independent And-Parallelism (NSIAP). NSIAP extends the applicability of Independent And-Parallelism (IAP) by enlarging the class of goals which are eligible for parallel execution. At the same time it maintains IAP's ability to run non-deterministic goals in parallel and to preserve the computational complexity expected in the execution of the program by the programmer. First, a parallel execution framework is defined and some fundamental correctness results, in the sense of equivalence of solutions with the sequential model, are discussed for this framework. The issue of efficiency is then considered. Two new definitions of NSI are given for the cases of pure and impure goals respectively, and efficiency results are provided for programs parallelized under these definitions, including treatment of the case of goal failure: not only is reduction of execution time guaranteed (modulo run-time overheads) in the absence of failure, but it is also shown that in the worst case of failure no slow-down will occur. In addition to applying to NSI, these results carry over to and complete previous results shown in the context of IAP, which did not deal with the case of goal failure. Finally, some practical examples of the application of the NSIAP concept to the parallelization of a set of programs are presented, and performance results showing the advantage of using NSI are given.