826 results for "Implementação" (Implementation)
Abstract:
This work implements and adapts a computational model for studying the Fischer-Tropsch reaction in a slurry bed reactor fed with synthesis gas (CO + H2) for the selective production of hydrocarbons (CnHm), with emphasis on evaluating the influence of operating conditions on the distribution of products formed during the reaction. The model accounts for rigorous phase-equilibrium effects in a reactive flash drum, a detailed kinetic model capable of predicting the formation of each chemical species in the reaction system, and control loops for the process variables of pressure and slurry-phase level. The resulting system of differential-algebraic equations was solved with the computational code DASSL (Petzold, 1982). Consistent initialization of the problem was based on the phase equilibrium formed by the components present in the reactor. In addition, the index of the system was reduced to 1 by introducing control laws that govern the output of the reactor products. The results were compared qualitatively with experimental data collected at the Fischer-Tropsch Synthesis plant installed at the Laboratório de Processamento de Gás - CTGÁS-ER, Natal/RN.
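The numerical core described here, a semi-explicit index-1 DAE closed by a control law and started from a consistent initial state, can be sketched compactly. The tank-level model and proportional control law below are illustrative stand-ins (the names A, F_in, Kc, and h_set are assumptions, not the thesis's reactor model), and scipy's fsolve inside an implicit-Euler loop stands in for DASSL's BDF machinery:

```python
# Minimal sketch of a semi-explicit index-1 DAE, x' = f(x, z), 0 = g(x, z),
# where the algebraic equation is a control law -- the structure described
# in the abstract, with illustrative values (not the thesis's model).
import numpy as np
from scipy.optimize import fsolve

A, F_in, Kc, h_set = 2.0, 1.0, 5.0, 0.5   # tank area, feed, gain, level setpoint

def residual(y_next, y_now, dt):
    """Implicit-Euler residual solved for the next differential/algebraic state."""
    h, F_out = y_next
    h0, _ = y_now
    r_diff = (h - h0) / dt - (F_in - F_out) / A   # differential equation
    r_alg = F_out - Kc * (h - h_set)              # control law closes the system (index 1)
    return [r_diff, r_alg]

# Consistent initialization: choose F_out so the algebraic equation holds at t = 0.
h0 = 0.8
y = np.array([h0, Kc * (h0 - h_set)])
dt = 0.05
for step in range(200):
    y = fsolve(residual, y, args=(y, dt))
print(f"level -> {y[0]:.4f}, outflow -> {y[1]:.4f}")
# Steady state: outflow matches the feed, with the usual proportional offset in level.
```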
Abstract:
Considering the theoretical and methodological presuppositions of Variationist Sociolinguistics (cf. WEINREICH; LABOV; HERZOG, 2006; LABOV, [1972] 2008), this dissertation describes and analyzes the process of variation/change involving the personal pronouns tu and você, and its extension in the pronominal paradigm of Brazilian Portuguese (BP), in three sets of personal letters written by people from Rio Grande do Norte (RN) throughout the 20th century. The discursive universe of those letters is news from the cities in which the informants lived and themes from their everyday life (trade, jobs, trips, family, and politics). Part of the analyzed letters belong to the handwritten minimum corpus of the Projeto de História do Português Brasileiro no Rio Grande do Norte (PHPB-RN). We draw on previous studies of the pronominal system in BP (Menon, 1995; Faraco, 1996; Lopes and Machado, 2005; Rumeu, 2008; Lopes, 2009; Lopes, Rumeu, and Marcotulio, 2011; Lopes and Marcotulio, 2011; Martins and Moura, 2012), which record that the form você began to replace tu from the end of the first half of the 20th century and attest to the following situation: while (a) imperative verbal forms, (b) explicit subjects, and (c) prepositional complement pronouns are contexts favorable to você, (d) non-imperative verbal forms (with null subject), (e) non-prepositional complement pronouns, and (f) possessive pronouns are contexts of resistance of tu. The results obtained in this dissertation partially confirm the claims of previous studies regarding the contexts favorable to the implementation of você in BP: (i) in the letters from the first two decades of the 20th century (1916 to 1925), there is a high frequency of the form você (98%); (ii) in the personal letters from RN, especially the love letters, in which intimate subjects recur more often, the discursive universe proved very relevant in determining/conditioning the forms of tu; (iii) the only female informant in our sample uses, almost categorically, the forms of tu in letters from the period 1946 to 1972; (iv) the letters from the period 1992 to 1994 show significant use of the forms associated with the innovative você, suggesting that the change was already implemented in the BP system, and that set of letters provides strong evidence that the non-prepositional complement pronominal forms (accusative/dative) related to tu were also implemented in a system with an almost categorical usage of você.
Abstract:
This study describes the implementation of the Low-Energy Electron Diffraction (LEED) technique in the Laboratory of Magnetic Nanostructures and Semiconductors of the Department of Theoretical and Experimental Physics of the Universidade Federal do Rio Grande do Norte (UFRN), Natal, Brazil. During this work, the experimental apparatus for a complete LEED set-up was implemented. A new vacuum system was also assembled, composed of a mechanical pump, a turbomolecular pump, and an ionic pump for ultra-high vacuum, together with their respective pressure sensors (a Pirani gauge for low-vacuum measurements and a wide-range gauge, WRG). The work also included maintenance of the ion gun, essentially a mini-sputtering device used for sample cleaning, and the set-up, maintenance, and handling of the quadrupole mass spectrometer, whose main purpose is to investigate gas contamination inside the ultra-high vacuum chamber. It should be pointed out that the main contribution of this Master's thesis was the set-up of the sample heating system, that is, a new sample holder. In addition to holding and heating the sample, it had to sustain the ultra-high vacuum environment. This set of actions is essential for the complete functioning of the LEED technique.
Abstract:
The considerable expansion of Distance Education registered in recent years in Brazil raises the importance of debating how this policy has been implemented, so that formulators and implementers can make better-informed decisions, maximizing results, identifying successes, and overcoming bottlenecks. This study evaluates the implementation process of the Distance Education policy by the Secretariat of Distance Education of the Federal University of Rio Grande do Norte. To this end, we sought an evaluation proposal consistent with this policy and adopted the one developed by Sonia Draibe (2001), which suggests an analysis called the anatomy of the general evaluation process. To achieve the objectives, we conducted qualitative, case-study research, using documentary research and semi-structured interviews with three groups of subjects involved in the policy: managers, technicians, and beneficiaries. It was concluded that: the implementation process needs an open contact channel between management, technicians, and beneficiaries; the lack of clarity in the dissemination of information among technicians produces noise that affects the outcomes; the absence of dissemination of internal and external actions contributes to the perpetuation of prejudice against Distance Education; selection criteria based on competence and merit help form a team of skilled technicians able to perform their function within the policy; an institution that does not train its technicians creates gaps that may turn into policy implementation failures; all subjects involved in the policy need internal evaluations to contribute to improvements in the implementation process, but a gap opens between the subjects if the results are not socialized; an internal structure that manages financial resources and balances the budget across different funding programs is essential; and the consortium between the higher education institution (IES) and the municipalities at the on-site support centers is a bottleneck in the process, since beneficiaries are exposed to the inconsistency and lack of commitment of these local municipalities.
Abstract:
This study analyzes the implementation of the Matrix Support proposal with professionals of the substitutive mental health services in the city of Natal/RN. Matrix Support (MS) is an institutional arrangement recently adopted by the Health Ministry as an administrative strategy for the construction of a wide care network in mental health, replacing the logic of indiscriminate referrals with one of co-responsibility. In addition, its goal is to promote greater resolution capacity in health assistance. Integral care, as intended by the Unified Health System, may be achieved by means of the interchange of knowledge and practices, establishing an interdisciplinary work logic through an interconnected network of health services. For this study, semi-structured individual interviews were used as the instrument, with the coordinators and technical staff of the CAPS. Data collection was done in the following services: CAPS II (East and West) and CAPS ad (North and East), in the city of Natal/RN. The results point out that the CAPS have begun discussing the implementation of MS, aiming to promote the reorganization and redefinition of flows in the network so as not to act in a fragmented way. Nevertheless, there is no effective articulation with primary care services; attention in mental health remains focused on the specialized services, with little insertion in the territory and in the everyday life of the community.
Abstract:
Nowadays, the importance of using software processes is well established and considered fundamental to the success of software development projects. Large and medium software projects demand the definition and continuous improvement of software processes in order to promote the productive development of high-quality software. Customizing and evolving existing software processes to address the variety of scenarios, technologies, cultures, and scales is a recurrent challenge in the software industry. It involves adapting software process models to the reality of each project, and it must also promote the reuse of past experience in the definition and development of software processes for new projects. Adequate management and execution of software processes can bring better quality and productivity to the resulting software systems. This work explores the use and adaptation of consolidated software product line techniques to manage the variabilities of software process families. To achieve this aim: (i) a systematic literature review was conducted to identify and characterize variability management approaches for software processes; (ii) an annotative approach for the variability management of software process lines was proposed and developed; and finally (iii) empirical studies and a controlled experiment assessed and compared the proposed annotative approach against a compositional one. One study, a comparative qualitative study, analyzed the annotative and compositional approaches from different perspectives, such as modularity, traceability, error detection, granularity, uniformity, adoption, and systematic variability management. Another study, a comparative quantitative study, considered internal attributes of the specification of software process lines, such as modularity, size, and complexity. Finally, the last study, a controlled experiment, evaluated the effort of use and the understandability of the investigated approaches when modeling and evolving specifications of software process lines. The studies provide evidence of several benefits of the annotative approach, and of its potential for integration with the compositional approach, to assist the variability management of software process lines.
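The essence of an annotative approach can be illustrated in a few lines: process elements carry presence conditions over features, and a concrete process is derived by evaluating those conditions against a feature selection. The sketch below uses invented feature names and activities and is not the thesis's tooling:

```python
# Minimal sketch of annotative variability management for a software process
# line: activities are annotated with presence conditions, and deriving a
# project-specific process filters them. Names are illustrative assumptions.
from dataclasses import dataclass

FEATURES = {"safety_critical", "team_size_large"}

@dataclass
class Activity:
    name: str
    condition: str = "True"   # boolean expression over feature names

process_line = [
    Activity("Requirements elicitation"),
    Activity("Formal specification", "safety_critical"),
    Activity("Code review", "team_size_large or safety_critical"),
    Activity("Exploratory testing", "not safety_critical"),
    Activity("System testing"),
]

def derive(activities, selected):
    """Keep only the activities whose annotation holds for this project."""
    env = {f: (f in selected) for f in FEATURES}
    # eval is fine for this toy; real tools would parse presence conditions.
    return [a.name for a in activities if eval(a.condition, {}, env)]

print(derive(process_line, {"safety_critical"}))
# ['Requirements elicitation', 'Formal specification', 'Code review', 'System testing']
```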
Abstract:
Formal methods should be used to specify and verify on-card software in Java Card applications. Furthermore, the Java Card programming style requires runtime verification of all input conditions for all on-card methods, where the main goal is to preserve the data on the card. Design by Contract, and in particular the JML language, is an option for this kind of development and verification, as runtime verification is part of the Design by Contract method implemented by JML. However, JML and its currently available tools for runtime verification were not designed with Java Card limitations in mind and are not Java Card compliant. In this thesis, we analyze how much of this situation is really intrinsic to Java Card limitations and how much is just a matter of a complete redesign of JML and its tools. We propose the requirements for a new language that is Java Card compliant and indicate the lines along which a compiler for this language should be built. JCML strips from JML non-Java Card aspects such as concurrency and unsupported types. This would not be enough, however, without a great effort to optimize the verification code generated by its compiler, as this verification code must run on the card. The JCML compiler, although much more restricted than the one for JML, is able to generate Java Card compliant verification code for some lightweight specifications. In conclusion, we present a Java Card compliant variant of JML, JCML (Java Card Modeling Language), with a preliminary version of its compiler.
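The runtime-verification idea behind Design by Contract can be shown independently of JML syntax. The sketch below renders pre/postcondition checking in Python rather than JML/Java Card, with an invented debit example; it illustrates the mechanism a JML-style compiler weaves into each method, not JCML's actual output:

```python
# Minimal sketch of Design by Contract runtime verification: a decorator
# checks preconditions on entry and postconditions on exit, the kind of
# guard JML-generated code places around on-card methods. Illustrative only.
def contract(pre=lambda *a, **k: True, post=lambda result, *a, **k: True):
    def wrap(fn):
        def checked(*args, **kwargs):
            assert pre(*args, **kwargs), f"precondition of {fn.__name__} violated"
            result = fn(*args, **kwargs)
            assert post(result, *args, **kwargs), f"postcondition of {fn.__name__} violated"
            return result
        return checked
    return wrap

@contract(pre=lambda balance, amount: 0 <= amount <= balance,
          post=lambda result, balance, amount: result == balance - amount)
def debit(balance, amount):
    """Debit a purse; the contract preserves the card data's integrity."""
    return balance - amount

print(debit(100, 30))   # 70
# debit(100, 200)       # would raise AssertionError: precondition violated
```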
Abstract:
Programs manipulate information. However, information is abstract in nature and needs to be represented, usually by data structures, so that it can be manipulated. This work presents the AGraphs, a data representation and exchange format that uses typed directed graphs with a simulation of hyperedges and hierarchical graphs. Associated with the AGraphs format there is a manipulation library with a simple programming interface, tailored to the language being represented. The AGraphs format was used in an ad-hoc manner as the representation format in tools developed at UFRN; to make it usable by other tools, a precise description and the development of support tools were necessary. This precise description and these tools were developed and are described in this work. This work also compares the AGraphs format with other representation and exchange formats (e.g., ATerms, GDL, GraphML, GraX, GXL, and XML). The main objective of this comparison is to capture important characteristics and identify where the AGraphs concepts can still evolve.
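The two structural tricks named above, simulating hyperedges and nesting subgraphs, can be sketched with a small typed directed graph. The class and method names below are illustrative assumptions, not the actual AGraphs library interface:

```python
# Minimal sketch of a typed directed graph where a hyperedge is simulated
# by an ordinary typed node connected to all of its endpoints, and
# hierarchy by letting a node own a nested graph. Names are illustrative.
class AGraph:
    def __init__(self):
        self.nodes = {}       # node id -> type
        self.edges = []       # (source id, target id) pairs
        self.children = {}    # node id -> nested AGraph (hierarchical graphs)

    def add_node(self, nid, ntype, subgraph=None):
        self.nodes[nid] = ntype
        if subgraph is not None:   # a node may contain a whole subgraph
            self.children[nid] = subgraph
        return nid

    def add_edge(self, src, dst):
        self.edges.append((src, dst))

    def add_hyperedge(self, hid, htype, endpoints):
        """Simulate a hyperedge as a typed node linked to every endpoint."""
        self.add_node(hid, htype)
        for p in endpoints:
            self.add_edge(hid, p)

g = AGraph()
for name in ("a", "b", "c"):
    g.add_node(name, "Var")
g.add_hyperedge("sum", "Plus", ["a", "b", "c"])   # one edge touching three nodes
print(g.edges)   # [('sum', 'a'), ('sum', 'b'), ('sum', 'c')]
```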
Abstract:
The increasing complexity of applications has demanded more flexible hardware, able to achieve higher performance. Traditional hardware solutions have not been successful in meeting these applications' constraints. General-purpose processors have inherent flexibility, since they perform several tasks; however, they cannot reach high performance when compared to application-specific devices. Conversely, since application-specific devices perform only a few tasks, they achieve high performance but have less flexibility. Reconfigurable architectures emerged as an alternative to the traditional approaches and have become an area of rising interest over the last decades. The purpose of this paradigm is to modify the device's behavior according to the application, making it possible to balance flexibility and performance and to meet the applications' constraints. This work presents the design and implementation of a coarse-grained hybrid reconfigurable architecture for stream-based applications. The architecture, named RoSA, consists of reconfigurable logic attached to a processor. Its goal is to exploit the instruction-level parallelism of intensive data-flow applications to accelerate their execution on the reconfigurable logic. The instruction-level parallelism extraction is done at compile time; thus, this work also presents an optimization phase for the RoSA architecture to be included in the GCC compiler. To design the architecture, this work also presents a methodology based on hardware reuse of datapaths, named RoSE. RoSE views the reconfigurable units through reusability levels, which provides area saving and datapath simplification. The architecture was implemented in a hardware description language (VHDL) and validated through simulation and prototyping. To characterize performance, benchmarks were used, demonstrating a speedup of 11x in the execution of some applications.
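RoSA's actual extraction lives in a GCC optimization phase, but the underlying scheduling idea can be sketched: ASAP scheduling groups independent operations of a dataflow graph into slots a datapath could issue in parallel. The tiny example program below is an invented illustration, not from the thesis:

```python
# Minimal sketch of compile-time ILP extraction: ASAP scheduling of a
# dataflow DAG into parallel issue slots. Operation names are illustrative.
def asap_schedule(deps):
    """deps maps each operation to the operations it depends on."""
    level = {}
    def depth(op):
        if op not in level:
            level[op] = 1 + max((depth(d) for d in deps[op]), default=0)
        return level[op]
    for op in deps:
        depth(op)
    slots = {}
    for op, l in level.items():
        slots.setdefault(l, []).append(op)
    return [sorted(slots[l]) for l in sorted(slots)]

# t1 = a+b; t2 = c*d; t4 = e<<1; t3 = t1-t2; t5 = t3+t4
deps = {"t1": [], "t2": [], "t4": [], "t3": ["t1", "t2"], "t5": ["t3", "t4"]}
print(asap_schedule(deps))
# [['t1', 't2', 't4'], ['t3'], ['t5']] -- three operations can issue together
```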
Abstract:
Motion estimation is the main contributor to data reduction in digital video encoding, and it is also the most computationally demanding step. H.264 is the newest standard for video compression and was planned to double the compression ratio achieved by previous standards. It was developed by the ITU-T Video Coding Experts Group (VCEG) together with the ISO/IEC Moving Picture Experts Group (MPEG) as the product of a partnership effort known as the Joint Video Team (JVT). H.264 presents novelties that improve motion estimation efficiency, such as the adoption of variable block sizes, quarter-pixel precision, and multiple reference frames. This work defines a hardware/software architecture for motion estimation using a full-search algorithm, variable block sizes, and mode decision. It considers the use of reconfigurable devices, soft processors, and development tools for embedded systems such as Quartus II, SOPC Builder, Nios II, and ModelSim.
Abstract:
This work presents the concept, design, and implementation of an MP-SoC platform, named STORM (MP-SoC DirecTory-Based PlatfORM). Currently the platform is composed of the following modules: SPARC V8 processor, GPOP processor, Cache module, Memory module, Directory module, and two different models of Network-on-Chip, NoCX4 and Obese Tree. All modules were implemented using SystemC, then simulated and validated, individually and in groups, and their descriptions are presented in detail. To program the platform in C, a SPARC assembler was implemented, fully compatible with gcc's generated assembly code. For parallel programming, a mutex management library was implemented with the assembler's support. A total of 10 simulations of increasing complexity are presented to validate the presented concepts. The simulations include real parallel applications, such as matrix multiplication, Mergesort, KMP, Motion Estimation, and 2D DCT.
Abstract:
Through the adoption of the software product line (SPL) approach, several benefits are achieved when compared to conventional development processes based on creating a single software system at a time. The process of developing an SPL differs from traditional software construction, since it has two essential phases: domain engineering, when the common and variable elements of the SPL are defined and implemented, and application engineering, when one or more applications (specific products) are derived by reusing the artifacts created in domain engineering. The testing activity is also fundamental and aims to detect defects in the artifacts produced during SPL development. However, the characteristics of an SPL bring new challenges to this activity that must be considered. Several approaches have recently been proposed for the product line testing process, but they have proved limited, providing only general guidelines. In addition, there is a lack of tools to support the variability management and customization of automated test cases for SPLs. In this context, this dissertation proposes a systematic approach to software product line testing. The approach offers: (i) automated SPL test strategies to be applied in domain and application engineering; (ii) explicit guidelines to support the implementation and reuse of automated test cases at the unit, integration, and system levels in domain and application engineering; and (iii) tooling support for automating the variability management and customization of test cases. The approach is evaluated through its application to a software product line for web systems. The results have shown that the proposed approach can help developers deal with the challenges imposed by the characteristics of SPLs during the testing process.
Abstract:
Reconfigurable Computing is an intermediate solution for the resolution of complex problems, making it possible to combine the speed of hardware with the flexibility of software. A reconfigurable architecture has several goals, among them increased performance. The use of reconfigurable architectures to increase system performance is a well-known technique, especially because of the possibility of implementing, directly in hardware, certain algorithms that are slow on current processors. Among the various segments that use reconfigurable architectures, reconfigurable processors deserve special mention. These processors combine the functions of a microprocessor with reconfigurable logic and can be adapted after the development process. Reconfigurable Instruction Set Processors (RISP) are a subgroup of reconfigurable processors whose goal is the reconfiguration of the processor's instruction set, involving issues such as instruction formats, operands, and operations. The main objective of this work is the development of a RISP processor, combining configuration of the processor's instruction set at development time with reconfiguration at execution time. The design and VHDL implementation of this RISP processor aim to prove the applicability and efficiency of two concepts: using more than one fixed instruction set, with only one set active at a given time; and the possibility of creating and combining new instructions so that the processor comes to recognize and use them in real time as if they existed in the fixed instruction set. The creation and combination of instructions is done through a reconfiguration unit incorporated into the processor. This unit allows the user to send custom instructions to the processor, so that they can later be used as if they were fixed instructions. This work also includes simulations of applications involving fixed and custom instructions, and results comparing these applications in terms of power consumption and execution time, which confirm that the goals for which the processor was developed were attained.