89 results for Implementação BIM


Relevance:

20.00%

Publisher:

Abstract:

The purpose of this study is to describe the implementation of the Low Energy Electron Diffraction (LEED) technique in the Laboratory of Magnetic Nanostructures and Semiconductors of the Department of Theoretical and Experimental Physics of the Universidade Federal do Rio Grande do Norte (UFRN), Natal, Brazil. During this work, the experimental apparatus for a complete LEED set-up was implemented. A new vacuum system was also assembled, composed of a mechanical pump, a turbomolecular pump and an ion pump for ultra-high vacuum, with their respective pressure sensors (a Pirani gauge for low-vacuum measurements and the wide-range gauge, WRG). The work also covered the maintenance of the ion gun, essentially a mini-sputtering device whose function is sample cleaning, and the set-up, maintenance and handling of the quadrupole mass spectrometer, whose main purpose is to investigate gas contamination inside the ultra-high vacuum chamber. It should be pointed out that the main contribution of this Master's thesis was the set-up of the sample heating system, that is, a new sample holder. In addition to holding and heating the sample, the new holder had to sustain the ultra-high vacuum environment. This set of actions is essential for the complete functioning of the LEED technique.

Relevance:

20.00%

Publisher:

Abstract:

The considerable expansion of Distance Education registered in recent years in Brazil raises the importance of debating how this policy has been implemented, so that formulators and implementers can make better-informed decisions, maximizing results, identifying successes and overcoming bottlenecks. This study aims to evaluate the implementation process of the Distance Education policy carried out by the Secretariat of Distance Education of the Federal University of Rio Grande do Norte. To this end, we adopted an evaluation framework consistent with this policy, the one developed by Sonia Draibe (2001), which suggests an analysis called the anatomy of the general evaluation process. To achieve the objectives, we conducted qualitative research of the case-study type, using documentary research and semi-structured interviews with three groups of subjects involved in the policy: managers, technicians and beneficiaries. It was concluded that: the implementation process needs an open contact channel between management, technicians and beneficiaries; the lack of clarity in the dissemination of information among technicians produces noise that affects the outcomes; the absence of dissemination of internal and external actions contributes to the perpetuation of prejudice against Distance Education; selection criteria based on competence and merit help to form a team of technicians skilled at performing their function within the policy; an institution that does not train its technicians creates gaps that are likely to turn into implementation failures; all subjects involved in the policy need internal evaluations to contribute to improvements in the implementation process, but a gap opens between the subjects if the results are not socialized; an internal structure that manages financial resources and balances the budget across different funding programs is essential; and the consortium between the higher education institution (IES) and municipalities in the on-site support centers is a bottleneck in the process, since beneficiaries are exposed to the inconsistency and lack of commitment of these local governments.

Relevance:

20.00%

Publisher:

Abstract:

This study aims to analyze the implementation of the Matrix Support proposal with professionals of the Substitutive Services in Mental Health in the city of Natal/RN. Matrix Support (MS) is an institutional arrangement recently adopted by the Health Ministry as an administrative strategy for the construction of a wide care network in Mental Health, replacing the logic of indiscriminate referral with one of co-responsibility. In addition, its goal is to make health assistance more resolutive. Integral care, as intended by the Unified Health System, may be reached by means of the interchange of knowledge and practices, establishing an interdisciplinary work logic through an interconnected network of health services. For this study, semi-structured individual interviews were used as the instrument, with the coordinators and technical staff of the CAPS. Data collection was done in the following services: CAPS II (East and West) and CAPS ad (North and East), in the city of Natal/RN. The results point out that the CAPS have started to discuss the implementation of MS, aiming to promote the reorganization and redefinition of the flow in the network so as not to act in a fragmented way. Nevertheless, there is no effective articulation with primary care services; attention in mental health remains focused on the specialized services, with little insertion in the territory and in the everyday life of the community.

Relevance:

20.00%

Publisher:

Abstract:

Nowadays, the importance of using software processes is well consolidated and considered fundamental to the success of software development projects. Large and medium software projects demand the definition and continuous improvement of software processes in order to promote the productive development of high-quality software. Customizing and evolving existing software processes to address the variety of scenarios, technologies, cultures and scales is a recurrent challenge for the software industry. It involves adapting software process models to the reality of each project, and it must also promote the reuse of past experiences when defining and developing software processes for new projects. Adequate management and execution of software processes can bring better quality and productivity to the produced software systems. This work explores the use and adaptation of consolidated software product line techniques to manage the variabilities of software process families. To achieve this aim: (i) a systematic literature review is conducted to identify and characterize variability management approaches for software processes; (ii) an annotative approach for the variability management of software process lines is proposed and developed; and finally (iii) empirical studies and a controlled experiment assess and compare the proposed annotative approach against a compositional one. The first study, a comparative qualitative one, analyzed the annotative and compositional approaches from different perspectives, such as modularity, traceability, error detection, granularity, uniformity, adoption, and systematic variability management. The second study, a comparative quantitative one, considered internal attributes of software process line specifications, such as modularity, size and complexity. Finally, the last study, a controlled experiment, evaluated the effort to use and the understandability of the investigated approaches when modeling and evolving specifications of software process lines. The studies bring evidence of several benefits of the annotative approach, and of its potential for integration with the compositional approach, to assist the variability management of software process lines.
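
To make the contrast concrete, the following sketch illustrates the general idea behind an annotative approach: every element of the process line carries a presence condition over features, and a concrete process is derived by evaluating those annotations against a feature selection. The activity names, feature names and the derive_process helper are hypothetical illustrations, not taken from the thesis.

```python
# Annotative software process line (illustrative sketch): each activity is
# annotated with a presence condition over features of the process family.
PROCESS_LINE = [
    ("Requirements elicitation", "True"),
    ("Formal specification",     "safety_critical"),
    ("Code inspection",          "safety_critical or large_team"),
    ("Unit testing",             "True"),
    ("Acceptance testing",       "has_customer"),
]

ALL_FEATURES = {"safety_critical", "large_team", "has_customer"}

def derive_process(selected):
    """Derive a concrete process by evaluating each annotation
    against the chosen feature configuration."""
    ctx = {f: (f in selected) for f in ALL_FEATURES}
    return [name for name, cond in PROCESS_LINE if eval(cond, {}, ctx)]

print(derive_process({"safety_critical"}))
# ['Requirements elicitation', 'Formal specification', 'Code inspection', 'Unit testing']
```

A compositional approach would instead keep each variable activity in a separate module and assemble only the selected modules, which is exactly the trade-off (modularity versus traceability and granularity) the studies compare.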

Relevance:

20.00%

Publisher:

Abstract:

Formal methods should be used to specify and verify on-card software in Java Card applications. Furthermore, the Java Card programming style requires runtime verification of all input conditions of all on-card methods, the main goal being to preserve the data in the card. Design by Contract, and in particular the JML language, is an option for this kind of development and verification, since runtime verification is part of the Design by Contract method implemented by JML. However, JML and its currently available tools for runtime verification were not designed with Java Card limitations in mind and are not Java Card compliant. In this thesis, we analyze how much of this situation is really intrinsic to Java Card limitations and how much is just a matter of a complete redesign of JML and its tools. We propose the requirements for a new, Java Card compliant language and indicate the lines along which a compiler for this language should be built. JCML strips from JML non-Java Card aspects such as concurrency and unsupported types. This would not be enough, however, without a great effort to optimize the verification code generated by its compiler, as this verification code must run on the card. The JCML compiler, although much more restricted than the one for JML, is able to generate Java Card compliant verification code for some lightweight specifications. In conclusion, we present a Java Card compliant variant of JML, JCML (Java Card Modeling Language), together with a preliminary version of its compiler.
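
For readers unfamiliar with Design by Contract, the sketch below shows the essence of runtime verification of pre- and postconditions, transplanted into Python for brevity; it only illustrates the concept behind JML-style requires/ensures clauses and is not JCML code. The decorator and function names are hypothetical.

```python
def contract(pre=None, post=None):
    """Runtime-checked pre- and postconditions, in the spirit of JML's
    'requires'/'ensures' clauses (conceptual illustration only)."""
    def wrap(fn):
        def checked(*args):
            if pre is not None:
                assert pre(*args), f"precondition of {fn.__name__} violated"
            result = fn(*args)
            if post is not None:
                assert post(result, *args), f"postcondition of {fn.__name__} violated"
            return result
        return checked
    return wrap

@contract(pre=lambda balance, amount: 0 < amount <= balance,
          post=lambda result, balance, amount: result == balance - amount)
def withdraw(balance, amount):
    return balance - amount

print(withdraw(100, 30))   # 70; withdraw(100, 200) would fail the precondition
```

On a card, every such check costs memory and cycles, which is why the thesis stresses optimizing the generated verification code.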

Relevance:

20.00%

Publisher:

Abstract:

Programs manipulate information. However, information is abstract in nature and needs to be represented, usually by data structures, to make manipulation possible. This work presents AGraphs, a data representation and exchange format that uses typed directed graphs with a simulation of hyperedges and hierarchical graphs. Associated with the AGraphs format there is a manipulation library with a simple programming interface, tailored to the language being represented. The AGraphs format was used in an ad-hoc manner as the representation format in tools developed at UFRN and, to make it usable in other tools, an accurate description and the development of support tools were necessary. This accurate description and these tools have been developed and are described in this work. The work also compares the AGraphs format with other representation and exchange formats (e.g., ATerms, GDL, GraphML, GraX, GXL and XML). The main objective of this comparison is to capture important characteristics and to show where the AGraphs concepts can still evolve.
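
A small sketch of the central representation idea, assuming nothing beyond what the abstract states: in a typed directed graph, a hyperedge connecting several nodes can be simulated by an ordinary typed node whose out-edges point to all of its members. The class and method names below are hypothetical, not the AGraphs library API.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    ident: int
    ntype: str                                   # node type of the typed graph
    targets: list = field(default_factory=list)  # ordered out-edges

class TypedGraph:
    """Typed directed graph in which a hyperedge over several nodes is
    simulated by an ordinary node whose out-edges reach all its members."""
    def __init__(self):
        self.nodes = {}
        self._next = 0

    def add_node(self, ntype):
        node = Node(self._next, ntype)
        self.nodes[node.ident] = node
        self._next += 1
        return node.ident

    def add_edge(self, src, dst):
        self.nodes[src].targets.append(dst)

    def add_hyperedge(self, etype, members):
        hub = self.add_node(etype)      # the node that simulates the hyperedge
        for m in members:
            self.add_edge(hub, m)
        return hub

g = TypedGraph()
a, b, c = (g.add_node("Expr") for _ in range(3))
g.add_hyperedge("SameScope", [a, b, c])   # one "edge" linking three nodes
```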

Relevance:

20.00%

Publisher:

Abstract:

The increase in application complexity has demanded hardware that is ever more flexible and able to achieve higher performance. Traditional hardware solutions have not been successful in meeting these applications' constraints. General-purpose processors have inherent flexibility, since they perform several tasks; however, they cannot reach high performance when compared to application-specific devices. Conversely, since application-specific devices perform only a few tasks, they achieve high performance but have less flexibility. Reconfigurable architectures emerged as an alternative to the traditional approaches and have become an area of rising interest over the last decades. The purpose of this new paradigm is to modify the device's behavior according to the application. Thus, it is possible to balance flexibility and performance and to meet the applications' constraints. This work presents the design and implementation of a coarse-grained hybrid reconfigurable architecture for stream-based applications. The architecture, named RoSA, consists of reconfigurable logic attached to a processor. Its goal is to exploit the instruction-level parallelism of data-flow-intensive applications to accelerate their execution on the reconfigurable logic. The instruction-level parallelism extraction is done at compile time; thus, this work also presents an optimization phase for the RoSA architecture to be included in the GCC compiler. To design the architecture, this work also presents a methodology based on hardware reuse of datapaths, named RoSE. RoSE aims to view the reconfigurable units through reusability levels, which provides area savings and datapath simplification. The architecture was implemented in a hardware description language (VHDL) and validated through simulations and prototyping. For performance analysis, some benchmarks were used; they demonstrated a speedup of 11x on the execution of some applications.
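
As a rough illustration of compile-time ILP extraction (the thesis does its extraction inside GCC; the sketch below is an independent toy with hypothetical instruction names), the instructions of a data-flow graph can be grouped into dependence levels, and all instructions in one level are mutually independent and may execute in parallel on the reconfigurable logic.

```python
from collections import defaultdict

def asap_levels(deps):
    """deps: instruction -> set of instructions it depends on.
    Returns lists of instructions; members of one list are mutually
    independent and may issue in the same cycle."""
    level = {}
    def lv(i):
        if i not in level:
            level[i] = 0 if not deps[i] else 1 + max(lv(d) for d in deps[i])
        return level[i]
    for i in deps:
        lv(i)
    groups = defaultdict(list)
    for i, l in level.items():
        groups[l].append(i)
    return [sorted(groups[l]) for l in sorted(groups)]

# y = (a + b) * (c + d): the two additions are independent of each other
deps = {"add1": set(), "add2": set(), "mul": {"add1", "add2"}}
print(asap_levels(deps))   # [['add1', 'add2'], ['mul']]
```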

Relevance:

20.00%

Publisher:

Abstract:

Motion estimation is the step mainly responsible for data reduction in digital video encoding; it is also the most computationally demanding one. H.264 is the newest standard for video compression and was planned to double the compression ratio achieved by previous standards. It was developed by the ITU-T Video Coding Experts Group (VCEG) together with the ISO/IEC Moving Picture Experts Group (MPEG) as the product of a partnership effort known as the Joint Video Team (JVT). H.264 introduces novelties that improve motion estimation efficiency, such as the adoption of variable block sizes, quarter-pixel precision and multiple reference frames. This work defines a hardware/software architecture for motion estimation using a full-search algorithm, variable block sizes and mode decision. It considers the use of reconfigurable devices, soft processors and development tools for embedded systems such as Quartus II, SOPC Builder, Nios II and ModelSim.
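
The full-search strategy itself is simple to state: for one block, exhaustively test every candidate displacement in a window of the reference frame and keep the one with the lowest cost, commonly the sum of absolute differences (SAD). A minimal software sketch follows; the block size, search range and function names are illustrative, not the thesis's hardware design.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences, the usual block-matching cost."""
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def full_search(cur, ref, bx, by, bsize=16, srange=8):
    """Exhaustively test every displacement within +/- srange pixels and
    return the motion vector (dx, dy) with the lowest SAD."""
    block = cur[by:by + bsize, bx:bx + bsize]
    h, w = ref.shape
    best, best_cost = (0, 0), None
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            x, y = bx + dx, by + dy
            if x < 0 or y < 0 or x + bsize > w or y + bsize > h:
                continue                      # candidate leaves the frame
            cost = sad(block, ref[y:y + bsize, x:x + bsize])
            if best_cost is None or cost < best_cost:
                best, best_cost = (dx, dy), cost
    return best, best_cost

cur = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
ref = np.roll(cur, (2, 3), axis=(0, 1))       # reference shifted down 2, right 3
print(full_search(cur, ref, 16, 16))          # recovers (dx, dy) = (3, 2)
```

The exhaustive window is exactly what makes full search so expensive, and hence a natural candidate for hardware acceleration.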

Relevance:

20.00%

Publisher:

Abstract:

This work presents the concept, design and implementation of an MP-SoC platform named STORM (MP-SoC DirecTory-Based PlatfORM). Currently the platform is composed of the following modules: SPARC V8 processor, GPOP processor, Cache module, Memory module, Directory module and two different models of Network-on-Chip, NoCX4 and Obese Tree. All modules were implemented using SystemC and were simulated and validated, individually and in groups; their description is presented in detail. To program the platform in C, a SPARC assembler fully compatible with gcc's generated assembly code was implemented. For parallel programming, a library for mutex management was implemented, using the necessary assembler support. A total of 10 simulations of increasing complexity are presented to validate the concepts. The simulations include real parallel applications, such as matrix multiplication, Mergesort, KMP, Motion Estimation and 2D DCT.
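
A mutex library on a platform like this typically rests on an atomic read-modify-write primitive (on SPARC V8, for instance, the ldstub test-and-set instruction). The sketch below illustrates that idea in Python: an atomic test-and-set flag and a spinlock built on it. The class names are hypothetical, and the "hardware" atomicity is emulated with a host lock.

```python
import threading

class AtomicFlag:
    """Emulates an atomic test-and-set primitive (e.g., SPARC V8's ldstub);
    the host lock below merely stands in for hardware atomicity."""
    def __init__(self):
        self._flag = 0
        self._hw = threading.Lock()

    def test_and_set(self):
        with self._hw:                 # atomically read the old value, write 1
            old, self._flag = self._flag, 1
            return old

    def clear(self):
        self._flag = 0                 # a plain store releases the flag

class SpinMutex:
    """Spinlock mutex: loop until test-and-set observes a free flag."""
    def __init__(self):
        self._flag = AtomicFlag()

    def acquire(self):
        while self._flag.test_and_set() == 1:
            pass                       # busy-wait; a real system might yield

    def release(self):
        self._flag.clear()
```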

Relevance:

20.00%

Publisher:

Abstract:

Through the adoption of the software product line (SPL) approach, several benefits are achieved when compared to conventional development processes based on creating a single software system at a time. The process of developing an SPL differs from traditional software construction, since it has two essential phases: domain engineering, when the common and variable elements of the SPL are defined and implemented; and application engineering, when one or more applications (specific products) are derived by reusing the artifacts created in domain engineering. The testing activity is also fundamental and aims to detect defects in the artifacts produced during SPL development. However, the characteristics of an SPL bring new challenges to this activity that must be considered. Several approaches have recently been proposed for the product line testing process, but they have proved limited, providing only general guidelines. In addition, there is a lack of tools to support variability management and the customization of automated test cases for SPLs. In this context, this dissertation proposes a systematic approach to software product line testing. The approach offers: (i) automated SPL test strategies to be applied in domain and application engineering; (ii) explicit guidelines to support the implementation and reuse of automated test cases at the unit, integration and system levels in domain and application engineering; and (iii) tool support for automating variability management and the customization of test cases. The approach is evaluated through its application to a software product line for web systems. The results show that the proposed approach can help developers to deal with the challenges imposed by the characteristics of SPLs during the testing process.
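
One way to customize automated test cases per derived product, sketched here only to illustrate the problem the dissertation addresses (the feature names and helper are hypothetical, and Python's unittest stands in for whatever framework an SPL team uses), is to bind each variant-specific test to the feature that motivates it and skip it when the product lacks that feature.

```python
import unittest

PRODUCT_FEATURES = {"search"}        # configuration of the derived product

def requires_feature(name):
    """Skip a test case when the product under test lacks the feature."""
    return unittest.skipUnless(name in PRODUCT_FEATURES,
                               f"feature '{name}' not in this product")

class CatalogTests(unittest.TestCase):
    def test_search_returns_results(self):       # common to all products
        self.assertTrue("search" in PRODUCT_FEATURES)

    @requires_feature("payment")
    def test_checkout_charges_card(self):        # variant-specific test
        self.fail("would only run in products with the 'payment' feature")

if __name__ == "__main__":
    unittest.main()                  # the payment test is reported as skipped
```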

Relevance:

20.00%

Publisher:

Abstract:

Reconfigurable computing is an intermediate solution for the resolution of complex problems, making it possible to combine the speed of hardware with the flexibility of software. A reconfigurable architecture has several goals, among them the increase of performance. Using reconfigurable architectures to increase system performance is a well-known technique, especially because of the possibility of implementing directly in hardware certain algorithms that are slow on current processors. Among the various segments that use reconfigurable architectures, reconfigurable processors deserve special mention. These processors combine the functions of a microprocessor with reconfigurable logic and can be adapted after the development process. Reconfigurable Instruction Set Processors (RISP) are a subgroup of the reconfigurable processors whose goal is the reconfiguration of the processor's instruction set, involving issues such as the formats, operands and operations of the instructions. The main objective of this work is the development of a RISP processor, combining the configuration of the processor's instruction sets during development with their reconfiguration at execution time. The design and VHDL implementation of this RISP processor intend to prove the applicability and efficiency of two concepts: using more than one fixed instruction set, with only one set active at a given time; and the possibility of creating and combining new instructions, so that the processor comes to recognize and use them in real time as if they existed in the fixed instruction set. The creation and combination of instructions is done through a reconfiguration unit incorporated into the processor. This unit allows the user to send custom instructions to the processor, so that later they can be used as if they were fixed instructions. This work also presents simulations of applications involving fixed and custom instructions, and the results of comparing these applications with respect to power consumption and execution time, which confirm that the goals for which the processor was developed were attained.
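
The run-time part of that idea, reduced to its bare bones, is a decode table that can gain new entries while the machine runs. The toy interpreter below (hypothetical class and opcode names, and software rather than VHDL) shows how a custom instruction installed through a reconfiguration call becomes indistinguishable, at dispatch time, from a fixed one.

```python
class TinyRISP:
    """Interpreter sketch: a fixed instruction set plus a reconfigurable
    decode table that can gain new opcodes at run time."""
    def __init__(self):
        self.fixed = {"ADD": lambda a, b: a + b,
                      "SUB": lambda a, b: a - b}
        self.custom = {}               # filled by the 'reconfiguration unit'

    def reconfigure(self, opcode, impl):
        self.custom[opcode] = impl     # the new instruction is usable at once

    def execute(self, opcode, *ops):
        table = self.fixed if opcode in self.fixed else self.custom
        return table[opcode](*ops)

cpu = TinyRISP()
cpu.reconfigure("MAC", lambda a, b, c: a * b + c)   # hypothetical custom op
print(cpu.execute("MAC", 2, 3, 4))                  # 10, as if it were fixed
```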

Relevance:

20.00%

Publisher:

Abstract:

Alongside the advances of technology, embedded systems are increasingly present in our everyday lives. Due to the increasing demand for functionality, many tasks are split among processors, requiring more efficient communication architectures, such as networks on chip (NoCs). NoCs are structures in which routers with point-to-point channels interconnect the cores of a system on chip (SoC), providing communication. There are several networks on chip in the literature, each with its specific characteristics. Among these, the Integrated Processing System NoC (IPNoSyS) was chosen for this work as a network on chip with characteristics that differ from general NoCs, because its routing components also accumulate a processing function, i.e., they have functional units able to execute instructions. In this model, packets are processed and routed by the router architecture. This work aims at improving the performance of applications that contain repetition structures, since such applications spend most of their execution time in the repeated execution of their instructions. Thus, this work proposes to optimize the runtime of these structures by employing an instruction-level parallelism technique, in order to make better use of the resources offered by the architecture. The applications are tested on a dedicated simulator and the results are compared with the original version of the architecture, which implements only packet-level parallelism.
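
A classic way to expose instruction-level parallelism in a repetition structure, shown below purely as a generic illustration (not the IPNoSyS mechanism itself), is to unroll the loop so that the statements inside one pass are mutually independent and can be issued together by parallel functional units.

```python
def saxpy(a, x, y):
    """Rolled loop: each pass issues a single multiply-add."""
    for i in range(len(x)):
        y[i] += a * x[i]

def saxpy_unrolled(a, x, y):
    """Unrolled by 4: the four multiply-adds in one pass touch disjoint
    elements, so hardware with enough functional units can overlap them."""
    n = len(x) - len(x) % 4
    for i in range(0, n, 4):
        y[i]     += a * x[i]
        y[i + 1] += a * x[i + 1]
        y[i + 2] += a * x[i + 2]
        y[i + 3] += a * x[i + 3]
    for i in range(n, len(x)):   # leftover iterations
        y[i] += a * x[i]
```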

Relevance:

20.00%

Publisher:

Abstract:

Computational intelligence methods have been expanding into industrial applications, motivated by their ability to solve engineering problems. Embedded systems follow the same idea, using computational intelligence tools embedded in machines. There are several works in the areas of embedded systems and intelligent systems; however, few papers have joined both areas. The aim of this study was to implement an adaptive fuzzy neural hardware with online training, embedded in a Field Programmable Gate Array (FPGA). The system can adapt during the execution of a given application, aiming at online performance improvement. The proposed system architecture is modular, allowing different configurations of fuzzy neural network topologies with online training. The proposed system was applied to mathematical function interpolation, pattern classification and self-compensation of industrial sensors, achieving satisfactory performance in these tasks. The experimental results show the advantages and disadvantages of online training in hardware when performed in parallel and in sequential ways. The sequential training method saves FPGA area but increases the complexity of the architecture's actions. The parallel training method achieves high performance and reduced processing time; the pipeline technique is used to increase the proposed architecture's performance. The development of the study was based on available tools for FPGA circuits.
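
To fix ideas about what "fuzzy neural with online training" can mean, here is a minimal software analogue, not the thesis's hardware: a zero-order Takagi-Sugeno fuzzy system whose rule consequents are updated sample by sample with an LMS-style gradient step. All names and constants are illustrative.

```python
import numpy as np

class TinyFuzzyNet:
    """Zero-order Takagi-Sugeno fuzzy system with one input and
    online (sample-by-sample) LMS training of the rule consequents."""
    def __init__(self, centers, sigma=0.5, lr=0.2):
        self.c = np.asarray(centers, dtype=float)  # membership centers
        self.sigma = sigma                         # shared membership width
        self.w = np.zeros_like(self.c)             # one consequent per rule
        self.lr = lr

    def forward(self, x):
        mu = np.exp(-((x - self.c) ** 2) / (2 * self.sigma ** 2))
        mu /= mu.sum() + 1e-12                     # normalized firing strengths
        return float(mu @ self.w), mu

    def train_step(self, x, target):
        y, mu = self.forward(x)
        self.w += self.lr * (target - y) * mu      # LMS update, one sample
        return target - y

# Online interpolation of a function, one of the tasks cited above.
net = TinyFuzzyNet(centers=np.linspace(0, 2 * np.pi, 9))
for _ in range(500):
    x = np.random.uniform(0, 2 * np.pi)
    net.train_step(x, np.sin(x))
print(round(net.forward(np.pi / 2)[0], 2))         # approaches 1.0
```

In hardware, the choice debated in the abstract is whether the per-rule updates above are computed by parallel units (fast, area-hungry) or by one unit reused sequentially (compact, slower).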

Relevance:

20.00%

Publisher:

Abstract:

The National Reading Incentive Program (PROLER) is a national initiative to promote and encourage reading throughout the country, linked to the National Library Foundation (FBN) of the Ministry of Culture (MinC). This research aims to assess PROLER's implementation process in the state of Rio Grande do Norte, based on the actions of its local committee. The framework contextualizes the policies that seek to encourage books and reading, as well as the processes of implementation and evaluation of public policies. The research is a qualitative, descriptive-exploratory study comprising a single case study. It uses semi-structured interviews as the data collection tool, with 8 members of the Potiguar Committee as respondents. The techniques of bibliographical and documentary analysis were used to analyze and discuss the data obtained from surveys and documents on PROLER; for the content of the interviews, the technique used was conversation analysis. As for the results, four barriers to the program's implementation in the state stand out: a) political-administrative discontinuity; b) limited resources and few partnerships; c) the management of school libraries and the absence of the post of librarian in the state; and d) the absence of a process or assessment tool able to evaluate the results or impacts of the actions taken by the Potiguar Committee. It was noticed that these four limiting factors make PROLER's implementation in Rio Grande do Norte a process that, although complying with the national guidelines on teacher training and on following school libraries and their needs, fails to develop assessments that can measure the program's impacts, rendering the policy's feedback process ineffective. Another observation of this study is that the Committee did not obtain enough resources to sustain its actions, nor was it able to articulate new partnerships, thus contributing negatively to the scope of the program and, consequently, to the effectiveness of its actions. It is also worth mentioning that the Committee does nothing regarding the mismanagement of the libraries under its care, i.e., it does not use the power of coercion guaranteed by the National Reading Policy and the federal laws that deal with school libraries, and is therefore ineffective with respect to compliance with the program guidelines.

Relevance:

20.00%

Publisher:

Abstract:

Reverberation is caused by the reflection of sound on surfaces close to the sound source during its propagation to the listener. The impulse response of an environment represents its reverberation characteristics. Being dependent on the environment, reverberation conveys to the listener the characteristics of the space where the sound originated, and its absence does not usually sound "natural". When recording, it is not always possible to obtain the desirable reverberation characteristics of an environment; therefore, methods for artificial reverberation have been developed, always seeking implementations that are more efficient and more faithful to real environments. This work presents an FPGA (Field Programmable Gate Array) implementation of a classic digital audio reverberation structure, based on a proposal by Manfred Schroeder, using sets of all-pass and comb filters. The developed system exploits the use of reconfigurable hardware as a platform for the development and implementation of digital audio effects, focusing on modularity and reuse.
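
For reference, Schroeder's classic structure feeds the dry signal through several feedback comb filters in parallel (building the dense tail) and then through all-pass filters in series (increasing echo density without coloring the spectrum). The sketch below is a plain software rendering of that topology; the delay lengths and gains are typical illustrative values, not those of the FPGA design.

```python
import numpy as np

def comb(x, d, g):
    """Feedback comb filter: y[n] = x[n] + g * y[n-d]."""
    y = np.zeros_like(x, dtype=float)
    for n in range(len(x)):
        y[n] = x[n] + (g * y[n - d] if n >= d else 0.0)
    return y

def allpass(x, d, g):
    """All-pass filter: y[n] = -g*x[n] + x[n-d] + g*y[n-d]."""
    y = np.zeros_like(x, dtype=float)
    for n in range(len(x)):
        xd = x[n - d] if n >= d else 0.0
        yd = y[n - d] if n >= d else 0.0
        y[n] = -g * x[n] + xd + g * yd
    return y

def schroeder_reverb(x):
    # four parallel combs, then two series all-passes (illustrative values)
    combs = [(1557, 0.84), (1617, 0.83), (1491, 0.82), (1422, 0.81)]
    wet = sum(comb(x, d, g) for d, g in combs) / len(combs)
    for d, g in [(225, 0.7), (556, 0.7)]:
        wet = allpass(wet, d, g)
    return wet

impulse = np.zeros(48000)
impulse[0] = 1.0
tail = schroeder_reverb(impulse)   # the reverberator's impulse response
```

Each comb and all-pass stage maps naturally onto a delay line plus a multiply-accumulate, which is what makes the structure attractive for a modular, reusable FPGA implementation.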