997 results for "implementação"
Abstract:
This study analyzes the implementation of the Matrix Support proposal with professionals of the substitutive mental health services in the city of Natal/RN. Matrix Support (MS) is an institutional arrangement recently adopted by the Ministry of Health as an administrative strategy for building a broad mental health care network, replacing the logic of indiscriminate referral with one of co-responsibility. Its goal is also to increase the problem-solving capacity of health care. Comprehensive care, as intended by the Unified Health System, can be achieved through the exchange of knowledge and practices, establishing an interdisciplinary work logic across an interconnected network of health services. Semi-structured individual interviews with the coordinators and technical staff of the CAPS were used as the study's instrument. Data collection was carried out in the following services in the city of Natal/RN: CAPS II (East and West) and CAPS ad (North and East). The results show that the CAPS have begun discussing the implementation of MS, aiming to reorganize and redefine the flow within the network so as to avoid fragmented action. Nevertheless, there is no effective articulation with primary care services: mental health care remains largely concentrated in the specialized services, with little insertion in the territory and in the everyday life of the community.
Abstract:
Nowadays, the importance of using software processes is consolidated and considered fundamental to the success of software development projects. Large and medium-sized software projects demand the definition and continuous improvement of software processes in order to promote the productive development of high-quality software. Customizing and evolving existing software processes to address the variety of scenarios, technologies, cultures, and scales is a recurrent challenge in the software industry. It involves adapting software process models to the reality of each project, and it must also promote the reuse of past experience when defining and developing software processes for new projects. Adequate management and execution of software processes can bring better quality and productivity to the software systems produced. This work explores the use and adaptation of consolidated software product line techniques to manage the variabilities of software process families. To achieve this aim: (i) a systematic literature review is conducted to identify and characterize variability management approaches for software processes; (ii) an annotative approach for the variability management of software process lines is proposed and developed; and (iii) empirical studies and a controlled experiment assess and compare the proposed annotative approach against a compositional one. A comparative qualitative study analyzed the annotative and compositional approaches from different perspectives, such as modularity, traceability, error detection, granularity, uniformity, adoption, and systematic variability management. A comparative quantitative study considered internal attributes of software process line specifications, such as modularity, size, and complexity. Finally, a controlled experiment evaluated the effort to use and the understandability of the investigated approaches when modeling and evolving software process line specifications. The studies provide evidence of several benefits of the annotative approach, and of its potential for integration with the compositional approach, in supporting the variability management of software process lines.
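To make the annotative idea concrete, here is a minimal sketch in Java of process elements tagged with presence conditions and a product derived by filtering on a feature selection. All names are hypothetical illustrations; the dissertation's actual notation and tooling are not reproduced here.

import java.util.EnumSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Features of a hypothetical software process line.
enum Feature { CODE_REVIEW, FORMAL_SPEC, DAILY_BUILD }

// A process activity annotated with the features that require it.
record Activity(String name, Set<Feature> presenceCondition) {
    boolean enabledFor(Set<Feature> selection) {
        // An activity is kept if it is mandatory (empty condition)
        // or if any of its annotating features is selected.
        return presenceCondition.isEmpty()
                || presenceCondition.stream().anyMatch(selection::contains);
    }
}

public class ProcessLine {
    public static void main(String[] args) {
        List<Activity> family = List.of(
                new Activity("Requirements", Set.of()),                  // mandatory
                new Activity("Peer review", Set.of(Feature.CODE_REVIEW)),
                new Activity("Write formal specs", Set.of(Feature.FORMAL_SPEC)),
                new Activity("Implementation", Set.of()));               // mandatory

        // Deriving one concrete process: keep only activities whose
        // annotations are satisfied by the feature selection.
        Set<Feature> selection = EnumSet.of(Feature.CODE_REVIEW);
        List<String> product = family.stream()
                .filter(a -> a.enabledFor(selection))
                .map(Activity::name)
                .collect(Collectors.toList());
        System.out.println(product); // [Requirements, Peer review, Implementation]
    }
}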
Abstract:
Formal methods should be used to specify and verify on-card software in Java Card applications. Furthermore, the Java Card programming style requires runtime verification of all input conditions for all on-card methods, the main goal being to preserve the data on the card. Design by Contract, and in particular the JML language, is an option for this kind of development and verification, since runtime verification is part of the Design by Contract method implemented by JML. However, JML and its currently available tools for runtime verification were not designed with Java Card limitations in mind and are not Java Card compliant. In this thesis, we analyze how much of this situation is truly intrinsic to Java Card limitations and how much is simply a matter of a complete redesign of JML and its tools. We propose the requirements for a new language that is Java Card compliant and indicate the lines along which a compiler for this language should be built. JCML strips from JML the aspects not supported by Java Card, such as concurrency and unsupported types. This would not be enough, however, without a great effort to optimize the verification code generated by its compiler, since this verification code must run on the card. The JCML compiler, although much more restricted than the one for JML, is able to generate Java Card compliant verification code for some lightweight specifications. In conclusion, we present a Java Card compliant variant of JML, called JCML (Java Card Modeling Language), together with a preliminary version of its compiler.
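For illustration, here is a small Java method with a lightweight JML specification of the kind such a compiler could translate into on-card runtime checks. The class, contract, and woven check are illustrative examples, not taken from the thesis.

public class Purse {
    private int balance; // in cents; must never go negative

    //@ invariant balance >= 0;

    /*@ requires amount > 0 && amount <= balance;
      @ ensures balance == \old(balance) - amount;
      @*/
    public void debit(int amount) {
        balance -= amount;
    }

    // A JCML-style compiler would weave the precondition into the
    // method entry, e.g. (hypothetical generated code):
    //   if (!(amount > 0 && amount <= balance))
    //       throw new PreconditionException();
    // keeping the generated check small enough to run on the card.
}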
Abstract:
Programs manipulate information. Information, however, is abstract in nature and needs to be represented, usually by data structures, to make manipulation possible. This work presents AGraphs, a data representation and exchange format that uses typed directed graphs with a simulation of hyperedges and hierarchical graphs. Associated with the AGraphs format there is a manipulation library with a simple programming interface, tailored to the language being represented. The AGraphs format was used in an ad hoc manner as the representation format in tools developed at UFRN; to make it usable in other tools, an accurate description and the development of support tools were necessary. This description and these tools have been developed and are presented in this work. The work also compares the AGraphs format with other representation and exchange formats (e.g., ATerms, GDL, GraphML, GraX, GXL, and XML). The main objective of this comparison is to capture important characteristics and identify where the AGraphs concepts can still evolve.
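As a rough illustration of the kind of structure such a format represents, here is a minimal typed directed graph in Java. The real AGraphs API is not reproduced here; all names are hypothetical.

import java.util.ArrayList;
import java.util.List;

// A node carries a type tag and a value, as in a typed directed graph.
record Node(String type, String value) {}

// A directed, typed edge between two nodes.
record Edge(String type, Node source, Node target) {}

public class TypedGraph {
    private final List<Node> nodes = new ArrayList<>();
    private final List<Edge> edges = new ArrayList<>();

    public Node addNode(String type, String value) {
        Node n = new Node(type, value);
        nodes.add(n);
        return n;
    }

    public void addEdge(String type, Node from, Node to) {
        edges.add(new Edge(type, from, to));
    }

    public static void main(String[] args) {
        // An abstract-syntax-like fragment: plus(1, 2).
        TypedGraph g = new TypedGraph();
        Node plus = g.addNode("Op", "plus");
        Node one  = g.addNode("Int", "1");
        Node two  = g.addNode("Int", "2");
        g.addEdge("arg", plus, one);
        g.addEdge("arg", plus, two);
        System.out.println(g.edges.size() + " edges"); // 2 edges
    }
}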
Abstract:
The increasing complexity of applications has demanded hardware that is ever more flexible and able to achieve higher performance. Traditional hardware solutions have not succeeded in meeting these applications' constraints. General-purpose processors are inherently flexible, since they perform several tasks, but they cannot reach the performance of application-specific devices. Application-specific devices, conversely, achieve high performance because they perform only a few tasks, but they offer little flexibility. Reconfigurable architectures emerged as an alternative to these traditional approaches and have become an area of rising interest over the last decades. The purpose of this paradigm is to modify the device's behavior according to the application, making it possible to balance flexibility and performance while meeting application constraints. This work presents the design and implementation of a coarse-grained hybrid reconfigurable architecture for stream-based applications. The architecture, named RoSA, consists of reconfigurable logic attached to a processor. Its goal is to exploit the instruction-level parallelism of data-flow-intensive applications to accelerate their execution on the reconfigurable logic. Instruction-level parallelism extraction is done at compile time, so this work also presents an optimization phase for the RoSA architecture to be included in the GCC compiler. For the design of the architecture, this work also presents a methodology based on hardware reuse of datapaths, named RoSE. RoSE views the reconfigurable units through reusability levels, which provides area savings and datapath simplification. The architecture was implemented in a hardware description language (VHDL) and validated through simulation and prototyping. For performance analysis, benchmarks were used that demonstrated a speedup of 11x in the execution of some applications.
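One way to picture the compile-time ILP extraction step is ASAP (as-soon-as-possible) scheduling, which groups independent operations into levels that could issue in parallel on distinct functional units. The Java sketch below shows the idea on a tiny dataflow fragment; it is not the actual GCC optimization phase described in the work.

import java.util.List;

// A dataflow operation depending on earlier operations (by index).
record Op(String name, List<Integer> deps) {}

public class AsapSchedule {
    public static void main(String[] args) {
        // t1 = a*b; t2 = c*d; t3 = t1+t2; t4 = t3*e
        List<Op> ops = List.of(
                new Op("t1 = a*b", List.of()),
                new Op("t2 = c*d", List.of()),
                new Op("t3 = t1+t2", List.of(0, 1)),
                new Op("t4 = t3*e", List.of(2)));

        int[] level = new int[ops.size()];
        for (int i = 0; i < ops.size(); i++) {
            // An op is scheduled one level after its latest dependency.
            int l = 0;
            for (int d : ops.get(i).deps()) l = Math.max(l, level[d] + 1);
            level[i] = l;
        }

        // Ops sharing a level are independent and could run in parallel
        // on separate reconfigurable units.
        for (int i = 0; i < ops.size(); i++)
            System.out.println("level " + level[i] + ": " + ops.get(i).name());
    }
}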
Abstract:
Motion estimation is mainly responsible for the data reduction achieved in digital video encoding. It is also the most computationally demanding step. H.264 is the newest video compression standard and was planned to double the compression ratio achieved by previous standards. It was developed by the ITU-T Video Coding Experts Group (VCEG) together with the ISO/IEC Moving Picture Experts Group (MPEG) as the product of a partnership effort known as the Joint Video Team (JVT). H.264 introduces novelties that improve motion estimation efficiency, such as variable block sizes, quarter-pixel precision, and multiple reference frames. This work defines a hardware/software architecture for motion estimation, using a full search algorithm, variable block sizes, and mode decision. The work considers the use of reconfigurable devices, soft processors, and development tools for embedded systems such as Quartus II, SOPC Builder, Nios II, and ModelSim.
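The full search strategy exhaustively compares a block against every candidate position in a search window, typically using the sum of absolute differences (SAD) as the matching cost. A minimal Java sketch of the classic algorithm follows (a software rendering of the general technique, not the hardware design itself; window size and block size are illustrative parameters).

public class FullSearchME {
    // SAD between a block of the current frame at (cx,cy) and a
    // candidate block of the reference frame at (rx,ry).
    static int sad(int[][] cur, int[][] ref, int cx, int cy,
                   int rx, int ry, int block) {
        int sum = 0;
        for (int i = 0; i < block; i++)
            for (int j = 0; j < block; j++)
                sum += Math.abs(cur[cy + i][cx + j] - ref[ry + i][rx + j]);
        return sum;
    }

    // Exhaustive search over a +/-range window; returns {dx, dy}.
    static int[] fullSearch(int[][] cur, int[][] ref, int cx, int cy,
                            int block, int range) {
        int best = Integer.MAX_VALUE;
        int[] mv = {0, 0};
        for (int dy = -range; dy <= range; dy++) {
            for (int dx = -range; dx <= range; dx++) {
                int rx = cx + dx, ry = cy + dy;
                if (rx < 0 || ry < 0 || ry + block > ref.length
                        || rx + block > ref[0].length) continue;
                int cost = sad(cur, ref, cx, cy, rx, ry, block);
                if (cost < best) { best = cost; mv = new int[]{dx, dy}; }
            }
        }
        return mv;
    }

    public static void main(String[] args) {
        int[][] cur = {{10, 10}, {10, 10}};
        int[][] ref = {{0, 10}, {0, 10}};
        int[] mv = fullSearch(cur, ref, 0, 0, 1, 1);
        System.out.println(mv[0] + "," + mv[1]); // 1,0 : best match offset
    }
}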
Abstract:
This work presents the concept, design, and implementation of an MP-SoC platform named STORM (MP-SoC DirecTory-Based PlatfORM). Currently the platform is composed of the following modules: a SPARC V8 processor, a GPOP processor, a Cache module, a Memory module, a Directory module, and two different models of network-on-chip, NoCX4 and Obese Tree. All modules were implemented in SystemC, and simulated and validated individually and in groups; their description is presented in detail. To program the platform in C, a SPARC assembler was implemented, fully compatible with gcc's generated assembly code. For parallel programming, a mutex-management library was implemented, relying on the assembler's support. A total of 10 simulations of increasing complexity are presented to validate the concepts introduced. The simulations include real parallel applications, such as matrix multiplication, Mergesort, KMP, motion estimation, and 2D DCT.
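As an analogy for the kind of mutex-based parallel workload used in the validation, here is a mutex-protected parallel matrix multiplication in Java. The platform's own C library and assembler support are not shown; this only illustrates the pattern of workers claiming rows under a lock.

import java.util.concurrent.locks.ReentrantLock;

public class ParallelMatMul {
    static final int N = 4;
    static int[][] a = new int[N][N], b = new int[N][N], c = new int[N][N];
    static int nextRow = 0;                       // shared work-queue index
    static final ReentrantLock lock = new ReentrantLock();

    // Each worker repeatedly claims one row under the mutex, then
    // computes that row of C = A * B without further locking.
    static void worker() {
        while (true) {
            int row;
            lock.lock();
            try {
                if (nextRow >= N) return;
                row = nextRow++;
            } finally {
                lock.unlock();
            }
            for (int j = 0; j < N; j++)
                for (int k = 0; k < N; k++)
                    c[row][j] += a[row][k] * b[k][j];
        }
    }

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < N; i++) { a[i][i] = 1; b[i][i] = 2; } // A = I, B = 2I
        Thread t1 = new Thread(ParallelMatMul::worker);
        Thread t2 = new Thread(ParallelMatMul::worker);
        t1.start(); t2.start(); t1.join(); t2.join();
        System.out.println(c[2][2]); // 2
    }
}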
Abstract:
Adopting the software product line (SPL) approach brings several benefits compared to conventional development processes based on creating a single software system at a time. Developing an SPL differs from traditional software construction in having two essential phases: domain engineering, when the common and variable elements of the SPL are defined and implemented, and application engineering, when one or more applications (specific products) are derived by reusing the artifacts created in domain engineering. Testing is also fundamental and aims to detect defects in the artifacts produced during SPL development; however, the characteristics of an SPL bring new challenges to this activity that must be considered. Several approaches have recently been proposed for the product line testing process, but they have proven limited, providing only general guidelines. In addition, tools are lacking to support the variability management and customization of automated test cases for SPLs. In this context, this dissertation proposes a systematic approach to software product line testing. The approach offers: (i) automated SPL test strategies to be applied in domain and application engineering; (ii) explicit guidelines to support the implementation and reuse of automated test cases at the unit, integration, and system levels in domain and application engineering; and (iii) tool support for automating the variability management and customization of test cases. The approach is evaluated through its application to a software product line for web systems. The results show that the proposed approach can help developers deal with the challenges imposed by the characteristics of SPLs during the testing process.
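One common way to reuse automated tests across the products of a line is an abstract test class written in domain engineering, with its fixture bound per product in application engineering. A hedged JUnit-style sketch follows; all class names are hypothetical and not taken from the dissertation.

import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

// Domain engineering: a reusable test written against the SPL's
// common interface, with the product-specific part left abstract.
abstract class AbstractCheckoutTest {
    // Each derived product supplies its own configured instance.
    protected abstract Checkout createCheckout();

    @Test
    void totalIsNonNegative() {
        Checkout c = createCheckout();
        assertTrue(c.total() >= 0);
    }
}

// Application engineering: binding the reusable test to one product.
class BasicStoreCheckoutTest extends AbstractCheckoutTest {
    @Override
    protected Checkout createCheckout() {
        return new Checkout(); // variant without optional features
    }
}

// Minimal stub so the sketch is self-contained.
class Checkout {
    int total() { return 0; }
}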
Abstract:
Reconfigurable computing is an intermediate solution for tackling complex problems, making it possible to combine the speed of hardware with the flexibility of software. Reconfigurable architectures pursue several goals, among them increased performance. Using reconfigurable architectures to increase system performance is a well-known technique, especially because certain algorithms that are slow on current processors can be implemented directly in hardware. Among the various segments that use reconfigurable architectures, reconfigurable processors deserve special mention. These processors combine the functions of a microprocessor with reconfigurable logic and can be adapted after the development process. Reconfigurable Instruction Set Processors (RISP) are a subgroup of reconfigurable processors whose goal is the reconfiguration of the processor's instruction set, involving issues such as instruction formats, operands, and operations. The main objective of this work is the development of a RISP processor, combining the configuration of the processor's executable instruction sets at development time with their reconfiguration at execution time. The design and VHDL implementation of this RISP processor aim to prove the applicability and efficiency of two concepts: using more than one fixed instruction set, with only one set active at any given time, and the possibility of creating and combining new instructions such that the processor comes to recognize and use them in real time as if they belonged to the fixed instruction set. The creation and combination of instructions is done through a reconfiguration unit incorporated into the processor. This unit allows the user to send custom instructions to the processor, which can later be used as if they were fixed instructions. This work also presents simulations of applications involving fixed and custom instructions, along with comparisons of these applications with respect to power consumption and execution time, which confirm that the goals for which the processor was developed were attained.
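The idea of recognizing user-defined instructions at run time can be pictured as a decoder that also consults a table filled by the reconfiguration unit. The behavioral Java sketch below is purely illustrative, assuming a simplified two-operand instruction model; it is not the VHDL design described in the work.

import java.util.HashMap;
import java.util.Map;
import java.util.function.IntBinaryOperator;

public class RispSim {
    // Fixed instruction set, active from processor start-up.
    static final Map<String, IntBinaryOperator> fixed = new HashMap<>(Map.of(
            "add", (a, b) -> a + b,
            "sub", (a, b) -> a - b));

    // Table managed by the reconfiguration unit at run time.
    static final Map<String, IntBinaryOperator> custom = new HashMap<>();

    // The "reconfiguration unit": registers a new instruction so the
    // decoder treats it exactly like a fixed one from then on.
    static void reconfigure(String opcode, IntBinaryOperator impl) {
        custom.put(opcode, impl);
    }

    static int execute(String opcode, int a, int b) {
        IntBinaryOperator op = fixed.getOrDefault(opcode, custom.get(opcode));
        if (op == null) throw new IllegalArgumentException("unknown op " + opcode);
        return op.applyAsInt(a, b);
    }

    public static void main(String[] args) {
        System.out.println(execute("add", 2, 3));    // 5, a fixed instruction
        reconfigure("mac0", (a, b) -> a * b + 1);    // new combined instruction
        System.out.println(execute("mac0", 2, 3));   // 7, now "recognized"
    }
}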
Abstract:
Alongside technological advances, embedded systems are increasingly present in our everyday lives. Due to the growing demand for functionality, many tasks are split among processors, requiring more efficient communication architectures, such as networks on chip (NoC). NoCs are structures in which routers with point-to-point channels interconnect the cores of a system on chip (SoC), providing communication. Several networks on chip have been proposed in the literature, each with its own characteristics. Among these, this work adopts the Integrated Processing System NoC (IPNoSyS), a network on chip with unusual characteristics compared to general NoCs: its routing components also accumulate a processing function, i.e., they include functional units able to execute instructions. In this model, packets are processed and routed by the router architecture. This work aims at improving the performance of applications containing loops, which spend most of their execution time repeatedly executing the same instructions. It therefore proposes to optimize the runtime of such structures by employing an instruction-level parallelism technique, making better use of the resources offered by the architecture. The applications are tested on a dedicated simulator, and the results are compared with the original version of the architecture, which implements only packet-level parallelism.
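For loop-dominated workloads, a classic way to expose instruction-level parallelism is unrolling, so that independent iterations can proceed concurrently on multiple functional units. The tiny Java example below illustrates the transformation in general terms; it is not the architecture's actual mechanism.

public class Unroll {
    public static void main(String[] args) {
        int n = 8;
        int[] a = new int[n], b = new int[n];
        for (int i = 0; i < n; i++) b[i] = i;

        // Original loop: one element per iteration.
        // for (int i = 0; i < n; i++) a[i] = b[i] * 2;

        // Unrolled by 4: the four statements in the body are mutually
        // independent, so functional units can execute them in parallel.
        for (int i = 0; i < n; i += 4) {
            a[i]     = b[i]     * 2;
            a[i + 1] = b[i + 1] * 2;
            a[i + 2] = b[i + 2] * 2;
            a[i + 3] = b[i + 3] * 2;
        }
        System.out.println(a[7]); // 14
    }
}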
Abstract:
OBJECTIVE: To compare 30-day mortality and the use of certain drug groups by patients between 1992-1997, when no consensus-based guidelines for the treatment of acute myocardial infarction were available, and 2000-2002, after these guidelines were standardized in our service. METHODS: In the 1st and 2nd periods, 172 and 143 patients admitted with a diagnosis of acute myocardial infarction were retrospectively evaluated, respectively. The following statistical tests were performed: χ² to compare proportions, and Student's t and Mann-Whitney tests to compare means or medians. RESULTS: The analysis showed no difference between the two periods regarding the proportion of men, white patients, or the mean age of 61 years. Among classic risk factors, a difference was observed only in the incidence of dyslipidemia (17 and 29%). Regarding therapeutic strategy, there was a significant increase in the use of thrombolytics (39 and 61.5%), acetylsalicylic acid (70.9 and 96.5%), beta-blockers (34.8 and 67.8%), angiotensin-converting enzyme inhibitors (45.9 and 74.8%), and nitrates (61 and 85.3%), and a significant reduction in calcium channel blockers (16.8 and 5.3%), antiarrhythmics (29.1 and 9.7%), and diuretics (50.6 and 26.6%). The use of inotropes did not differ between the periods (29.6 and 32.1%). Thirty-day mortality showed a statistically significant reduction from 22.7 to 10.5%. CONCLUSION: The implementation of consensus guidelines for the treatment of acute myocardial infarction was accompanied by a significant reduction in the 30-day mortality rate.
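As a rough check of the mortality comparison, here is a worked χ² on a 2x2 table reconstructed from the reported percentages. The death counts (about 39/172 in period 1 and 15/143 in period 2) are rounded assumptions derived from 22.7% and 10.5%, not figures given in the abstract.

% Assumed counts: period 1: a = 39 deaths, b = 133 survivors (n = 172)
%                 period 2: c = 15 deaths, d = 128 survivors (n = 143)
\chi^2 = \frac{n\,(ad - bc)^2}{(a+b)(c+d)(a+c)(b+d)}
       = \frac{315\,(39 \cdot 128 - 133 \cdot 15)^2}{172 \cdot 143 \cdot 54 \cdot 261}
       \approx 8.2

With 1 degree of freedom this exceeds the 6.63 critical value, i.e. p < 0.01, consistent with the reported statistically significant reduction.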
Abstract:
This paper starts from the contemporary interest in the articulation between mental health and primary care. After a brief historical and conceptual overview of this field, operational aspects of the deinstitutionalization of care for people with mental disorders in primary care are discussed. Based on the analysis of several studies and experiences, fundamental components for advancing in this direction are then highlighted: (1) developing communication processes aimed at broadening professional legibility; (2) overcoming the centralization of actions restricted to traditional frameworks; (3) maintaining permanent questioning of the risk of psychiatrizing mental health care; (4) overcoming conceptions that blame the family group; and (5) investing in the training of primary care teams for the multiple dimensions of mental health care. Some possible paths and directions are thus indicated for the design of mental health actions in primary care that keep the anti-asylum perspective on the horizon.
Abstract:
Umbilical cord and placental blood banks were created after it was demonstrated that umbilical cord and placental blood (UCPB) is a rich source of hematopoietic progenitor cells (HPC) and an alternative to bone marrow cells for transplantation, which generated interest in storing the cells it contains. Brazilian legislation distinguishes between banks for unrelated allogeneic use (public) and banks for exclusively autologous use (private). In turn, the storage of UCPB for family use (directed donation) may be carried out in public umbilical cord and placental blood banks, hemotherapy services, or transplant centers, when a member of the unborn child's family has a diagnosed disease requiring HPC transplantation as treatment. Although the legislation is clear, Anvisa has identified interest in the possibility of releasing UCPB units stored in autologous banks for use by another family member, beyond the newborn beneficiary. This paper aims to promote reflection on a possible modification of the national legal parameters governing autologous UCPB banks, turning them into banks intended for family use, by presenting the main elements related to the topic. The study analyzed the legal technical-sanitary criteria for regulating the banks; described the characteristics of HPC from various sources and types of donation for transplantation; contextualized the relationship with the principles of bioethics and the advances in HPC therapy and research; and discussed possible risks involved in the process.