5 results for TECNICAS DE EVALUACION
in the Repositório Institucional da Universidade Tecnológica Federal do Paraná (RIUT)
Abstract:
The knowledge-intensive character of software production and its rising demand suggest the need to establish mechanisms to properly manage the knowledge involved, in order to meet requirements of deadlines, costs and quality. Knowledge capitalization is a process that ranges from the identification to the evaluation of the knowledge produced and used. Specifically for software development, capitalization enables easier access to knowledge, minimizes its loss, reduces the learning curve, and avoids repeated errors and rework. Thus, this thesis presents Know-Cap, a method developed to organize and guide the capitalization of knowledge in software development. Know-Cap facilitates the location, preservation, value addition and updating of knowledge, so that it can be used in the execution of new tasks. The method was proposed on the basis of a set of methodological procedures: literature review, systematic review and analysis of related work. The feasibility and appropriateness of Know-Cap were analyzed through an application study conducted in a real case and an analytical study of software development companies. The results obtained indicate that Know-Cap supports the capitalization of knowledge in software development.
Abstract:
Humans have a high ability to extract information from visual data acquired by sight. Through a learning process, which starts at birth and continues throughout life, image interpretation becomes almost instinctive. At a glance, one can easily describe a scene with reasonable precision, naming its main components. Usually, this is done by extracting low-level features such as edges, shapes and textures, and associating them with high-level meanings. In this way, a semantic description of the scene is produced. An example of this is the human capacity to recognize and describe other people's physical and behavioral characteristics, or biometrics. Soft biometrics also represent inherent characteristics of the human body and behaviour, but do not allow unique identification of a person. The computer vision field aims to develop methods capable of performing visual interpretation with performance similar to humans. This thesis proposes computer vision methods that allow high-level information extraction from images in the form of soft biometrics. The problem is approached in two ways: unsupervised and supervised learning methods. The first seeks to group images through automatically learned feature extraction, combining convolution techniques, evolutionary computing and clustering; the images employed in this approach contain faces and people. The second approach employs convolutional neural networks, which can operate on raw images, learning both the feature extraction and the classification processes; here, images are classified according to gender and clothing, divided into upper and lower parts of the human body. When tested with different image datasets, the first approach obtained an accuracy of approximately 80% for faces versus non-faces and 70% for persons versus non-persons. The second, tested using images and videos, obtained an accuracy of about 70% for gender, 80% for upper-body clothing and 90% for lower-body clothing. The results of these case studies show that the proposed methods are promising, allowing automatic high-level image annotation. This opens possibilities for applications in diverse areas, such as content-based image and video search and automatic video surveillance, reducing human effort in manual annotation and monitoring.
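As a hedged illustration of the supervised approach above, the sketch below shows a minimal convolutional network that classifies raw person images by gender; the architecture, the 64x64 input size and all names are assumptions made for this example, not the networks evaluated in the thesis.

```python
# Minimal sketch (PyTorch) of a CNN that classifies raw images by gender.
# Architecture, input size and training details are illustrative assumptions,
# not the networks described in the thesis.
import torch
import torch.nn as nn

class GenderCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Feature extraction is learned directly from raw 64x64 RGB images.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        # Classification head maps the learned features to the two classes.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = GenderCNN()
    batch = torch.randn(4, 3, 64, 64)      # 4 synthetic RGB images, 64x64 pixels
    logits = model(batch)                  # shape: (4, 2)
    predictions = logits.argmax(dim=1)     # 0/1 class indices (e.g., female/male)
    print(predictions)
```

Because such a network operates directly on pixels, the feature extraction and classification stages are learned jointly from labeled examples, which is the defining property of the second approach described in the abstract.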
Abstract:
The textile industry generates a large volume of effluent with high organic loading, whose intense color arises from residual dyes. Due to the environmental implications caused by this category of contaminant, there is a permanent search for methods to remove these compounds from industrial wastewaters. Adsorption is one of the most efficient routes for such sequestering/remediation, particularly with inexpensive materials such as agricultural residues (e.g., sugarcane bagasse) and cotton dust waste (CDW) from weaving, in their natural or chemically modified forms. The inclusion of quaternary amino (DEAE+) and carboxymethyl (CM-) groups in the CDW cellulosic structure generates an ion-exchange capacity in this formerly inert matrix and, consequently, consolidates its ability for electrovalent adsorption of residual textile dyes. The obtained ionic matrices were evaluated for pHpzc and for retention efficiency of various textile dyes under different experimental conditions, such as initial concentration, temperature and contact time, in order to determine the kinetic and thermodynamic parameters of batch adsorption and thereby understand how the process occurs, as interpreted from the respective isotherms. A change in pHpzc was observed for CM--CDW (6.07) and DEAE+-CDW (9.66) as compared to native CDW (6.46), confirming changes in the total surface charge. The ionized matrices were effective in removing all evaluated pure or residual textile dyes under the various experimental conditions tested. The adsorption kinetic data were best fitted by a pseudo-second-order model, and an intraparticle diffusion model suggested that the process takes place in more than one step. The time required for the system to reach equilibrium varied with the initial dye concentration, being shorter in dilute solutions. The Langmuir isotherm model gave the best fit to the experimental data. The maximum adsorption capacity varied for each tested dye and is closely related to the adsorbent/adsorbate interaction and the chemical structure of the dye. Only a few dyes showed a linear variation of the equilibrium constant (Ka) with the inverse of temperature, which may have influenced their thermodynamic behavior. Dyes that could be evaluated, such as BR 18:1 and AzL, showed features of an endothermic adsorption process (positive ΔH°), while the dye VmL presented characteristics of an exothermic process (negative ΔH°). ΔG° values suggested that adsorption occurred spontaneously, except for the BY 28 dye, and the ΔH° values indicated that adsorption occurred through a chemisorption process. The reduction of 31 to 51% in the biodegradability of the matrices after dye adsorption means that they must go through a cleaning process before being discarded or recycled, and the regeneration test indicated that the matrices can be reused up to five times without loss of performance. The DEAE+-CDW matrix was efficient in removing color from a real textile effluent, reaching a UV-Visible spectral area decrease of 93% when applied at a proportion of 15 g of ion-exchanger matrix per liter of colored wastewater, even in the parallel presence of 50 g L-1 of mordant salts in the wastewater. The removal of colored matter by the synthesized matrices varied widely, from 40.27 to 98.65 mg g-1 of ionized matrix, depending on the particular chemical structure of each dye adsorbed.
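For reference, the kinetic, isotherm and thermodynamic models named in this abstract have the following standard textbook forms (generic symbols only; no fitted parameter values from the thesis are implied):

```latex
% Pseudo-second-order kinetics (linearized form)
\frac{t}{q_t} = \frac{1}{k_2 q_e^{2}} + \frac{t}{q_e}

% Intraparticle (Weber-Morris) diffusion
q_t = k_{id}\, t^{1/2} + C

% Langmuir isotherm
q_e = \frac{q_{\max} K_L C_e}{1 + K_L C_e}

% Van't Hoff relation and Gibbs free energy
\ln K_a = \frac{\Delta S^{\circ}}{R} - \frac{\Delta H^{\circ}}{R\,T},
\qquad
\Delta G^{\circ} = \Delta H^{\circ} - T\,\Delta S^{\circ}
```

Here q_t and q_e are the amounts adsorbed at time t and at equilibrium (mg g-1), C_e is the equilibrium dye concentration, k_2 and k_id are rate constants, and the slope of ln Ka versus 1/T yields ΔH°, from which ΔG° and ΔS° follow.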
Abstract:
Requirements specification has long been recognized as a critical activity in software development processes because of its impact on project risks when poorly performed. A large number of studies address theoretical aspects, propositions of techniques, and recommended practices for Requirements Engineering (RE). To be successful, RE has to ensure that the specified requirements are complete and correct, meaning that all the intents of the stakeholders in a given business context are covered by the requirements and that no unnecessary requirement has been introduced. However, accurately capturing the business intents of the stakeholders remains a challenge and is a major factor in software project failures. This master’s dissertation presents a novel method, referred to as “Problem-Based SRS”, aimed at improving the quality of the Software Requirements Specification (SRS) in the sense that the stated requirements provide suitable answers to real customer business issues. In this approach, the knowledge about the software requirements is constructed from the knowledge about the customer’s problems. Problem-Based SRS consists of an organization of activities and outcome objects in a process with five main steps. It aims to support the requirements engineering team in systematically analyzing the business context and specifying the software requirements, also taking into account a first glance and vision of the software. The quality aspects of the specifications are evaluated using traceability techniques and axiomatic design principles. The case studies conducted and presented in this document indicate that the proposed method can contribute significantly to improving the software requirements specification.
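As a hedged illustration of the completeness and correctness criteria mentioned above (every stakeholder problem covered, no unjustified requirement), the sketch below performs a simple traceability check; the identifiers, the mapping and the function are hypothetical and are not part of the Problem-Based SRS method itself.

```python
# Illustrative sketch of a problem-to-requirement traceability check.
# The problem/requirement identifiers and the mapping are hypothetical;
# Problem-Based SRS defines its own artifacts and five-step process.
from typing import Dict, Set

def check_traceability(problems: Set[str],
                       requirements: Set[str],
                       trace: Dict[str, Set[str]]) -> None:
    """trace maps each requirement to the customer problems it addresses."""
    covered = set().union(*trace.values()) if trace else set()
    uncovered_problems = problems - covered                            # completeness gap
    unjustified_reqs = {r for r in requirements if not trace.get(r)}   # correctness gap

    print("Problems without any requirement:", uncovered_problems or "none")
    print("Requirements not traced to a problem:", unjustified_reqs or "none")

if __name__ == "__main__":
    problems = {"P1 late invoicing", "P2 duplicate orders"}
    requirements = {"R1", "R2", "R3"}
    trace = {"R1": {"P1 late invoicing"}, "R2": {"P2 duplicate orders"}, "R3": set()}
    check_traceability(problems, requirements, trace)   # flags R3 as unjustified
```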
Abstract:
This document presents GEmSysC, a unified cryptographic API for embedded systems. Software layers implementing this API can be built over existing libraries, allowing embedded software to access cryptographic functions in a consistent way that does not depend on the underlying library. The API complies with good practices for API design and for embedded software development, and took its inspiration from other cryptographic libraries and standards. The main inspiration for creating GEmSysC was the CMSIS-RTOS standard, which defines a unified API for embedded software in an implementation-independent way, but targets operating systems instead of cryptographic functions. GEmSysC is made of a generic core and attachable modules, one for each cryptographic algorithm. This document contains the specification of the core of GEmSysC and three of its modules: AES, RSA and SHA-256. GEmSysC was built targeting embedded systems, but its use is not restricted to such systems; after all, embedded systems are just very limited computing devices. As a proof of concept, two implementations of GEmSysC were made. One of them was built over wolfSSL, an open-source library for embedded systems. The other was built over OpenSSL, which is open source and a de facto standard; unlike wolfSSL, OpenSSL does not specifically target embedded systems. The implementation built over wolfSSL was evaluated on a Cortex-M3 processor with no operating system, while the implementation built over OpenSSL was evaluated on a personal computer running the Windows 10 operating system. This document presents test results showing GEmSysC to be simpler than other libraries in some aspects. These results show that both implementations incur little computation-time overhead compared to the underlying cryptographic libraries themselves. The overhead was measured for each cryptographic algorithm and lies between around 0% and 0.17% for the implementation over wolfSSL and between 0.03% and 1.40% for the one over OpenSSL. This document also presents the memory costs of each implementation.
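As a hedged illustration of the core-plus-modules design described above (not the actual GEmSysC C interface, whose names and signatures are given in the specification), the sketch below shows a generic core that dispatches calls to attachable algorithm modules backed by an existing library; here Python's hashlib stands in for wolfSSL or OpenSSL.

```python
# Illustrative sketch of a "generic core + attachable module" cryptographic API.
# Names and structure are hypothetical; GEmSysC itself is specified as a C API
# with modules such as AES, RSA and SHA-256 layered over wolfSSL or OpenSSL.
import hashlib
from typing import Callable, Dict

class CryptoCore:
    """Generic core: holds registered algorithm modules and dispatches calls."""
    def __init__(self) -> None:
        self._hash_modules: Dict[str, Callable[[bytes], bytes]] = {}

    def register_hash(self, name: str, fn: Callable[[bytes], bytes]) -> None:
        self._hash_modules[name] = fn            # attach a module at runtime

    def hash(self, name: str, data: bytes) -> bytes:
        return self._hash_modules[name](data)    # same call regardless of backend

# A SHA-256 module backed by Python's standard library (standing in for the
# wolfSSL- or OpenSSL-backed implementations mentioned above).
def sha256_module(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

if __name__ == "__main__":
    core = CryptoCore()
    core.register_hash("SHA-256", sha256_module)
    digest = core.hash("SHA-256", b"hello")
    print(digest.hex())
```

The design choice mirrored here is that application code calls the core through a stable interface, while the backing library can be swapped by registering a different module, which is what lets the same API sit over wolfSSL on a microcontroller or over OpenSSL on a PC.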