13 results for CNPQ::CIENCIAS EXATAS E DA TERRA::CIENCIA DA COMPUTACAO::METODOLOGIA E TECNICAS DA COMPUTACAO::ENGENHARIA DE SOFTWARE

in the Repositório Institucional da Universidade Tecnológica Federal do Paraná (RIUT)



Abstract:

This work presents a study of the Baars-Franklin architecture, which defines a model of computational consciousness, and applies it to a mobile robot navigation task. Inserting mobile robots into dynamic environments makes navigation tasks highly complex; to cope with constant environmental changes, it is essential that the robot can adapt to this dynamism. The approach adopted in this work is to make the execution of these tasks closer to how human beings react to the same conditions, by means of a model of computational consciousness. The LIDA architecture (Learning Intelligent Distribution Agent) is a cognitive system that seeks to model some human cognitive aspects, from low-level perceptions to decision making, as well as an attention mechanism and episodic memory. In the present work, a computational implementation of the LIDA architecture was evaluated by means of a case study, aiming to assess the capabilities of a cognitive approach to the navigation of a mobile robot in dynamic and unknown environments, using experiments both in virtual environments (simulation) and with a real robot in a realistic environment. The study concludes that benefits can be obtained by using conscious cognitive models in mobile robot navigation tasks, and presents the positive and negative aspects of this approach.
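As a rough illustration of the kind of cognitive cycle such an architecture implies (perception, attention, action selection informed by episodic memory), consider the sketch below; all class and method names are hypothetical and greatly simplified relative to the actual LIDA framework:

```python
# Minimal sketch of a LIDA-style cognitive cycle for robot navigation.
# All names here are hypothetical illustrations, not the LIDA framework's API.

import random

class CognitiveAgent:
    def __init__(self):
        self.episodic_memory = []          # past (focus, action) episodes

    def perceive(self, sensors):
        """Low-level perception: turn raw sensor readings into percepts."""
        return {"obstacle_ahead": sensors["front_distance"] < 0.5}

    def attend(self, percepts):
        """Attention: pick the most salient percept for the global workspace."""
        return max(percepts, key=lambda k: percepts[k])

    def select_action(self, focus):
        """Action selection informed by episodic memory of similar situations."""
        past = [a for (f, a) in self.episodic_memory if f == focus]
        if past:
            return max(set(past), key=past.count)   # reuse what was done before
        return random.choice(["turn_left", "turn_right", "forward"])

    def cycle(self, sensors):
        percepts = self.perceive(sensors)
        focus = self.attend(percepts)
        action = self.select_action(focus)
        self.episodic_memory.append((focus, action))
        return action

agent = CognitiveAgent()
print(agent.cycle({"front_distance": 0.3}))
```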


Abstract:

The large number of opinions generated by online users has made the former "word of mouth" find its way into the virtual world. Besides being numerous, many useful reviews are mixed with a large number of fraudulent, incomplete, or duplicate ones. How, then, can we find the features that influence the number of votes an opinion receives and identify useful reviews? The opinion mining literature offers several studies and techniques capable of analyzing properties found in review texts. This paper presents the application of a methodology for evaluating the usefulness of opinions, with the aim of identifying which characteristics have more influence on the number of votes: basic ones (e.g., ratings of the product and/or service, date of publication), textual ones (e.g., length of words and paragraphs), and semantic ones (e.g., the meaning of the words in the text). The evaluation was performed on a database extracted from TripAdvisor containing opinions about hotels written in Portuguese. Results show that users pay more attention to recent opinions with higher scores for value and location of the hotel and lower scores for sleep quality, service, and cleanliness. Texts with positive opinions, short words, and few adjectives and adverbs increase the chances of receiving more votes.
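A minimal sketch of how such an analysis could be set up, assuming a linear model and a hypothetical feature set (the paper's actual features and model are richer):

```python
# Illustrative sketch (not the paper's actual pipeline): relating a review's
# basic and textual features to the number of helpfulness votes it receives.
# Feature names, values, and the model choice are assumptions for illustration.

from sklearn.linear_model import LinearRegression

reviews = [
    {"rating": 5, "days_old": 3,  "avg_word_len": 4.2, "n_adjectives": 2, "votes": 14},
    {"rating": 2, "days_old": 90, "avg_word_len": 5.8, "n_adjectives": 9, "votes": 1},
    {"rating": 4, "days_old": 10, "avg_word_len": 4.5, "n_adjectives": 3, "votes": 8},
]

X = [[r["rating"], r["days_old"], r["avg_word_len"], r["n_adjectives"]] for r in reviews]
y = [r["votes"] for r in reviews]

model = LinearRegression().fit(X, y)
# The sign and size of each coefficient hint at which features drive votes.
for name, coef in zip(["rating", "days_old", "avg_word_len", "n_adjectives"], model.coef_):
    print(f"{name}: {coef:+.2f}")
```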


Abstract:

The growing demand for large-scale virtualization environments, such as those used in cloud computing, has led to a need for efficient management of computing resources. RAM is one of the most heavily demanded resources in these environments and is usually the main factor limiting the number of virtual machines that can run on a physical host. Recently, hypervisors have introduced mechanisms for transparent memory sharing between virtual machines in order to reduce the total demand for system memory. These mechanisms "merge" similar pages detected in multiple virtual machines into the same physical memory, using a copy-on-write mechanism in a manner that is transparent to the guest systems. The objective of this study is to present an overview of these mechanisms and to evaluate their performance and effectiveness. Results for two popular hypervisors (VMware and KVM) using different guest operating systems (Linux and Windows) and different workloads (synthetic and real) are presented herein. The results show significant performance differences between hypervisors according to the guest system workloads and execution time.
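The following sketch illustrates the general idea of content-based page sharing with copy-on-write, under assumed data structures; real hypervisors operate on physical page frames and scan memory incrementally rather than on store:

```python
# Simplified sketch of transparent page sharing: identical pages are detected
# by hashing their content and merged into one copy; writing to a shared page
# triggers copy-on-write. The data layout is hypothetical.

import hashlib

PAGE_SIZE = 4096

class SharedMemory:
    def __init__(self):
        self.pages = {}        # content hash -> (page bytes, reference count)

    def store(self, content: bytes) -> str:
        """Store a page, merging it with an identical existing page if any."""
        key = hashlib.sha1(content).hexdigest()
        data, refs = self.pages.get(key, (content, 0))
        self.pages[key] = (data, refs + 1)
        return key

    def write(self, key: str, new_content: bytes) -> str:
        """Copy-on-write: a modified shared page gets its own private copy."""
        data, refs = self.pages[key]
        if refs > 1:
            self.pages[key] = (data, refs - 1)   # others keep the shared copy
        else:
            del self.pages[key]
        return self.store(new_content)

mem = SharedMemory()
k1 = mem.store(b"\x00" * PAGE_SIZE)
k2 = mem.store(b"\x00" * PAGE_SIZE)    # identical page: merged, refcount 2
assert k1 == k2 and mem.pages[k1][1] == 2
```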


Abstract:

Abstract – Background – The software effort estimation research area aims to improve the accuracy of effort estimates in software projects and activities. Aims – This study describes the development and use of a web application to collect data generated by the Planning Poker estimation process, and the analysis of the collected data to investigate the impact of revising previous estimates when conducting similar estimates in a Planning Poker context. Method – Software activities were estimated by Universidade Tecnológica Federal do Paraná (UTFPR) computing students using Planning Poker, with and without revising previous similar activities, storing data about the decision-making process. The collected data were then used to investigate the impact that revising similar, already executed activities has on the accuracy of software effort estimates. Obtained results – The UTFPR computing students were divided into 14 groups. Eight of them showed an accuracy increase in more than half of their estimates, three had roughly the same accuracy in more than half of their estimates, and only three lost accuracy in more than half of their estimates. Conclusion – Reviewing similar, already executed software activities when using Planning Poker led to more accurate estimates in most cases and can therefore improve the software development process.
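A small sketch of how accuracy could be compared between the two conditions, using the Magnitude of Relative Error (MRE), a common accuracy measure in effort estimation studies; the numbers are invented for illustration and the study's actual metric may differ:

```python
# Comparing estimation accuracy with and without revising similar activities,
# via Magnitude of Relative Error (MRE). All effort values below are made up.

def mre(actual: float, estimate: float) -> float:
    """Magnitude of Relative Error: |actual - estimate| / actual."""
    return abs(actual - estimate) / actual

# (actual effort, estimate without revising, estimate with revising), in hours
activities = [(10.0, 16.0, 12.0), (6.0, 4.0, 5.5), (20.0, 28.0, 24.0)]

without = [mre(a, e1) for a, e1, _ in activities]
with_rev = [mre(a, e2) for a, _, e2 in activities]

print(f"mean MRE without revising: {sum(without) / len(without):.2f}")
print(f"mean MRE with revising:    {sum(with_rev) / len(with_rev):.2f}")
```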


Abstract:

This research deals with the use of a participatory design methodology to develop a repository of open educational resources, Arcaz. It discusses key aspects of the neutrality and determinism of technology within the context of Social Studies of Science and Technology, and presents some concepts from the critical theory of technology related to the democratic construction of technological artifacts. It discusses the philosophical heritage of the movements that led to the emergence of free software, open education, and open educational resources, and argues that participatory design shares similar ideals. It presents concepts of human-computer interaction, interaction design, and user-centered design, which are important for enhancing the user experience in information systems. It addresses participatory design as a methodology that allows the democratic participation of users in technological construction, promoting mutual learning and an active voice for the participants. It develops a participatory design methodology adapted to the Arcaz context of use and describes the procedures of the meetings conducted to apply participatory design techniques to the repository, together with the results obtained. It concludes with a study of some of the interventions suggested for the system, orientations for future applications of participatory practices in the development of the repository, and a list of best practices, focusing on the ethical principles that should guide participatory design.


Abstract:

The knowledge-intensive character of software production and its rising demand suggest the need to establish mechanisms to properly manage the knowledge involved, in order to meet requirements of deadlines, costs, and quality. Knowledge capitalization is a process that ranges from the identification to the evaluation of the knowledge produced and used. Specifically for software development, capitalization enables easier access, minimizes the loss of knowledge, reduces the learning curve, and avoids repeated errors and rework. This thesis presents Know-Cap, a method developed to organize and guide the capitalization of knowledge in software development. Know-Cap facilitates the location, preservation, value addition, and updating of knowledge, in order to use it in the execution of new tasks. The method was proposed on the basis of a set of methodological procedures: a literature review, a systematic review, and an analysis of related work. The feasibility and appropriateness of Know-Cap were analyzed through an application study, conducted on a real case, and an analytical study of software development companies. The results obtained indicate that Know-Cap supports the capitalization of knowledge in software development.


Abstract:

Humans have a remarkable ability to extract information from visual data acquired by sight. Through a learning process that starts at birth and continues throughout life, image interpretation becomes almost instinctive. At a glance, one can easily describe a scene with reasonable precision, naming its main components. Usually, this is done by extracting low-level features such as edges, shapes, and textures, and associating them with high-level meanings. In this way, a semantic description of the scene is produced. An example of this is the human capacity to recognize and describe the physical and behavioral characteristics of other people, or biometrics. Soft biometrics also represent inherent characteristics of the human body and behavior, but do not allow the unique identification of a person. The computer vision field aims to develop methods capable of performing visual interpretation with performance similar to that of humans. This thesis proposes computer vision methods that allow the extraction of high-level information from images in the form of soft biometrics. The problem is approached with both unsupervised and supervised learning methods. The first seeks to group images by automatically learning feature extraction, combining convolution techniques, evolutionary computing, and clustering; the images used in this approach contain faces and people. The second approach employs convolutional neural networks, which can operate on raw images, learning both the feature extraction and the classification processes. Here, images are classified according to gender and clothing, the latter divided into the upper and lower parts of the human body. The first approach, when tested on different image datasets, obtained an accuracy of approximately 80% for faces versus non-faces and 70% for people versus non-people. The second, tested on images and videos, obtained an accuracy of about 70% for gender, 80% for upper-body clothing, and 90% for lower-body clothing. The results of these case studies show that the proposed methods are promising, enabling the automatic annotation of high-level information in images. This opens possibilities for applications in diverse areas, such as content-based image and video retrieval and automatic video surveillance, reducing the human effort in manual annotation and monitoring tasks.
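As an illustration of the supervised approach, the sketch below shows a small convolutional network that learns feature extraction and classification end-to-end from raw images; the architecture is an assumption for illustration, not the network used in the thesis:

```python
# Illustrative sketch of a small CNN for a soft-biometric attribute (e.g.,
# gender) learned end-to-end from raw images. Architecture is hypothetical.

import torch
import torch.nn as nn

class SoftBiometricCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)  # for 64x64 inputs

    def forward(self, x):
        x = self.features(x)               # learned feature extraction
        x = x.flatten(start_dim=1)
        return self.classifier(x)          # learned classification

model = SoftBiometricCNN()
batch = torch.randn(4, 3, 64, 64)          # 4 RGB images of 64x64 pixels
print(model(batch).shape)                  # torch.Size([4, 2])
```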


Abstract:

In this research work, a new routing protocol for opportunistic networks is presented. The proposed protocol is called PSONET (PSO for Opportunistic Networks), since it uses a hybrid system based on a Particle Swarm Optimization (PSO) algorithm. The main motivation for using PSO is to take advantage of its individual-based search and learning adaptation. PSONET uses the PSO technique to drive network traffic through a good subset of message forwarders. It analyzes network communication conditions, detecting whether each node has sparse or dense connections, and thus makes better message routing decisions. The PSONET protocol is compared with the Epidemic and PROPHET protocols in three different mobility scenarios: an activity-based mobility model, which simulates the everyday life of people in their work, leisure, and rest activities; a community-based mobility model, which simulates groups of people in their communities who occasionally contact other people, who may or may not be part of their community, to exchange information; and a random mobility pattern, which simulates a scenario divided into communities where people choose a destination at random and, based on a restriction map, move to it using the shortest path. The simulation results, obtained with the ONE simulator, show that in the community-based and random mobility scenarios PSONET achieves a higher message delivery rate and lower message replication than the Epidemic and PROPHET protocols.
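For reference, the canonical PSO update that such a protocol builds on is sketched below; the parameter values are common defaults, not those used in PSONET:

```python
# Canonical PSO: each particle moves under the pull of its own best position
# (pbest) and the swarm's best (gbest). Minimizes a generic objective function.

import random

def pso(objective, dim=2, n_particles=10, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # each particle's best position
    gbest = min(pbest, key=objective)            # swarm's best position

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=objective)
    return gbest

# Example: minimize the sphere function; the optimum is at the origin.
print(pso(lambda p: sum(x * x for x in p)))
```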


Abstract:

The purpose of this work is to demonstrate and assess a simple algorithm for automatically estimating the most salient region in an image, with possible applications in computer vision. The algorithm exploits the connection between color dissimilarities in an image and the image's most salient region, while avoiding the use of image priors. Pixel dissimilarity is an informal function of the distance between a specific pixel's color and the colors of other pixels in the image. We examine the relation between pixel color dissimilarity and salient region detection on the MSRA1K image dataset, and propose a simple algorithm for salient region detection based on random pixel color dissimilarity. We define dissimilarity as the accumulated distance between each pixel and a sample of n other random pixels, in the CIELAB color space. An important result is that the random dissimilarity between each pixel and just one other pixel (n = 1) is enough to create adequate saliency maps when combined with a median filter, with average performance competitive with other related methods in the saliency detection field. The assessment was performed by means of precision-recall curves. The idea is inspired by the human attention mechanism, which is able to choose a few specific regions to focus on, a biological capability that the computer vision community aims to emulate. We also review some of the history of this topic of selective attention.
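A sketch of the algorithm as the abstract describes it (accumulated CIELAB distances to n random pixels, then a median filter); the filter size is an assumption:

```python
# Saliency via random pixel color dissimilarity: each pixel's saliency is its
# accumulated CIELAB distance to n randomly chosen pixels, smoothed with a
# median filter. Dependencies: numpy, scikit-image, scipy.

import numpy as np
from skimage import color
from scipy.ndimage import median_filter

def random_dissimilarity_saliency(rgb_image, n=1, filter_size=9):
    lab = color.rgb2lab(rgb_image)            # work in CIELAB, as in the paper
    h, w, _ = lab.shape
    flat = lab.reshape(-1, 3)
    saliency = np.zeros(h * w)
    for _ in range(n):
        idx = np.random.randint(0, h * w, size=h * w)  # one random pixel each
        saliency += np.linalg.norm(flat - flat[idx], axis=1)
    saliency = median_filter(saliency.reshape(h, w), size=filter_size)
    return (saliency - saliency.min()) / (np.ptp(saliency) + 1e-12)  # to [0, 1]

# Usage (random image as a stand-in for a real photograph):
saliency_map = random_dissimilarity_saliency(np.random.rand(120, 160, 3))
print(saliency_map.shape, saliency_map.min(), saliency_map.max())
```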


Abstract:

Requirements specification has long been recognized as a critical activity in software development processes because of its impact on project risks when poorly performed. A large number of studies address theoretical aspects, propositions of techniques, and recommended practices for Requirements Engineering (RE). To be successful, RE has to ensure that the specified requirements are complete and correct, meaning that all the intents of the stakeholders in a given business context are covered by the requirements and that no unnecessary requirement is introduced. However, accurately capturing the business intents of the stakeholders remains a challenge and is a major factor in software project failures. This master's dissertation presents a novel method, referred to as "Problem-Based SRS", aimed at improving the quality of the Software Requirements Specification (SRS) in the sense that the stated requirements provide suitable answers to the customer's real business issues. In this approach, knowledge about the software requirements is constructed from knowledge about the customer's problems. Problem-Based SRS consists of an organization of activities and outcome objects in a process with five main steps. It aims to support the software requirements engineering team in systematically analyzing the business context and specifying the software requirements, while also taking into account an initial vision of the software. The quality aspects of the specifications are evaluated using traceability techniques and axiomatic design principles. The case studies conducted and presented in this document indicate that the proposed method can contribute significantly to improving software requirements specifications.
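As a loose illustration of the traceability idea (not the dissertation's actual procedure), the sketch below checks that every customer problem is covered by some requirement and that every requirement traces back to a problem; all identifiers are hypothetical:

```python
# Hypothetical traceability check in the spirit of Problem-Based SRS:
# completeness (every problem covered) and necessity (every requirement
# traces to a problem). All identifiers and texts are invented.

problems = {"P1": "late deliveries", "P2": "billing errors"}
requirements = {
    "R1": {"text": "track shipments in real time", "traces_to": {"P1"}},
    "R2": {"text": "validate invoices before dispatch", "traces_to": {"P2"}},
    "R3": {"text": "dark mode for the UI", "traces_to": set()},
}

covered = set().union(*(r["traces_to"] for r in requirements.values()))
uncovered_problems = set(problems) - covered
untraced_requirements = [rid for rid, r in requirements.items() if not r["traces_to"]]

print("problems without requirements:", uncovered_problems or "none")
print("requirements without a problem (possibly unnecessary):", untraced_requirements)
```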


Abstract:

Social networks rely on concepts such as collaboration, cooperation, replication, flow, speed, interaction, and engagement, and aim at the continuous sharing and resharing of information in support of permanent social interaction. Facebook, the largest social network in the world, reached, in May 2016, the mark of 1.09 billion daily active users, drawing 161.7 million hours of users' attention to the website every day. These users share 4.75 billion units of content daily. The research presented in this dissertation investigates the management of knowledge and collective intelligence through the introduction of mechanisms that enable users to manage and organize the information flowing in the feeds of the Facebook groups in which they participate, turning Facebook into a device for collective knowledge and information management that goes far beyond mere interaction and communication among people. The adoption of the Design Science Research methodology is intended to instill the "genes" of collective intelligence, as presented in the literature, into the computational artifact being developed, so that intelligence can be managed and used to create even more knowledge and intelligence for and by the group. The main theoretical contribution of this dissertation is to discuss knowledge management and collective intelligence in a complementary and integrated manner, showing how efforts to obtain one also contribute to leveraging the other.


Abstract:

This document presents GEmSysC, a unified cryptographic API for embedded systems. Software layers implementing this API can be built on top of existing libraries, allowing embedded software to access cryptographic functions in a consistent way that does not depend on the underlying library. The API complies with good practices for API design and for embedded software development, and took its inspiration from other cryptographic libraries and standards. The main inspiration for creating GEmSysC was the CMSIS-RTOS standard, which defines a unified API for embedded software in an implementation-independent way, but targets operating systems instead of cryptographic functions. GEmSysC is made up of a generic core and attachable modules, one for each cryptographic algorithm. This document contains the specification of the GEmSysC core and of three of its modules: AES, RSA, and SHA-256. GEmSysC was designed for embedded systems, but its use is not restricted to them; after all, embedded systems are just very limited computing devices. As a proof of concept, two implementations of GEmSysC were developed. One was built on wolfSSL, an open-source library for embedded systems; the other on OpenSSL, which is open source and a de facto standard but does not specifically target embedded systems. The implementation built on wolfSSL was evaluated on a Cortex-M3 processor with no operating system, while the implementation built on OpenSSL was evaluated on a personal computer running Windows 10. Test results show GEmSysC to be simpler than other libraries in some aspects, and that both implementations incur little overhead in computation time compared with the underlying cryptographic libraries themselves: measured per cryptographic algorithm, the overhead is between around 0% and 0.17% for the implementation over wolfSSL and between 0.03% and 1.40% for the one over OpenSSL. This document also presents the memory costs of each implementation.
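The core idea, a thin unified interface with interchangeable backends over existing libraries, can be sketched as follows (in Python for brevity; GEmSysC itself is specified for C-based embedded software, and the names below are hypothetical, not its actual API):

```python
# Conceptual sketch of a unified crypto API with swappable backends. The names
# are hypothetical illustrations of the pattern, not GEmSysC's actual C API.

import hashlib

class Sha256Backend:
    """Adapter over one concrete library (here, Python's hashlib)."""
    def __init__(self):
        self._h = hashlib.sha256()
    def update(self, data: bytes):
        self._h.update(data)
    def digest(self) -> bytes:
        return self._h.digest()

BACKENDS = {"sha256": Sha256Backend}   # attachable per-algorithm modules

def hash_message(algorithm: str, message: bytes) -> bytes:
    """The 'core': callers depend only on this function, never on the
    underlying library, which can be swapped without touching callers."""
    backend = BACKENDS[algorithm]()
    backend.update(message)
    return backend.digest()

print(hash_message("sha256", b"hello").hex())
```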


Abstract:

This work proposes to extend the Notification Oriented Paradigm (NOP) so that it supports fuzzy concepts. NOP is inspired by elements of the imperative and declarative paradigms and seeks to solve some of the drawbacks of both. By decomposing an application into a network of smaller computational entities that are executed only when necessary, NOP eliminates the need to perform unnecessary computations and helps to achieve better logical-causal uncoupling, facilitating code reuse and the distribution of applications over multiple processors or machines. In addition, NOP allows logical-causal knowledge to be expressed at a high level of abstraction, through rules in IF-THEN format. Fuzzy systems, in turn, perform logical inferences over causal knowledge bases (IF-THEN rules) that can deal with problems involving uncertainty. Since NOP uses IF-THEN rules in an alternative way, reducing redundant evaluations and providing better decoupling, this research identifies, proposes, and evaluates the changes needed for NOP to be usable in the development of fuzzy systems. Two fully usable materializations were then created: a C++ framework and a complete programming language (LingPONFuzzy), both providing support for fuzzy inference systems. From these, case studies were created and several tests conducted in order to validate the proposed solution. The test results show a significant reduction in the number of rules evaluated in comparison with a fuzzy system developed using conventional tools (frameworks), which may represent a performance improvement for applications.
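The notification mechanism described above can be sketched as follows; for brevity the sketch uses crisp rather than fuzzy rules, and the names are illustrative, not the NOP or LingPONFuzzy API:

```python
# Sketch of notification-oriented rule evaluation: rules subscribe to the
# attributes they depend on and are re-evaluated only when a subscribed value
# changes, instead of re-scanning the whole rule base every cycle.

class Attribute:
    def __init__(self, value):
        self.value = value
        self.subscribers = []          # rules notified when this value changes

    def set(self, value):
        if value != self.value:
            self.value = value
            for rule in self.subscribers:
                rule.evaluate()        # only affected rules are re-evaluated

class Rule:
    def __init__(self, condition, action, attributes):
        self.condition, self.action = condition, action
        for attr in attributes:
            attr.subscribers.append(self)

    def evaluate(self):
        if self.condition():
            self.action()

temperature = Attribute(20)
rule = Rule(condition=lambda: temperature.value > 30,
            action=lambda: print("cooling on"),
            attributes=[temperature])

temperature.set(35)    # change notifies just this rule -> prints "cooling on"
temperature.set(20)    # condition now false -> no action
```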