1000 results for Ciência da Computação (Computer Science)
Abstract:
Soft skills and teamwork practices were identified as the main deficiencies of recent graduates of computing courses. This issue motivated a qualitative study aimed at investigating the challenges faced by professors of those courses in conducting, monitoring and assessing collaborative software development projects. Professors reported different challenges, including difficulties in assessing students at both the collective and the individual level. In this context, a quantitative study was conducted to map students' soft skills to a set of indicators that can be extracted from software repositories using data mining techniques. These indicators are intended to measure soft skills such as teamwork, leadership, problem solving and pace of communication. A peer assessment approach was then applied in a collaborative software development course of the software engineering major at the Federal University of Rio Grande do Norte (UFRN). This research presents a correlation study between the students' soft skill scores and indicators obtained by mining software repositories. This study contributes by: (i) presenting professors' perceptions of the difficulties and opportunities for improving management and monitoring practices in collaborative software development projects; (ii) investigating relationships between soft skills and the activities students perform in software repositories; (iii) encouraging the development of soft skills and the use of software repositories among software engineering students; and (iv) adding to the state of the art of three important areas of software engineering, namely software engineering education, educational data mining and human aspects of software engineering.
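A minimal sketch of the kind of correlation analysis described above, assuming a hypothetical students.csv with one repository-mined indicator (commit count) and one peer-assessed teamwork score per student; the file and column names are illustrative, not from the dissertation:

```python
# Hypothetical illustration: correlating a repository-mined indicator with
# a peer-assessed soft skill score, as in the study described above.
import csv
from scipy.stats import spearmanr

commits, teamwork = [], []
with open("students.csv", newline="") as f:           # assumed file layout
    for row in csv.DictReader(f):
        commits.append(float(row["commit_count"]))    # mined from the repository
        teamwork.append(float(row["teamwork_score"])) # from peer assessment

rho, p_value = spearmanr(commits, teamwork)  # rank correlation, robust to scale
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```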
Abstract:
The substantial increase in the number of applications offered over computer networks, as well as in the volume of traffic forwarded through the network, has made it harder to assure an adequate service level to users. Offering Quality of Service (QoS), honoring parameters specified in Service Level Agreements (SLAs) established between service providers and their clients, is a traditional and extensive research area in computer networks. Several schemes for QoS provisioning have been proposed over the last three decades, but their scope has always been limited by factors such as the closed development of network hardware and software, generally belonging to a single manufacturer. The advent of Software Defined Networking (SDN), along with the maturation of its main materialization, the OpenFlow protocol, allowed the decoupling of network hardware and software through an architecture that separates a control plane and a data plane. This simplifies the computer networking scenario, allowing new abstractions to be applied to the hardware composing the data plane through new software running in the control plane. This dissertation investigates the offer of QoS through the use and extension of the SDN architecture. Based on two new proposed modules, SDNMon, which monitors the data plane, and MP-Routing, which determines the use of multiple paths when forwarding the data of a flow, we demonstrate that some QoS metrics specified in SLAs, such as bandwidth, can be honored. Both modules were implemented and evaluated through a prototype. The evaluation results covering several aspects of both proposed modules are presented in this dissertation, showing the accuracy obtained by the SDNMon monitoring module and the QoS gains obtained from the multiple paths defined by MP-Routing when forwarding data flows through the SDN.
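A data-plane monitoring module such as SDNMon typically derives bandwidth from periodic switch counters. Below is a minimal, controller-agnostic sketch of that idea, assuming two successive byte-counter samples are already available; the sampling mechanism, numbers and SLA threshold are illustrative, not taken from the dissertation:

```python
# Hypothetical illustration of counter-based bandwidth monitoring on an SDN port.
from dataclasses import dataclass

@dataclass
class PortSample:
    timestamp: float   # seconds
    tx_bytes: int      # cumulative bytes transmitted, as reported by the switch

def throughput_mbps(prev: PortSample, curr: PortSample) -> float:
    """Average throughput between two counter samples, in Mbit/s."""
    elapsed = curr.timestamp - prev.timestamp
    return (curr.tx_bytes - prev.tx_bytes) * 8 / elapsed / 1e6

prev = PortSample(timestamp=0.0, tx_bytes=0)
curr = PortSample(timestamp=5.0, tx_bytes=62_500_000)   # 62.5 MB sent in 5 s
sla_mbps = 80.0                                          # illustrative SLA bound
rate = throughput_mbps(prev, curr)
print(f"{rate:.1f} Mbit/s", "OK" if rate <= sla_mbps else "SLA at risk")
```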
Abstract:
Software bug analysis is one of the most important activities in software quality. A quick and correct implementation of the necessary repair affects both developers, who must deliver fully functioning software, and users, who need to perform their daily tasks. In this context, the misclassification of bugs can lead to unwanted situations. One of the main attributes assigned to a bug when it is first reported is its severity, which reflects the urgency of correcting the problem. In this scenario, we identified, in datasets extracted from five open source systems (Apache, Eclipse, Kernel, Mozilla and OpenOffice), an irregular distribution of bugs with respect to the existing severities, which is an early sign of misclassification: about 85% of the bugs in the analyzed dataset are ranked with normal severity. This classification rate can have a negative influence on the software development context, where a misclassified bug may be allocated to a developer with little experience to solve it, so that its correction may take longer or even produce an incorrect implementation. Several studies in the literature have disregarded normal bugs, working only with the portion of bugs initially considered severe or non-severe. This work investigated exactly that portion of the data, in order to identify whether the normal severity reflects the real impact and urgency, to check whether there are bugs (initially classified as normal) that should be classified with another severity, and to assess whether there are impacts for developers in this context. To this end, an automatic classifier based on three algorithms (Naive Bayes, MaxEnt and Winnow) was developed to assess whether the normal severity is correct for the bugs initially categorized with it. The algorithms reached an accuracy of about 80% and showed that between 21% and 36% of the bugs should have been classified differently (depending on the algorithm), which represents somewhere between 70,000 and 130,000 bugs of the dataset.
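One of the three algorithms named above is Naive Bayes applied to bug report text. Below is a minimal sketch of such a severity classifier using scikit-learn; the toy reports and labels are invented, and this is not the dissertation's implementation:

```python
# Hypothetical illustration: Naive Bayes classifier for bug severity from report text.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented toy reports; a real study would use thousands of reports per project.
reports = [
    "crash on startup with null pointer exception",
    "typo in the preferences dialog label",
    "data loss when saving large files",
    "minor misalignment of toolbar icons",
]
labels = ["severe", "non-severe", "severe", "non-severe"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(reports, labels)
print(clf.predict(["application crashes and corrupts the database"]))
```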
Greenow: a workspace-oriented routing algorithm for a future Internet architecture
Abstract:
Current and future applications pose new requirements that the Internet architecture is not able to satisfy, such as mobility, multicast, multihoming and bandwidth guarantees. The Internet architecture has limitations that prevent all of these future requirements from being covered, and new architectures have been proposed to take such requirements into account when a communication is established. ETArch (Entity Title Architecture) is a new, clean-slate Internet architecture, able to use the application's requirements on each communication and flexible enough to work with several layers. Routing plays an important role on the Internet, because it decides the best way to forward primitives through the network, and in the Future Internet all requirements depend on it. Routing is responsible for deciding the best path and, in the future, a better route may consider mobility aspects or energy consumption, for instance. At the dawn of ETArch, its routing had not yet been defined. This work provides intra- and inter-domain routing algorithms to be used in ETArch. It assumes that the route should be completely defined before data starts to flow, to ensure that the requirements are met. On the Internet, routing has two distinct functions: (i) running specific algorithms to define the best route; and (ii) forwarding data primitives to the correct link. In the traditional Internet architecture, both routing functions are performed in every router each time a packet arrives. This work allows the complete route to be defined before the communication starts, as in telecommunication systems. The routing for ETArch was defined, and experiments were performed to demonstrate the viability of control plane routing. The initial setup before a communication takes longer, but afterwards only the forwarding of primitives is performed, saving processing time.
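The central idea above is that the complete route is computed in the control plane before any data flows. Below is a minimal sketch of that idea on an invented topology, using Dijkstra's algorithm via networkx; it does not reproduce ETArch's actual intra- and inter-domain algorithms:

```python
# Hypothetical illustration: pre-computing a full end-to-end route in the
# control plane before forwarding starts, in the spirit described above.
import networkx as nx

topology = nx.Graph()
# Invented links with delay-like weights.
topology.add_weighted_edges_from([
    ("A", "B", 2), ("B", "C", 2), ("A", "D", 1), ("D", "C", 4),
])

# The route is resolved once, before the communication starts; elements along
# the path then only forward primitives, with no per-packet route computation.
route = nx.shortest_path(topology, "A", "C", weight="weight")
print(route)  # ['A', 'B', 'C']
```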
Abstract:
Due to the growing use of social networks, people no longer just consume data; they also produce and share it. Geo-tagged information, i.e., data with a geographical location, has been used in many attempts to identify popular places and help tourists visiting unfamiliar cities. This Master's thesis presents an online strategy that uses geo-tagged photos and their metadata to identify places of interest inside a given geographical area and retrieve relevant related information. The whole process runs automatically in real time, returning up-to-date information about places. The proposed strategy takes into account the inherent dynamism of social media and is thus robust under inconsistencies and/or outdated information, a common issue in solutions that rely on previously stored data. The analysis of the results showed that our approach is very promising, returning places that present high agreement with those from a popular travel website.
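Identifying places of interest from geo-tagged photos is, at its core, a spatial clustering problem. Below is a minimal sketch of one common way to approach it, density-based clustering of photo coordinates with DBSCAN; the coordinates, radius and minimum cluster size are invented, and the dissertation's online strategy is not reproduced:

```python
# Hypothetical illustration: clustering geo-tagged photo coordinates to find
# candidate places of interest, in the spirit of the strategy described above.
import numpy as np
from sklearn.cluster import DBSCAN

# Invented (latitude, longitude) pairs of photo locations, in degrees.
coords = np.array([
    [-19.9320, -43.9380], [-19.9321, -43.9382], [-19.9319, -43.9379],  # dense spot
    [-19.8700, -43.9600],                                              # isolated photo
])

earth_radius_m = 6_371_000
eps_m = 150  # photos within ~150 m may belong to the same place
labels = DBSCAN(eps=eps_m / earth_radius_m, min_samples=3,
                metric="haversine").fit_predict(np.radians(coords))
print(labels)  # e.g. [0, 0, 0, -1]: one place of interest plus one noise point
```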
Abstract:
One of the most common forms of reuse is API usage. However, one of the main prerequisites for effective usage is accessible and easy-to-understand documentation. Several papers have proposed alternatives to make API documentation more understandable, or even more detailed. However, these studies have not taken into account the complexity of understanding the examples, which would make the documentation adaptable to developers with different levels of experience. In this work we developed and evaluated four different methodologies to generate API tutorials from the content of Stack Overflow, organizing them according to complexity of understanding. The methodologies were evaluated through tutorials generated for the Swing API. A survey was conducted to evaluate eight different characteristics of the generated tutorials. The overall assessment of the tutorials was positive on several characteristics, showing the feasibility of using automatically generated tutorials. In addition, the use of criteria for presenting tutorial elements in order of complexity, the separation of the tutorial into basic and advanced parts, the tutorial-like nature of the selected posts and the existence of didactic source code showed significantly different results depending on the chosen generation methodology. A second study compared the official documentation of the Android API with the tutorial generated by the best methodology of the previous study. A controlled experiment was conducted with students having their first contact with Android development. In the experiment these students carried out two tasks, one using the official Android documentation and the other using the generated tutorial. The results of this experiment showed that, in most cases, students performed better in the tasks when they used the tutorial proposed in this work. The main reasons for the poor performance of students in tasks using the official API documentation were the lack of usage examples and its difficulty of use.
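Ordering tutorial elements by complexity of understanding is central to the methodologies described above. Below is a minimal sketch of one possible, invented complexity heuristic that sorts Stack Overflow code snippets from basic to advanced; it is not one of the four methodologies evaluated in the dissertation:

```python
# Hypothetical illustration: ordering code snippets from Stack Overflow posts
# by a crude "complexity of understanding" score (lines + distinct API calls).
import re

def complexity(snippet: str) -> int:
    lines = [l for l in snippet.splitlines() if l.strip()]
    api_calls = set(re.findall(r"\w+\.\w+\(", snippet))  # e.g. "f.setVisible("
    return len(lines) + 2 * len(api_calls)

snippets = {
    "show a JFrame": "JFrame f = new JFrame();\nf.setVisible(true);",
    "custom table model": "JTable t = new JTable(new AbstractTableModel() {\n"
                          "    public int getRowCount() { return data.size(); }\n"
                          "});",
}
for title, code in sorted(snippets.items(), key=lambda kv: complexity(kv[1])):
    print(f"{complexity(code):>3}  {title}")   # basic entries first
```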
Abstract:
Nowadays, the number of customers using websites for shopping is greatly increasing, mainly due to the ease and speed of this way of consuming. Websites, differently from physical stores, can make virtually anything available to customers. In this context, Recommender Systems (RS) have become indispensable to help consumers find products that may be pleasant or useful to them. These systems often use Collaborative Filtering (CF) techniques, whose main underlying idea is that products are recommended to a given user based on the purchase information and past ratings of a group of users similar to the one requesting the recommendation. One of the main challenges faced by such a technique is the need for the user to provide some information about her preferences on products in order to get further recommendations from the system. When there are items with no ratings, or with very few ratings available, the recommender system performs poorly. This problem is known as the new item cold-start. In this work, we investigate to what extent information on visual attention can help to produce more accurate recommendation models. We present a new CF strategy, called IKB-MS, that uses visual attention to characterize images and alleviate the new item cold-start problem. To validate this strategy, we created a clothing image database and used three well-known algorithms to extract visual attention from these images. An extensive set of experiments shows that our approach is efficient and outperforms state-of-the-art CF RS.
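To alleviate the new item cold-start, a CF model can fall back on content-level similarity between item images. Below is a minimal sketch of that general idea, assuming visual-attention descriptors have already been extracted as fixed-length vectors; the vectors and item names are invented, and IKB-MS itself is not reproduced:

```python
# Hypothetical illustration: recommending a new, unrated item through the
# visual similarity of its image descriptor to items the user already liked.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Invented visual-attention descriptors (e.g. saliency histograms).
catalog = {
    "blue_dress":  np.array([0.80, 0.10, 0.10]),
    "red_dress":   np.array([0.70, 0.20, 0.10]),
    "black_boots": np.array([0.10, 0.10, 0.80]),
}
new_item = ("striped_dress", np.array([0.75, 0.15, 0.10]))  # no ratings yet
liked_by_user = ["blue_dress"]

score = max(cosine(new_item[1], catalog[i]) for i in liked_by_user)
print(f"predicted affinity for {new_item[0]}: {score:.2f}")
```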
Abstract:
Content-based image retrieval is important for various purposes, such as disease diagnosis from computerized tomography. The social and economic relevance of image retrieval systems has created the need for their improvement. Within this context, content-based image retrieval systems are composed of two stages: feature extraction and similarity measurement. The similarity stage is still a challenge due to the wide variety of similarity measurement functions, which can be combined with the different techniques present in the retrieval process and return results that are not always the most satisfactory. The functions most commonly used to measure similarity are the Euclidean distance and the cosine similarity, but researchers have noted some limitations of these conventional proximity functions in the similarity search step. For that reason, the Bregman divergences (Kullback-Leibler and generalized I-divergence) have attracted the attention of researchers, due to their flexibility in similarity analysis. Thus, the aim of this research was to conduct a comparative study of the use of Bregman divergences against the Euclidean and cosine functions in the similarity step of content-based image retrieval, checking the advantages and disadvantages of each function. For this, a content-based image retrieval system was created with two stages, offline and online, using the BSM, FISM, BoVW and BoVW-SPM approaches. With this system, three groups of experiments were run using the Caltech101, Oxford and UK-bench databases. The performance of the content-based image retrieval system with the different similarity functions was evaluated through the measures Mean Average Precision, normalized Discounted Cumulative Gain, precision at k and precision x recall. Finally, this study shows that the use of the Bregman divergences (Kullback-Leibler and generalized I-divergence) obtains better results than the Euclidean and cosine measures, with significant gains for content-based image retrieval.
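The comparison above hinges on how each proximity function scores a pair of image descriptors. Below is a minimal sketch computing the four measures discussed (Euclidean, cosine, Kullback-Leibler and generalized I-divergence) for two invented feature histograms; the full retrieval pipeline (BSM, FISM, BoVW, BoVW-SPM) is not reproduced:

```python
# Hypothetical illustration: the four (dis)similarity measures compared in the
# study, applied to two invented bag-of-visual-words histograms.
import numpy as np

def euclidean(p, q):
    return float(np.linalg.norm(p - q))

def cosine_sim(p, q):
    return float(p @ q / (np.linalg.norm(p) * np.linalg.norm(q)))

def kullback_leibler(p, q):
    return float(np.sum(p * np.log(p / q)))          # assumes strictly positive bins

def generalized_i(p, q):
    return float(np.sum(p * np.log(p / q) - p + q))  # Bregman generator x*log(x)

p = np.array([0.4, 0.3, 0.2, 0.1])   # query descriptor (invented)
q = np.array([0.3, 0.3, 0.3, 0.1])   # database descriptor (invented)
for name, fn in [("Euclidean", euclidean), ("Cosine", cosine_sim),
                 ("KL", kullback_leibler), ("Generalized I", generalized_i)]:
    print(f"{name:>14}: {fn(p, q):.4f}")
```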
Abstract:
Museums are institutions that play an important role in society, with collections of great cultural and scientific value. It is the museums' duty to promote access to their collections and to carry out communication actions for the dissemination of, and public access to, the cultural assets that compose them. Museums have been employing Information and Communication Technology to support their activities, broaden the range of services provided to society, promote culture, science and knowledge, and publish and make their collections available on the Web. To make museum collection information available with more intuitive and natural navigation, and to enable the exchange of information among museums, aiming at information retrieval, data reuse and interoperability, it is necessary to adapt it to the Semantic Web format. This study proposes a solution to integrate the collection data of the Rede de Museus e Espaços de Ciências e Cultura of the Universidade Federal de Minas Gerais and make it available on the Web, using Semantic Web and Linked Data concepts. To achieve this goal, an experimental study and an application prototype will be developed to validate the solution and answer the competency question.
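Publishing collection records as Linked Data amounts to expressing them as RDF triples under shared vocabularies. Below is a minimal sketch using rdflib with Dublin Core terms and an invented base URI; the vocabularies and URIs actually adopted by the project are not specified here:

```python
# Hypothetical illustration: describing one museum collection item as RDF
# triples and serializing it as Turtle, in the spirit of Linked Data publishing.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DCTERMS, RDF

EX = Namespace("http://example.org/museu/")      # invented base URI

g = Graph()
item = EX["item/001"]
g.add((item, RDF.type, EX.ItemDeAcervo))         # invented class name
g.add((item, DCTERMS.title, Literal("Amostra de quartzo", lang="pt")))
g.add((item, DCTERMS.creator, Literal("Museu de Mineralogia (exemplo)")))

print(g.serialize(format="turtle"))
```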
Abstract:
Companies use several types of resources in their operations: logistics, financial resources, human resources, as well as resources to build their portfolio of products and services. Over the last century, business managers developed practices and tools to maximize the results obtained from applying these resources. This dissertation addresses the lack of knowledge about the process of knowledge mapping and management within a higher education institution (HEI) and aims to propose a prototype for knowledge mapping and management in such an institution, in order to identify the competencies present within it. The work is set in the organizational context of an HEI that, due to constant market changes, needs to know its human capital and the knowledge it holds, supported by a computational tool. Accordingly, the dissertation follows an academic profile while creating a prototype that can also be applied from a market perspective. The importance of the academic research lies in the knowledge generated by the study and analysis carried out for this dissertation; this knowledge can be replicated in other HEIs, since, once the prototype has been built and applied in one higher education institution, its evolution when used and improved in another would be natural. The proposal and implementation of a prototype with the basic requirements presented in this dissertation share several points with knowledge management: the idea of developing a knowledge mapping system starts from a data repository (Plataforma Lattes) in which researchers, scientists and graduate students have their curricula stored. The research and technical improvement acquired while building this dissertation, whose objective is to present a knowledge mapping prototype developed with open source (free) tools, are ultimately aimed at identifying the competencies present within an HEI.
Abstract:
Technology has democratized access to information, the competitiveness of 21st-century companies is increasingly based on knowledge, and the management of public administration actions increasingly seeks a coordinated system, concerned with efficient management that can maximize social return while taking into account fundamental human rights in the sphere of digital relations. Public administration is advancing in its processes of approximation with society, relying more and more on the benefits provided by the evolution of Information and Communication Technologies (ICTs), and the services made available through electronic government have been gaining space in people's lives (G2C). In this evolution, accessibility is a clear guideline, and Decree-Law No. 5.296 defines barriers in communication and information as "any hindrance or obstacle that hampers or prevents the expression or receipt of messages through communication devices, means or systems, whether mass media or not, as well as those that hamper or prevent access to information" (BRASIL, 2004). More than ten years after the enactment of the accessibility law, it is evident that the quality of services provided by electronic government will only be effective if the communication instrument used truly facilitates the interaction between citizen and government, without distinction among the users of such services. This research intends to evaluate accessibility aspects of the main Brazilian public portals from the point of view of people with visual impairment, assessing actions, activities and initiatives according to the Semiotic Engineering approach in favor of assistive technologies for inclusion, allowing the recommendation of requirements for building accessible, multi-platform Web portals that foster digital inclusion and greater independence for these users.
Abstract:
Research and practice in analytics for software engineering have grown over the last decades. The information contained in a software repository can help software engineers in their activities during all phases of software development. The use of analytics is helping software engineering professionals obtain relevant information from the software repository, steering them toward better decision making. Because software is an intangible asset, it can be difficult to understand the information it generates. This work carried out a systematic mapping of the literature on analytics in software engineering, which led to a conceptual framework for the use of analytics capable of supporting software engineering activities. To validate this conceptual framework, a prototype application was built that analyzed data from an open source project. The prototype was validated and discussed by a focus group formed by developers and software project managers from a large Information Technology company. We concluded that analytics is heavily used during the maintenance phase and that its use is growing in the areas of management and professional practice. We also found that commits can be good indicators of software evolution and that the tool developed in this work makes it possible to understand what is being changed in the system and why the change occurred.
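The observation that commits can indicate software evolution suggests mining the version history directly. Below is a minimal sketch that counts commits per file with the standard git command line; the repository path and time window are placeholders, and this is not the prototype built in the dissertation:

```python
# Hypothetical illustration: counting commits per file as a simple evolution
# indicator mined from a Git repository's history.
import subprocess
from collections import Counter

log = subprocess.run(
    ["git", "log", "--since=6 months ago", "--name-only", "--pretty=format:"],
    cwd="/path/to/repo",            # placeholder repository path
    capture_output=True, text=True, check=True,
).stdout

changes = Counter(line for line in log.splitlines() if line.strip())
for path, count in changes.most_common(10):
    print(f"{count:4d}  {path}")    # most frequently changed files first
```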
Abstract:
Business Process Management (BPM) can organize and frame a company, focusing on improving or assuring performance in order to gain competitive advantage. Although it is believed that BPM improves various aspects of organizational performance, empirical evidence of this has been lacking. The present study aims to develop a model showing the impact of business process management on organizational performance. To accomplish that, the theoretical basis needed to identify the elements that configure BPM and the measures that can evaluate its effect on organizational performance is built through a systematic literature review (SLR). A research model is then proposed according to the SLR results. Empirical data will be collected from a survey of large and mid-sized industrial and service companies headquartered in Brazil. A quantitative analysis will be performed using structural equation modeling (SEM) to show whether the direct effects of BPM on organizational performance can be considered statistically significant. Finally, these results and their managerial and scientific implications will be discussed.
Keywords: Business Process Management (BPM). Organizational performance. Firm performance. Business models. Structural Equation Modeling. Systematic Literature Review.
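For reference, a structural equation model of the kind planned above is usually written as a structural part relating latent constructs and measurement parts tying them to survey items. The generic LISREL-style formulation below is the textbook form, not the dissertation's specific research model:

```latex
% Generic SEM formulation; the specific constructs linking BPM to
% organizational performance would instantiate \xi and \eta.
\begin{aligned}
  \eta &= B\,\eta + \Gamma\,\xi + \zeta  && \text{(structural model)} \\
  y    &= \Lambda_y\,\eta + \varepsilon  && \text{(measurement model, endogenous)} \\
  x    &= \Lambda_x\,\xi + \delta        && \text{(measurement model, exogenous)}
\end{aligned}
```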
Abstract:
The growing pressure to increase the quality of health services while reducing costs has led healthcare organizations to increase their use of Information and Communication Technologies (ICT) through the development and adoption of Healthcare Information Systems (HIS). However, the need to exchange information between HIS and between organizations has also increased, resulting in the problem of interoperability. This problem is complex, but the use of Service-Oriented Architecture (SOA) appears to be a good way to address it. This work presents a systematic review performed in order to find out how, and in which contexts, SOA is being used to ensure the interoperability of HIS.
Abstract:
Master's dissertation, Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2015.