8 results for Teaching of computer science and software engineering

at Universidade Federal de Uberlândia


Relevance:

100.00%

Publisher:

Abstract:

Science fairs in Brazil currently receive strong incentives, as shown by the regulations the government has been introducing in education and by the public calls that fund such events throughout the country. Even so, some researchers point out that science fairs and exhibitions are still treated by teachers as work outside their regular duties. This research was proposed in order to learn the views of basic education teachers about science fairs. Drawing on Vygotsky's (2001) theory of mediation and sociocultural interaction, Dewey's (2002) instrumentalism and the education-through-research proposal of Galiazzi and Moraes (2002), we sought to understand the importance of fairs and their benefits, as well as how they appear in the respondents' statements. To analyze the answers, we used the discourse analysis proposed by Eni Orlandi (2009), which observes and interprets the teachers' speech, considering how they interpret and shape their thinking about the research object. The results show that the teachers interviewed know the importance and goals of science fairs but face difficulties that often prevent these events from being held. To help minimize these difficulties, we identified the need for a product offering guidance on how to develop research projects and organize science fairs, promoting an education oriented toward research. As a result of the study, a blog and a booklet containing texts, articles and report templates were produced.

Relevance:

100.00%

Publisher:

Abstract:

The substantial increase in the number of applications offered over computer networks, as well as in the volume of traffic forwarded through them, has made it harder to assure adequate service levels to users. Providing Quality of Service (QoS), honoring the parameters specified in Service Level Agreements (SLAs) established between service providers and their clients, is a traditional and extensive research area in computer networks. Several schemes for QoS provisioning have been proposed over the last three decades, but their scope has always been limited by factors such as the limited evolution of network hardware and software, which generally belong to a single manufacturer. The advent of Software-Defined Networking (SDN), together with the maturation of its main materialization, the OpenFlow protocol, decoupled network hardware from software through an architecture that separates a control plane and a data plane. This simplifies the networking scenario, allowing new abstractions to be applied to the hardware composing the data plane through new software executed in the control plane. This dissertation investigates QoS provisioning through the use and extension of the SDN architecture. Based on two new proposed modules, SDNMon, which monitors the data plane, and MP-ROUTING, which determines the use of multiple paths for forwarding the data of a flow, we demonstrate that QoS metrics specified in SLAs, such as bandwidth, can be honored. Both modules were implemented and evaluated in a prototype. The evaluation results for several aspects of both modules are presented in this dissertation, showing the accuracy obtained by the monitoring module SDNMon and the QoS gains achieved by forwarding flows over the multiple paths defined by MP-ROUTING.
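
As a rough illustration of the multipath idea behind MP-ROUTING (not the dissertation's actual implementation), the sketch below greedily accumulates candidate paths until their combined residual bandwidth covers a hypothetical SLA target; the topology, link attributes and function names are assumptions made for the example.

```python
import networkx as nx

def select_paths(graph, src, dst, sla_bandwidth_mbps):
    """Greedily pick paths until their combined residual bandwidth covers the SLA."""
    chosen, covered = [], 0.0
    for path in nx.shortest_simple_paths(graph, src, dst, weight="delay"):
        # A path's residual bandwidth is limited by its tightest link.
        residual = min(graph[u][v]["bw"] for u, v in zip(path, path[1:]))
        chosen.append((path, residual))
        covered += residual
        if covered >= sla_bandwidth_mbps:
            return chosen
    raise RuntimeError("SLA bandwidth cannot be honored on this topology")

# Toy topology: two 60 Mb/s paths between s1 and s4 jointly honor a 100 Mb/s SLA.
g = nx.Graph()
g.add_edge("s1", "s2", bw=60, delay=1)
g.add_edge("s2", "s4", bw=60, delay=1)
g.add_edge("s1", "s3", bw=60, delay=2)
g.add_edge("s3", "s4", bw=60, delay=2)
print(select_paths(g, "s1", "s4", sla_bandwidth_mbps=100))
```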

Relevance:

100.00%

Publisher:

Abstract:

Software bug analysis is one of the most important activities in software quality. The quick and correct implementation of the necessary fix affects both developers, who must deliver fully functioning software, and users, who need to carry out their daily tasks. In this context, an incorrect classification of bugs can lead to undesirable situations. One of the main attributes assigned to a bug in its initial report is severity, which expresses the urgency of fixing the problem. Analyzing datasets extracted from five open source systems (Apache, Eclipse, Kernel, Mozilla and Open Office), we identified an irregular distribution of bugs across the existing severities, an early sign of misclassification. In the datasets analyzed, about 85% of bugs are ranked with normal severity. This classification rate can negatively influence software development, since a misclassified bug may be assigned to a developer with little experience, delaying its correction or even producing an incorrect fix. Several studies in the literature have disregarded normal bugs, working only with the portion initially considered severe or non-severe. This work investigated that portion of the data, aiming to identify whether the normal severity reflects the real impact and urgency, whether there are bugs initially classified as normal that should receive another severity, and whether this has consequences for developers. To this end, an automatic classifier based on three algorithms (Naïve Bayes, MaxEnt and Winnow) was developed to assess whether the normal severity is correct for the bugs initially categorized with it. The algorithms reached an accuracy of about 80% and showed that between 21% and 36% of the bugs should have been classified differently (depending on the algorithm), which represents somewhere between 70,000 and 130,000 bugs in the dataset.
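
A minimal sketch of one of the named classifiers (Naïve Bayes) applied to bug-report text is given below; the reports, labels and pipeline are invented placeholders, not the dissertation's dataset or feature engineering.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training data: report texts and the severity they should have received.
reports = [
    "crash on startup with null pointer exception",
    "typo in the preferences dialog label",
    "data loss when saving large files",
    "button slightly misaligned on high-dpi screens",
]
labels = ["severe", "minor", "severe", "minor"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(reports, labels)

# A bug filed as "normal" can then be re-scored to check whether another
# severity fits its text better.
print(model.predict(["application crashes and corrupts the database"]))
```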

Relevance:

100.00%

Publisher:

Abstract:

Current and future applications impose requirements that the Internet architecture cannot satisfy, such as mobility, multicast, multihoming and bandwidth guarantees, among others. The Internet architecture has limitations that prevent all of these future requirements from being met. New architectures have been proposed that take such requirements into account when a communication is established. ETArch (Entity Title Architecture) is a new, clean-slate Internet architecture able to use the application's requirements in each communication and flexible enough to work across several layers. Routing plays an important role in the Internet, because it decides the best way to forward primitives through the network. In the Future Internet, all requirements depend on routing: it is responsible for choosing the best path, and in the future a better route may also consider aspects such as mobility or energy consumption. At the outset of ETArch, routing had not yet been defined. This work provides intra- and inter-domain routing algorithms for ETArch. The route is defined completely before data begin to flow, to ensure that the requirements are met. In the Internet, routing has two distinct functions: (i) running specific algorithms to define the best route; and (ii) forwarding data primitives to the correct link. In the traditional Internet architecture, both functions are performed by every router each time a packet arrives. In this work, as in telecommunication systems, the complete route is defined before the communication starts. The routing for ETArch was specified, and experiments were performed to demonstrate the viability of control-plane routing. The initial setup before a communication takes longer, but afterwards only the forwarding of primitives is performed, saving processing time.
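
The following toy sketch (not ETArch's actual protocol or API) illustrates the "define the route first, then only forward" idea: a controller computes the whole path once, installs a next-hop entry at every node, and per-packet work reduces to a table lookup.

```python
import networkx as nx

def install_route(topology, src, dst, tables):
    """Compute the complete path up front and populate per-node forwarding tables."""
    path = nx.shortest_path(topology, src, dst, weight="cost")
    for node, next_hop in zip(path, path[1:]):
        tables.setdefault(node, {})[dst] = next_hop
    return path

def forward(tables, node, dst):
    """Per-primitive work after setup: a single lookup, no route computation."""
    return tables[node][dst]

topo = nx.Graph()
topo.add_edge("A", "B", cost=1)
topo.add_edge("B", "C", cost=1)
topo.add_edge("A", "C", cost=5)

tables = {}
print(install_route(topo, "A", "C", tables))  # ['A', 'B', 'C']
print(forward(tables, "A", "C"))              # 'B'
```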

Relevance:

100.00%

Publisher:

Abstract:

Due to the growing use of social networks, people no longer just consume data; they also produce and share it. Geo-tagged information, i.e., data with a geographical location, has been used in many attempts to identify popular places and help tourists visiting unfamiliar cities. This Master's thesis presents an online strategy that uses geo-tagged photos and their metadata to identify places of interest inside a given geographical area and retrieve relevant related information. The whole process runs automatically in real time, returning up-to-date information about places. The proposed strategy takes into account the inherent dynamism of social media, and is therefore robust to inconsistencies and/or outdated information, a common issue in solutions that rely on previously stored data. The analysis of the results showed that our approach is very promising, returning places that agree strongly with those from a popular travel website.
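
One common way to turn geo-tagged photos into candidate places of interest is density-based clustering of their coordinates; the sketch below uses DBSCAN with a haversine metric on invented coordinates and is only an illustration of that general approach, not the thesis's pipeline.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# (latitude, longitude) of photos inside the area of interest, in degrees.
photos = np.array([
    [-18.9188, -48.2768], [-18.9190, -48.2770], [-18.9186, -48.2765],  # dense spot
    [-18.9250, -48.2900],                                              # isolated photo
])

# The haversine metric expects radians; eps of roughly 200 m on Earth's surface.
earth_radius_m = 6_371_000
db = DBSCAN(eps=200 / earth_radius_m, min_samples=3, metric="haversine")
labels = db.fit_predict(np.radians(photos))

# Photos sharing a non-negative label form one candidate place of interest.
print(labels)  # e.g. [ 0  0  0 -1]
```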

Relevance:

100.00%

Publisher:

Abstract:

One of the most common forms of reuse is API usage. However, one of the main requirements for effective usage is accessible and easy-to-understand documentation. Several papers have proposed alternatives for making API documentation more understandable, or more detailed. However, these studies have not taken into account the complexity of understanding the examples, so that the documentation can be adapted to developers with different levels of experience. In this work we developed and evaluated four different methodologies for generating API tutorials from Stack Overflow content, organizing the material according to its complexity of understanding. The methodologies were evaluated through tutorials generated for the Swing API. A survey was conducted to evaluate eight different characteristics of the generated tutorials. The overall assessment of the tutorials was positive on several characteristics, showing the feasibility of automatically generated tutorials. In addition, the criteria for presenting tutorial elements in order of complexity, the separation of the tutorial into basic and advanced parts, the relation of the tutorial to the selected posts, and the existence of didactic source code produced significantly different results depending on the chosen generation methodology. A second study compared the official Android API documentation with the tutorial generated by the best methodology from the previous study. A controlled experiment was conducted with students having their first contact with Android development. In the experiment the students developed two tasks, one using the official Android documentation and the other using the generated tutorial. The results showed that in most cases the students performed better on the tasks when they used the tutorial proposed in this work. The main reasons for the students' poor performance on tasks using the official API documentation were the lack of usage examples and its difficulty of use.
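
As a purely hypothetical illustration of ordering Stack Overflow posts from basic to advanced (the dissertation's four methodologies are not reproduced here), the sketch below scores each post by how many distinct Swing classes its code touches and how long the code is.

```python
import re

def complexity(post_body: str) -> tuple:
    """Crude complexity score: (distinct Swing classes used, lines of code)."""
    snippets = re.findall(r"<code>(.*?)</code>", post_body, flags=re.S)
    code = "\n".join(snippets)
    swing_classes = set(re.findall(r"\bJ[A-Z]\w+", code))  # JFrame, JTable, ...
    return (len(swing_classes), len(code.splitlines()))

posts = {
    "hello window": "<code>JFrame f = new JFrame();\nf.setVisible(true);</code>",
    "custom table": "<code>JTable t = new JTable(model);\n"
                    "JScrollPane p = new JScrollPane(t);\nf.add(p);</code>",
}

# Present the simpler post first, the more involved one later.
for title, body in sorted(posts.items(), key=lambda kv: complexity(kv[1])):
    print(title, complexity(body))
```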

Relevance:

100.00%

Publisher:

Abstract:

Nowadays the number of customers shopping on websites is growing rapidly, mainly due to the ease and speed of this form of consumption. Unlike physical stores, websites can make virtually anything available to customers. In this context, Recommender Systems (RS) have become indispensable for helping consumers find products that may please them or be useful to them. These systems often use Collaborative Filtering (CF) techniques, whose main underlying idea is that products are recommended to a given user based on the purchase information and past ratings of a group of users similar to the one requesting the recommendation. One of the main challenges of this technique is that the user must provide some information about her preferences before the system can produce recommendations. When items have no ratings, or very few, the recommender system performs poorly. This problem is known as the new-item cold-start. In this work we investigate to what extent information on visual attention can help to produce more accurate recommendation models. We present a new CF strategy, called IKB-MS, that uses visual attention to characterize images and alleviate the new-item cold-start problem. To validate this strategy, we created a clothing image database and used three well-known algorithms to extract visual attention from these images. An extensive set of experiments shows that our approach is efficient and outperforms state-of-the-art CF recommender systems.
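
A toy sketch of the cold-start idea follows: a brand-new item with no ratings is scored by its visual similarity to items the user already liked. The feature vectors stand in for visual-attention descriptors and are invented for illustration; IKB-MS itself is not reproduced here.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical visual-attention descriptors of catalogue items.
items = {
    "red_dress":  np.array([0.9, 0.1, 0.3]),
    "blue_jeans": np.array([0.1, 0.8, 0.2]),
}
liked_by_user = ["red_dress"]

# New item with zero ratings: score it against what the user liked.
new_item = np.array([0.8, 0.2, 0.4])
score = max(cosine(new_item, items[i]) for i in liked_by_user)
print(f"predicted affinity for the new item: {score:.2f}")
```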

Relevance:

100.00%

Publisher:

Abstract:

Content-based image retrieval is important for many purposes, such as disease diagnosis from computerized tomography. The social and economic relevance of image retrieval systems has created the need to improve them. In this context, content-based image retrieval systems consist of two stages: feature extraction and similarity measurement. The similarity stage is still a challenge, owing to the wide variety of similarity functions, which can be combined with the different techniques of the retrieval process and do not always return the most satisfactory results. The functions most commonly used to measure similarity are the Euclidean distance and the cosine, but some researchers have noted limitations of these conventional proximity functions in the similarity-search step. For that reason, the Bregman divergences (Kullback-Leibler and I-Generalized) have attracted researchers' attention due to their flexibility in similarity analysis. Thus, the aim of this research was to conduct a comparative study of the Bregman divergences against the Euclidean and cosine functions in the similarity step of content-based image retrieval, examining the advantages and disadvantages of each function. To this end, a content-based image retrieval system was built with two stages, offline and online, using the BSM, FISM, BoVW and BoVW-SPM approaches. With this system, three groups of experiments were carried out on the Caltech101, Oxford and UK-bench databases. The performance of the system with the different similarity functions was measured with the evaluation metrics Mean Average Precision, normalized Discounted Cumulative Gain, precision at k, and precision x recall. Finally, this study shows that the Bregman divergences (Kullback-Leibler and Generalized) obtain better results than the Euclidean and cosine measures, with significant gains for content-based image retrieval.
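
For reference, the sketch below gives one standard numeric form of the similarity functions compared in this work, applied to two invented bag-of-visual-words histograms; the dissertation's exact normalization and feature pipeline are not reproduced.

```python
import numpy as np

def euclidean(p, q):
    return float(np.linalg.norm(p - q))

def cosine_dissimilarity(p, q):
    return 1.0 - float(np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q)))

def kullback_leibler(p, q, eps=1e-12):
    p, q = p + eps, q + eps            # avoid log(0) and division by zero
    return float(np.sum(p * np.log(p / q)))

def generalized_i_divergence(p, q, eps=1e-12):
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q) - p + q))

# Two toy histograms, each normalized to sum to 1.
a = np.array([0.5, 0.3, 0.2])
b = np.array([0.4, 0.4, 0.2])
for name, fn in [("Euclidean", euclidean), ("Cosine", cosine_dissimilarity),
                 ("Kullback-Leibler", kullback_leibler),
                 ("Generalized I", generalized_i_divergence)]:
    print(f"{name:18s} {fn(a, b):.4f}")
```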