868 results for Debugging in computer science.
Abstract:
"TID-26500/R1. Distribution category: UC-2."
Abstract:
The SimProgramming teaching approach aims to help students overcome their learning difficulties in the transition from entry-level to advanced computer programming and to prepare them for real-world labour environments by adopting learning strategies. It immerses learners in a business-like learning environment, where students develop a problem-based learning activity with a specific set of tasks, one of which is filling in weekly individual forms. We conducted a thematic analysis of 401 weekly forms to identify the students' strategies for self-regulation of learning during the assignment. The students adopt different strategies in each phase of the approach: the early phases are devoted to organization and planning, while the later phases focus on applying theoretical knowledge and hands-on programming. Based on the results, we recommend the development of educational practices that help students reflect on their own performance during tasks.
Abstract:
Growing models have been widely used for clustering and topology learning. Traditionally, these models work in stationary environments, grow incrementally, and adapt their nodes to a given distribution based on global parameters. In this paper, we present an enhanced unsupervised self-organising network for the modelling of visual objects. We first develop a framework for building non-rigid shapes using the growth mechanism of self-organising maps, and then define an optimal number of nodes, without overfitting or underfitting the network, based on knowledge obtained from information-theoretic considerations. We present experimental results for hands and quantitatively evaluate the matching capabilities of the proposed method using the topographic product.
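The abstract above does not detail its growth mechanism; as a rough illustration of how growing self-organising networks insert nodes, here is a minimal Growing-Neural-Gas-style sketch. All parameter names, the error-driven insertion rule, and the growth schedule are illustrative assumptions, not the authors' method.

```python
import numpy as np

# Minimal sketch of the growth step in a growing self-organising
# network (Growing Neural Gas style). Parameters and the growth
# schedule are illustrative assumptions, not the paper's method.

def adapt(nodes, errors, x, eps_b=0.05, eps_n=0.006):
    """Move the winner and runner-up toward the input sample x."""
    dists = np.linalg.norm(nodes - x, axis=1)
    s1, s2 = np.argsort(dists)[:2]        # two nearest nodes
    errors[s1] += dists[s1] ** 2          # accumulate local error
    nodes[s1] += eps_b * (x - nodes[s1])  # winner adaptation
    nodes[s2] += eps_n * (x - nodes[s2])  # runner-up adaptation

def grow(nodes, errors, alpha=0.5):
    """Insert a node halfway between the two highest-error nodes."""
    q, f = np.argsort(errors)[-1], np.argsort(errors)[-2]
    new = 0.5 * (nodes[q] + nodes[f])
    errors[[q, f]] *= alpha               # discount the split error
    return np.vstack([nodes, new]), np.append(errors, errors[q])

rng = np.random.default_rng(0)
nodes = rng.random((2, 2))                # start with two nodes
errors = np.zeros(2)
for step, x in enumerate(rng.random((500, 2))):
    adapt(nodes, errors, x)
    if step % 100 == 99:                  # grow every 100 samples
        nodes, errors = grow(nodes, errors)
print(len(nodes), "nodes after growth")
```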
Abstract:
Significant advances in science should be directed toward addressing the needs of society and the historical context of the territories. Although the technological developments that began with modernity and the industrial revolution allowed human beings to control the resources of nature and put them at their service without limits, it is clear that the crisis of the prevailing development models manifests itself in many ways, but with three common denominators: environmental degradation, social injustice, and extreme poverty. Consequently, today it should not be possible to conceive of a breakthrough in the development of science without addressing the global environmental problems and the deep social injustices that grow at all scales under the often impassive gaze of formal science.
MINING AND VERIFICATION OF TEMPORAL EVENTS WITH APPLICATIONS IN COMPUTER MICRO-ARCHITECTURE RESEARCH
Abstract:
Computer simulation programs are essential tools for scientists and engineers to understand a particular system of interest. As expected, the complexity of the software increases with the depth of the model used. In addition to the exigent demands of software engineering, verification of simulation programs is especially challenging because the models represented are complex and ridden with unknowns that are discovered by developers in an iterative process. To manage such complexity, advanced verification techniques for continually matching the intended model to the implemented model are necessary. Therefore, the main goal of this research work is to design a useful verification and validation framework that is able to identify model representation errors and is applicable to generic simulators. The framework that was developed and implemented consists of two parts. The first part is the First-Order Logic Constraint Specification Language (FOLCSL), which enables users to specify the invariants of a model under consideration. From the first-order logic specification, the FOLCSL translator automatically synthesizes a verification program that reads the event trace generated by a simulator and signals whether all invariants are respected. The second part consists of mining the temporal flow of events using a newly developed representation called the State Flow Temporal Analysis Graph (SFTAG). While the first part seeks an assurance of implementation correctness by checking that the model invariants hold, the second part derives an extended model of the implementation and hence enables a deeper understanding of what was implemented. The main application studied in this work is the validation of the timing behavior of micro-architecture simulators. The study includes SFTAGs generated for a wide set of benchmark programs and their analysis using several artificial intelligence algorithms. This work improves computer architecture research and verification processes, as shown by the case studies and experiments conducted.
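The FOLCSL-generated verifiers described above read an event trace and signal whether invariants hold. As a minimal sketch of that idea (not the actual FOLCSL output), the following checks a hypothetical micro-architectural invariant, "every fetched instruction is eventually committed", over a toy trace; the event format is an assumption.

```python
# Minimal sketch of an invariant checker over a simulator event trace,
# in the spirit of the FOLCSL-generated verifiers the abstract
# describes. Event format and the example invariant are assumptions.

def check_fetch_commit(trace):
    """Return violations of: forall i, fetch(i) -> eventually commit(i)."""
    pending = set()
    for time, kind, instr in trace:
        if kind == "fetch":
            pending.add(instr)
        elif kind == "commit":
            pending.discard(instr)
    return pending                 # instructions fetched but never committed

trace = [
    (0, "fetch", "i1"),
    (1, "fetch", "i2"),
    (3, "commit", "i1"),
]
violations = check_fetch_commit(trace)
print("invariant holds" if not violations else f"violated by {violations}")
```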
Abstract:
The ontology engineering research community has focused for many years on supporting the creation, development, and evolution of ontologies. Ontology forecasting, which aims at predicting semantic changes in an ontology, represents instead a new challenge. In this paper, we contribute to this novel endeavour by focusing on the task of forecasting semantic concepts in the research domain. Indeed, ontologies representing scientific disciplines contain only research topics that are already popular enough to be selected by human experts or automatic algorithms. They are thus unfit to support tasks which require the ability to describe and explore the forefront of research, such as trend detection and horizon scanning. We address this issue by introducing the Semantic Innovation Forecast (SIF) model, which predicts new concepts of an ontology at time t + 1, using only data available at time t. Our approach relies on lexical innovation and adoption information extracted from historical data. We evaluated the SIF model on a very large dataset consisting of over one million scientific papers belonging to the Computer Science domain: the outcomes show that the proposed approach offers a competitive boost in mean average precision-at-ten compared to the baselines when forecasting over 5 years.
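The abstract gives no internals for the SIF model; purely as a toy illustration of forecasting from lexical innovation and adoption signals, the sketch below flags terms that first appear recently and are picked up by several authors. The scoring rule and thresholds are assumptions, not the paper's model.

```python
from collections import Counter

# Toy sketch of spotting candidate new concepts from historical term
# usage, in the general spirit of lexical innovation and adoption.
# The scoring rule and thresholds are illustrative assumptions.

def candidate_concepts(papers_by_year, year, min_authors=3):
    """Terms first seen in the last three years with rising adoption."""
    seen_before = {t for y in papers_by_year if y < year - 2
                   for _, terms in papers_by_year[y] for t in terms}
    adopters = Counter()
    for y in (year - 2, year - 1, year):
        for author, terms in papers_by_year.get(y, []):
            for t in terms:
                if t not in seen_before:
                    adopters[t] += 1          # one adopting author
    return [t for t, n in adopters.items() if n >= min_authors]

papers = {2013: [("a1", {"svm"})],
          2015: [("a2", {"deep-qa"}), ("a3", {"deep-qa"})],
          2016: [("a4", {"deep-qa"}), ("a5", {"svm"})]}
print(candidate_concepts(papers, 2016))       # ['deep-qa']
```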
Abstract:
Prolonged high-intensity training seems to result in increased systemic inflammation, which might explain muscle injury, delayed onset muscle soreness, and overtraining syndrome in athletes. Furthermore, an impaired immune function caused by strenuous exercise leads to the development of upper respiratory tract infections in athletes. Nutraceuticals might help counteract these performance-lowering effects. The use of nanotechnology is an interesting alternative to supply athletes with nutraceuticals, as many of these substances are insoluble in water and are poorly absorbed in the digestive tract. The present chapter starts with a brief review of the effects of exercise on immunity, followed by an analysis on how nutraceuticals such as omega-3 fatty acids, glutamine, BCAAs, or phytochemicals can counteract negative effects of strenuous exercise in athletes. Finally, how nanostructured delivery systems can constitute a new trend in enhancing bioavailability and optimizing the action of nutraceuticals will be discussed, using the example of food beverages.
Abstract:
Resuscitation and stabilization are key issues in Intensive Care Burn Units, and early survival predictions help to decide the best clinical action during these phases. Current burn survival scores focus on clinical variables such as age or the body surface area. However, the evolution of other parameters (e.g. diuresis or fluid balance) during the first days is also valuable knowledge. In this work we suggest a methodology and propose a Temporal Data Mining algorithm to estimate the survival condition from the patient's evolution. Experiments conducted on 480 patients show improved survival prediction.
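The paper's Temporal Data Mining algorithm is not specified here; the hedged sketch below illustrates the general idea of turning a patient's first-days evolution into simple trend features for a survival classifier. The features (slope and mean of daily diuresis and fluid balance) and the classifier choice are illustrative assumptions, not the paper's method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hedged sketch: summarise a patient's first-days evolution as trend
# features and classify survival. Features and classifier are
# illustrative assumptions, not the paper's TDM algorithm.

def trend_features(daily_values):
    """Slope and mean of a short daily series (e.g. diuresis)."""
    days = np.arange(len(daily_values))
    slope = np.polyfit(days, daily_values, 1)[0]
    return [slope, float(np.mean(daily_values))]

# toy cohort: (diuresis series, fluid-balance series, survived?)
cohort = [([0.4, 0.7, 1.0], [5.0, 3.0, 1.0], 1),
          ([0.5, 0.3, 0.1], [6.0, 7.0, 8.0], 0)]
X = [trend_features(d) + trend_features(f) for d, f, _ in cohort]
y = [s for _, _, s in cohort]

clf = LogisticRegression().fit(X, y)
print(clf.predict([trend_features([0.4, 0.6, 0.9]) +
                   trend_features([5.5, 3.5, 1.5])]))
```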
Abstract:
In an organisation, any optimisation process faces increasing challenges and requires new approaches to the organisational phenomenon. This work addresses the dynamics of efficiency through intangible variables that may support a different view of corporations, focusing on the challenges that information management and the incorporation of context bring to competitiveness. We present the analysis and development of an intelligent decision support system in terms of a formal agenda built on a Logic Programming based methodology for problem solving, complemented with a computational approach grounded in Artificial Neural Networks. The proposed model is fairly precise, with overall accuracy, sensitivity, and specificity all above 90%. The proposed solution is unique in catering for the explicit treatment of incomplete, unknown, or even self-contradictory information, in either a quantitative or qualitative arrangement.
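The abstract only outlines the hybrid Logic Programming/ANN architecture; the speculative sketch below shows one common way to make unknown or contradictory inputs explicit, pairing each value with a quality-of-information score before feeding a small network. The encoding and architecture are assumptions, not the paper's formal agenda.

```python
from sklearn.neural_network import MLPClassifier

# Speculative sketch: encode each input as (value, quality-of-
# information) so unknown or contradictory data can still be fed
# to a small neural network. Encoding and architecture are
# assumptions, not the paper's methodology.

def encode(value, status):
    """Map (value, status) to [value, QoI]; unknowns get value 0, QoI 0."""
    qoi = {"known": 1.0, "unknown": 0.0, "contradictory": 0.25}[status]
    return [value if status == "known" else 0.0, qoi]

X = [encode(0.8, "known") + encode(0.0, "unknown"),
     encode(0.2, "known") + encode(0.6, "known")]
y = [1, 0]
clf = MLPClassifier(hidden_layer_sizes=(4,), max_iter=2000,
                    random_state=0).fit(X, y)
print(clf.predict([encode(0.7, "known") + encode(0.0, "unknown")]))
```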
Abstract:
This paper presents a study in a field poorly explored for the Portuguese language: modality and its automatic tagging. Our main goal was to find a set of attributes for the creation of automatic taggers with improved performance over the bag-of-words (bow) approach. Performance was measured using precision, recall, and F1. Because it is a relatively unexplored field, the study covers the creation of the corpus (composed of eleven verbs), the use of a parser to extract syntactic and semantic information from the sentences, and a machine learning approach to identify modality values. Based on three different sets of attributes, drawn from the trigger itself, the trigger's path in the parse tree, and the context, the system creates a tagger for each verb, achieving (for almost every verb) an improvement in F1 over the traditional bow approach.
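As a toy illustration of the contrast the paper draws, the sketch below trains a per-occurrence modality tagger on enriched attributes (trigger lemma, parse-tree path, right context) rather than a bag of words. The feature names, examples, and labels are invented for illustration, not taken from the paper's corpus.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy sketch of a modality tagger using enriched attributes instead
# of bag-of-words. Features, examples, and labels are assumptions.

# each instance: one occurrence of a trigger verb, labelled with a
# modality value (e.g. "epistemic" vs "deontic")
train = [
    ({"trigger": "poder", "path": "VP>V", "next": "ser"}, "epistemic"),
    ({"trigger": "poder", "path": "VP>V", "next": "entrar"}, "deontic"),
    ({"trigger": "dever", "path": "S>VP>V", "next": "chegar"}, "epistemic"),
    ({"trigger": "dever", "path": "VP>V", "next": "pagar"}, "deontic"),
]
X, y = zip(*train)
tagger = make_pipeline(DictVectorizer(), LogisticRegression())
tagger.fit(list(X), list(y))
print(tagger.predict([{"trigger": "poder", "path": "VP>V",
                       "next": "chover"}]))
```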
Abstract:
This article reflects on language teaching in the Bachelor's Degree in Computer Science at the University of Mato Grosso (Unemat), Campus Colider, as carried out in the Practice of English Language Teaching. It seeks to illustrate an approach to the educational reality that uses technology through the English language, in which theoretical knowledge underlies actual practices in the construction of the knowledge necessary for teacher education. The methodological procedures comprised, first, a literature review, followed by the presentation of the experience of teaching English in Higher Education and the reflections that constituted the field research. The practices were part of the Continuing Extension Project for Undergraduates and Graduates of the Computer Science Department of the University Campus Valley of Teles Pires (Colider), located in the northern region of Mato Grosso. The interdisciplinary approach encompassing the Teaching Practice and the Extension Project was to develop activities involving observation of and reflection on the school reality, aiming at performance in context, in this case the integration of educational games into the discipline of Instrumental English. The collected data indicated that theoretical and practical knowledge, viewed through literacies, new literacies, multiliteracies, and critical literacies, enhances the quality of education. Finally, it is possible to conclude that this practice, as a curricular component of a degree course, offered analytical reflections on the educational space, using games as a tool for meaningful learning. This highlights the importance of the relationship between theory and practice in teacher training. Thus, the Teaching Practices were a space of transformative praxis that sought to promote autonomy and the preparation of critical-reflective teachers committed to their professional development.
Abstract:
Instead of the costly encryption algorithms traditionally employed in auction schemes, efficient Goldwasser-Micali encryption is used to design a new sealed-bid auction. Multiplicative homomorphism, instead of the traditional additive homomorphism, is exploited to achieve security and high efficiency in the auction. The new scheme is the most efficient non-interactive sealed-bid auction with bid privacy currently known.
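Goldwasser-Micali encrypts single bits, and multiplying two ciphertexts yields an encryption of the XOR of the plaintext bits; this is the multiplicative homomorphism the abstract exploits. A toy sketch with small primes follows (real deployments need large primes; the parameters here are for illustration only).

```python
import random
from math import gcd

# Minimal sketch of Goldwasser-Micali encryption and its XOR
# homomorphism: E(b1) * E(b2) decrypts to b1 XOR b2.
# Toy primes only; real use needs large primes.

p, q = 499, 547              # toy primes with p % 4 == q % 4 == 3
N = p * q
y = N - 1                    # -1 is a non-residue mod p and mod q
                             # when p, q = 3 (mod 4); Jacobi(y, N) = 1

def enc(bit):
    while True:
        r = random.randrange(2, N)
        if gcd(r, N) == 1:
            break
    return (pow(y, bit, N) * pow(r, 2, N)) % N

def dec(c):
    # bit is 0 iff c is a quadratic residue mod p
    return 0 if pow(c, (p - 1) // 2, p) == 1 else 1

b1, b2 = 1, 0
c = (enc(b1) * enc(b2)) % N  # homomorphic combination
assert dec(c) == b1 ^ b2
print("E(b1)*E(b2) decrypts to", dec(c), "= b1 XOR b2")
```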
Abstract:
A new solution to the millionaire problem is designed on the basis of two new techniques: zero test and batch equation. The zero test is a technique for testing whether one or more ciphertexts contain a zero without revealing any other information. Batch equation is a technique for testing the equality of multiple integers. The combination of these two techniques produces the only known solution to the millionaire problem that is simultaneously correct, private, publicly verifiable, and efficient.
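The abstract does not specify the zero-test construction; one standard way to realise such a test with an additively homomorphic scheme is to raise E(m) to a random exponent, so decryption reveals only whether m is zero. A toy sketch using exponential ElGamal (the scheme choice and parameters are assumptions, not the paper's protocol):

```python
import random

# Hedged sketch of a zero test: blinding E(m) by a random exponent r
# yields E(m*r), which decrypts to zero iff m == 0 and to a random
# nonzero value otherwise. Toy exponential ElGamal; parameters are
# assumptions, not the paper's protocol.

P = 2 ** 127 - 1                 # toy prime modulus (Mersenne prime)
G = 3                            # base for the sketch
sk = random.randrange(2, P - 1)  # secret key
pk = pow(G, sk, P)               # public key

def enc(m):
    k = random.randrange(2, P - 1)
    return (pow(G, k, P), pow(G, m, P) * pow(pk, k, P) % P)

def blind(ct, r):
    a, b = ct
    return (pow(a, r, P), pow(b, r, P))          # E(m) -> E(m*r)

def is_zero(ct):
    a, b = ct
    return b * pow(a, P - 1 - sk, P) % P == 1    # g^(m*r) == 1 ?

r = random.randrange(2, P - 1)
print(is_zero(blind(enc(0), r)))  # True:  0 * r == 0
print(is_zero(blind(enc(5), r)))  # False: 5 * r != 0 (w.h.p.)
```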