840 results for Subroutines in Procedural Programming Languages
Abstract:
The State Key Laboratory of Computer Science (SKLCS) is committed to basic research in computer science and software engineering. The research topics of the laboratory include: concurrency theory, theory and algorithms for real-time systems, formal specifications based on context-free grammars, semantics of programming languages, model checking, automated reasoning, logic programming, software testing, software process improvement, middleware technology, parallel algorithms and parallel software, computer graphics and human-computer interaction. This paper describes these topics in some detail and summarizes some results obtained in recent years.
Abstract:
The constraint paradigm is a model of computation in which values are deduced whenever possible, under the limitation that deductions be local in a certain sense. One may visualize a constraint 'program' as a network of devices connected by wires. Data values may flow along the wires, and computation is performed by the devices. A device computes using only locally available information (with a few exceptions), and places newly derived values on other, locally attached wires. In this way computed values are propagated. An advantage of the constraint paradigm (not unique to it) is that a single relationship can be used in more than one direction. The connections to a device are not labelled as inputs and outputs; a device will compute with whatever values are available, and produce as many new values as it can. General theorem provers are capable of such behavior, but tend to suffer from combinatorial explosion; it is not usually useful to derive all the possible consequences of a set of hypotheses. The constraint paradigm places a certain kind of limitation on the deduction process. The limitations imposed by the constraint paradigm are not the only ones possible. It is argued, however, that they are restrictive enough to forestall combinatorial explosion in many interesting computational situations, yet permissive enough to allow useful computations in practical situations. Moreover, the paradigm is intuitive: it is easy to visualize the computational effects of these particular limitations, and the paradigm is a natural way of expressing programs for certain applications, in particular relationships arising in computer-aided design. A number of implementations of constraint-based programming languages are presented: a progression of ever more powerful languages is described, complete implementations are given, and design difficulties and alternatives are discussed. The goal approached, though not quite reached, is a complete programming system which will implicitly support the constraint paradigm to the same extent that LISP, say, supports automatic storage management.
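To make the bidirectional flavour of the paradigm concrete, the following minimal Haskell sketch (illustrative only; the names and representation are not from the thesis) models a single two-way constraint device for the relation 9c = 5(f - 32). Neither port is designated an input or an output: the device computes with whichever value happens to be available and propagates the other.

    -- A toy two-way constraint device for 9*c = 5*(f - 32).
    -- Each port holds Nothing (no value yet) or Just a known value.
    converter :: (Maybe Double, Maybe Double) -> (Maybe Double, Maybe Double)
    converter (Just c, Nothing) = (Just c, Just (c * 9 / 5 + 32))    -- deduce f from c
    converter (Nothing, Just f) = (Just ((f - 32) * 5 / 9), Just f)  -- deduce c from f
    converter state             = state                              -- nothing new to deduce

A full constraint network would iterate such purely local steps across many devices until no device can place a new value on a wire, which is the propagation behaviour the abstract describes.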
Abstract:
The work reported here lies in the area of overlap between artificial intelligence and software engineering. As research in artificial intelligence, it is a step towards a model of problem solving in the domain of programming. In particular, this work focuses on the routine aspects of programming which involve the application of previous experience with similar programs. I call this programming by inspection. Programming is viewed here as a kind of engineering activity. Analysis and synthesis by inspection are a prominent part of expert problem solving in many other engineering disciplines, such as electrical and mechanical engineering. The notion of inspection methods in programming developed in this work is motivated by similar notions in other areas of engineering. This work is also motivated by current practical concerns in the area of software engineering. The inadequacy of current programming technology is universally recognized. Part of the solution to this problem will be to increase the level of automation in programming. I believe that the next major step in the evolution of more automated programming will be interactive systems which provide a mixture of partially automated program analysis, synthesis and verification. One such system being developed at MIT, called the programmer's apprentice, is the immediate intended application of this work. This report concentrates on the knowledge base of the programmer's apprentice, which takes the form of a taxonomy of commonly used algorithms and data structures. To the extent that a programmer is able to construct and manipulate programs in terms of the forms in such a taxonomy, he may relieve himself of many details and generally raise the conceptual level of his interaction with the system, as compared with present day programming environments. Also, since it is practical to expend a great deal of effort pre-analyzing the entries in a library, the difficulty of verifying the correctness of programs constructed this way is correspondingly reduced. The feasibility of this approach is demonstrated by the design of an initial library of common techniques for manipulating symbolic data. This document also reports on the further development of a formalism called the plan calculus for specifying computations in a programming language independent manner. This formalism combines both data and control abstraction in a uniform framework that has facilities for representing multiple points of view and side effects.
Abstract:
Inferring types for polymorphic recursive function definitions (abbreviated to polymorphic recursion) is a recurring topic on the mailing lists of popular typed programming languages. This is despite the fact that type inference for polymorphic recursion using forall-types has been proved undecidable. This report presents several programming examples involving polymorphic recursion and determines their typability under various type systems, including the Hindley-Milner system, an intersection-type system, and extensions of these two. The goal of this report is to show that many of these examples are typable using a system of intersection types as an alternative form of polymorphism. By accomplishing this, we hope to lay the foundation for future research into a decidable intersection-type inference algorithm. We do not provide a comprehensive survey of type systems appropriate for polymorphic recursion, with or without type annotations inserted in the source language. Rather, we focus on examples for which types may be inferred without type annotations.
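For orientation, a textbook example of polymorphic recursion (illustrative; not necessarily one of the report's own examples) is a function over a nested, non-regular datatype in Haskell. The recursive call occurs at a different instance of the type, so the definition typechecks only with an explicit signature; inferring such a type without the annotation is the problem shown to be undecidable.

    -- A non-regular datatype: each level of nesting pairs up the elements.
    data Nested a = Epsilon | Cons a (Nested (a, a))

    -- 'size' counts the scalar elements. The recursive call is at type
    -- Nested (a, a), not Nested a, so the recursion is polymorphic and
    -- the signature below is required for the definition to typecheck.
    size :: Nested a -> Int
    size Epsilon     = 0
    size (Cons _ xs) = 1 + 2 * size xs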
Exploring processes of indeterminate determinism in music composition, programming and improvisation
Abstract:
This portfolio consists of 15 original musical works. Taking the form of electronic and acousmatic music, multimedia, and scores, these chamber works are the result of experimentation and improvisation with individually built computer interfaces. The accompanying commentary provides discourse on the conceptual practice of these interfaces becoming a compositional entity that presents a multi-interpretative opportunity to explore, engage, and personalise. Following this, the commentary examines the path of creative decisions and musical choices that formed both these interfaces and the resulting musical and visual works. This portfolio is accompanied by the interfaces used, transcoded interfacing behavioural information, and documented improvisational findings.
Abstract:
This work considers the static calculation of a program’s average-case time. The number of systems that currently tackle this research problem is quite small due to the difficulties inherent in average-case analysis. While each of these systems makes a pertinent contribution, and each is discussed individually in this work, only one of them forms the basis of this research. That particular system is known as MOQA. The MOQA system consists of the MOQA language and the MOQA static analysis tool. Its technique for statically determining average-case behaviour centres on maintaining strict control over both the data structure type and the labeling distribution. This research develops and evaluates the MOQA language implementation, and adds to the functions already available in this language. Furthermore, the theory that backs MOQA is generalised, and the range of data structures for which the MOQA static analysis tool can determine average-case behaviour is increased. Also, some of the MOQA applications and extensions suggested in other works are logically examined here. For example, the accuracy of classifying the MOQA language as reversible is investigated, along with the feasibility of incorporating duplicate labels into the MOQA theory. Finally, the analyses that take place during the course of this research reveal some of MOQA’s strengths and weaknesses. This thesis aims to be pragmatic when evaluating the current MOQA theory, the advancements set forth in the following work and the benefits of MOQA when compared to similar systems. Succinctly, this work’s significant expansion of the MOQA theory is accompanied by a realistic assessment of MOQA’s accomplishments and a serious deliberation of the opportunities available to MOQA in the future.
Abstract:
The training and ongoing education of medical practitioners has undergone major changes in an incremental fashion over the past 15 years. These changes have been driven by patient safety, educational, economic and legislative/regulatory factors. In the near future, training in procedural skills will undergo a paradigm shift to proficiency-based progression, with associated requirements for competence-based programmes, valid and reliable assessment tools, and simulation technology. Before training begins, the learning outcomes require clear definition; any form of assessment applied should include measurement of these outcomes. Currently, training in a procedural skill often takes place on an ad hoc basis. The number of attempts necessary to attain a defined degree of proficiency varies from procedure to procedure. Convincing evidence exists that simulation training helps trainees to acquire skills more efficiently than relying on opportunities in their clinical practice. Simulation provides a safe, stress-free environment for trainees for skill acquisition, generalization and transfer via deliberate practice. The work described in this thesis contributes to a greater understanding of how medical procedures can be performed more safely and effectively through education. Feedback based on knowledge of performance, provided to novices in a standardized setting on a bench model, was associated with an increase in the speed of skill acquisition and a decrease in error rate during initial learning. The timing of feedback was also associated with effective learning of skill. A marked attrition of skills (independent of the type of feedback provided) was demonstrable 24 hours after they had first been learned. Using the principles of feedback described above, the effect of an intense training programme (of the kind represented by present training programmes and courses comprising an intense training day for one or more procedures) was then studied on novices with varied years of experience in anaesthesia. There was a marked attrition of skill at 24 hours, with a significant correlation with increasing years of experience; there also appeared to be an inverse relationship between years of experience in anaesthesia and performance. The greater the number of years of practice experience, the longer a learner required to acquire a new skill. The findings of the studies described in this thesis may have important implications for trainers, trainees and training bodies in the design and implementation of training courses and the formats of delivery of changing curricula. Both curricula and training modalities will need to take account of the characteristics of individual learners and the dynamic nature of procedural healthcare.
Abstract:
BACKGROUND: With the globalization of clinical trials, a growing emphasis has been placed on the standardization of the workflow in order to ensure the reproducibility and reliability of the overall trial. Despite the importance of workflow evaluation, to our knowledge no previous studies have attempted to adapt existing modeling languages to standardize the representation of clinical trials. Unified Modeling Language (UML) is a computational language that can be used to model operational workflow, and a UML profile can be developed to standardize UML models within a given domain. This paper's objective is to develop a UML profile to extend the UML Activity Diagram schema into the clinical trials domain, defining a standard representation for clinical trial workflow diagrams in UML. METHODS: Two Brazilian clinical trial sites in rheumatology and oncology were examined to model their workflow and collect time-motion data. UML modeling was conducted in Eclipse, and a UML profile was developed to incorporate information used in discrete event simulation software. RESULTS: Ethnographic observation revealed bottlenecks in workflow: these included tasks requiring the full commitment of clinical research coordinators (CRCs), transferring notes from paper to computers, deviations from standard operating procedures, and conflicts between different IT systems. Time-motion analysis revealed that nurses' activities took up the most time in the workflow and contained a high frequency of shorter-duration activities. Administrative assistants performed more activities near the beginning and end of the workflow. Overall, clinical trial tasks had a greater frequency than clinic routines or other general activities. CONCLUSIONS: This paper describes a method for modeling clinical trial workflow in UML and standardizing these workflow diagrams through a UML profile. In the increasingly global environment of clinical trials, the standardization of workflow modeling is a necessary precursor to conducting a comparative analysis of international clinical trials workflows.
Abstract:
Of key importance to oil and gas companies is the size distribution of fields in the areas that they are drilling. Recent arguments suggest that there are many more fields yet to be discovered in mature provinces than had previously been thought because the underlying distribution is monotonic not peaked. According to this view the peaked nature of the distribution for discovered fields reflects not the underlying distribution but the effect of economic truncation. This paper contributes to the discussion by analysing up-to-date exploration and discovery data for two mature provinces using the discovery-process model, based on sampling without replacement and implicitly including economic truncation effects. The maximum likelihood estimation involved generates a high-dimensional mixed-integer nonlinear optimization problem. A highly efficient solution strategy is tested, exploiting the separable structure and handling the integer constraints by treating the problem as a masked allocation problem in dynamic programming.
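For orientation, the discovery-process model referred to here is usually stated as a successive-sampling likelihood: if the province contains N fields of sizes y_1, ..., y_N and fields 1, ..., n were discovered in that order, then (in one standard formulation; the paper's exact parameterization may differ)

    L(\beta) = \prod_{i=1}^{n} \frac{y_i^{\beta}}{\sum_{j=i}^{N} y_j^{\beta}}

where the exponent \beta captures how strongly size biases a field towards earlier discovery. Maximizing this likelihood jointly over \beta and the number and sizes of the N - n undiscovered fields is what gives rise to the high-dimensional mixed-integer nonlinear optimization problem described above.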
The Trade-Off Between Implicit and Explicit Data Distribution in Shared-Memory Programming Paradigms
Abstract:
Introduction
The use of video capture of lectures in Higher Education is not a recent occurrence; web-based learning technologies, including digital recording of live lectures, have become increasingly common offerings by universities throughout the world (Holliman and Scanlon, 2004). However, in the past decade, improvements in technical infrastructure, including the availability of high-speed broadband, have increased the potential and use of video lecture capture. This has led to a variety of lecture capture formats, including podcasting, live streaming, and delayed broadcasting of whole or partial lectures.
Additionally, in the past five years there has been a significant increase in the popularity of online learning, specifically via Massive Open Online Courses (MOOCs) (Vardi, 2014). One of the key aspects of MOOCs is the recording of simulated lecture-like activities. There has been, and continues to be, much debate on the consequences of the popularity of MOOCs, especially in relation to their potential uses within established university programmes.
There have been a number of studies dedicated to the effects of videoing lectures.
The clustered areas of research in video lecture capture have the following main themes:
• Staff perceptions including attendance, performance of students and staff workload
• Reinforcement versus replacement of lectures
• Improved flexibility of learning
• Facilitating engaging and effective learning experiences
• Student usage, perception and satisfaction
• Facilitating students learning at their own pace
Most of the body of research has concentrated on student and faculty perceptions, including academic achievement, student attendance and engagement (Johnston et al., 2012).
Generally, the research has been positive in its review of the benefits of lecture capture for both students and faculty. This perception, coupled with technical infrastructure improvements and student demand, may well mean that the use of video lecture capture will continue to increase in tertiary education over the coming years. However, there is relatively little research on the effects of lecture capture specifically in the area of computer programming, with Watkins et al. (2007) being one of the few studies. Video delivery of programming solutions is particularly useful for enabling a lecturer to illustrate the complex decision-making processes and iterative nature of the actual code development process (Watkins et al., 2007). As such, research in this area would appear to be particularly appropriate to help inform debate and future decisions made by policy makers.
Research questions and objectives
The purpose of the research was to investigate how a series of lecture captures (in which the audio of lectures and video of on-screen projected content were recorded) impacted on the delivery and learning of a programme of study in an MSc Software Development course at Queen’s University, Belfast, Northern Ireland. The MSc is a conversion programme, intended to take graduates from non-computing primary degrees and upskill them in this area. The research specifically targeted the Java programming module within the course. It also analyses and reports on empirical data from attendance records and various video viewing statistics. In addition, qualitative data was collected from staff and student feedback to help contextualise the quantitative results.
Methodology, Methods and Research Instruments Used
The study was conducted with a cohort of 85 postgraduate students taking a compulsory module in Java programming in the first semester of a one-year MSc in Software Development. A pre-course survey of students found that 58% preferred to have available videos of “key moments” of lectures rather than whole lectures. A large-scale study carried out by Guo concluded that “shorter videos are much more engaging” (Guo, 2013). Of concern was the potential for low audience retention for videos of whole lectures.
The lecturers recorded snippets of the lecture directly before or after its actual physical delivery, in a quiet environment, and then uploaded the videos directly to a closed YouTube channel. These snippets generally concentrated on significant parts of the theory, followed by related coding demonstration activities, and faithfully replicated the face-to-face lecture. Generally, each lecture was supported by two to three videos of durations ranging from 20 to 30 minutes.
Attendance
The MSc programme has several attendance-based modules, of which Java Programming was one element. In order to assess the effect on attendance for the programming module, a control was established: a Database module taken by the same students and running in the same semester.
Access and engagement
The videos were hosted on a closed YouTube channel made available only to the students in the class. The channel had analytics enabled, which reported on the following areas for all videos and for each individual video: views (hits), audience retention, viewing devices / operating systems used, and minutes watched.
Student attitudes
Three surveys were conducted to investigate student attitudes towards the videoing of lectures: the first before the start of the programming module, the second at the mid-point, and the third after the programme was complete.
The questions in the first survey were targeted at eliciting student attitudes towards lecture capture before they had experienced it in the programme. The midpoint survey gathered data on how the students were individually using the system up to that point. This included feedback on how many videos an individual had watched, viewing duration, primary reasons for watching and the effect on attendance, in addition to probing for comments or suggestions. The final survey, on course completion, contained questions similar to the midpoint survey but took a summative view of the whole video programme.
Conclusions and Outcomes
The study confirmed the findings of other such investigations, showing that there is little or no effect on attendance at lectures. The use of the videos appears to help promote continual learning, but they are particularly accessed by students during assessment periods. Students respond positively to the ability to access lectures digitally, as a means of reinforcing learning experiences rather than replacing them. Feedback from students was overwhelmingly positive, indicating that the videos benefited their learning. There are also significant benefits to recording parts of lectures rather than whole lectures. The viewing-trend analytics suggest that, despite the increase in the popularity of online learning via MOOCs and the promotion of video learning on mobile devices, the vast majority of students in this study accessed the online videos at home on laptops or desktops. In part, however, this is likely due to the nature of the taught subject, that being programming.
The research involved pre-recording the lecture in smaller timed units and then uploading them for distribution, to counteract existing quality issues with recording entire live lectures. However, advances in, and the consequent improvement in the quality of, in situ lecture capture equipment may well negate the need to record elsewhere. The research has also highlighted an area of potentially very significant use for performance analysis and improvement that could have major implications for the quality of teaching: a study of the analytics of video viewings could well provide a quick-response formative feedback mechanism for the lecturer. If a videoed lecture, whether recorded live or later, is a true reflection of the face-to-face lecture, an analysis of the viewing patterns for the video may well reveal trends that correspond with the live delivery.
Abstract:
Doctoral thesis, Informatics (Computer Science), Universidade de Lisboa, Faculdade de Ciências, 2015
Abstract:
Wireless Sensor Networks (WSN) are being used for a number of applications involving infrastructure monitoring, building energy monitoring and industrial sensing. The difficulty of programming individual sensor nodes and the associated overhead have encouraged researchers to design macro-programming systems which can help program the network as a whole or as a combination of subnets. Most current macro-programming schemes do not support multiple users seamlessly deploying diverse applications on the same shared sensor network. As WSNs become more common, it is important to provide such support, since it enables higher-level optimizations such as code reuse, energy savings, and traffic reduction. In this paper, we propose a macro-programming framework called Nano-CF which, in addition to supporting in-network programming, allows multiple applications written by different programmers to be executed simultaneously on a sensor networking infrastructure. This framework enables the use of a common sensing infrastructure for a number of applications without the users having to worry about the applications already deployed on the network. The framework also supports timing constraints and resource reservations using the Nano-RK operating system. Nano-CF is efficient at improving WSN performance by (a) combining multiple user programs, (b) aggregating packets for data delivery, and (c) satisfying timing and energy specifications using Rate-Harmonized Scheduling. Using representative applications, we demonstrate that Nano-CF achieves 90% reduction in Source Lines-of-Code (SLoC) and 50% energy savings from aggregated data delivery.
Abstract:
Currently, the teaching-learning process in domains such as computer programming is characterized by extensive curricula and high student enrolment. This poses a great workload for faculty and teaching assistants responsible for the creation, delivery, and assessment of student exercises. The main goal of this chapter is to foster practice-based learning in complex domains. This objective is attained with an e-learning framework, called Ensemble, as a conceptual tool to organize and facilitate technical interoperability among services. The Ensemble framework is used on a specific domain: computer programming. Content issues are tackled with a standard format to describe programming exercises as learning objects. Communication is achieved with the extension of existing specifications for the interoperation with several systems typically found in an e-learning environment. In order to evaluate the acceptability of the proposed solution, an Ensemble instance was validated in a classroom experiment, with encouraging results.