976 results for Introductory Programming, Tutoring, Feedback, eLearning, Program Annotations


Relevance:

30.00%

Publisher:

Abstract:

This study investigated the effectiveness of goal setting and attributional feedback on self-efficacy for writing and on the writing achievement of students who are gifted underachievers. Students in grades 3, 4 and 5 participated. Five dependent measures were investigated: fluency, syntax, range, diversity and organization. The results indicated that a systematic writing instruction program increased self-efficacy for writing. In addition, the self-efficacy strategies of goal setting and attributional feedback improved self-efficacy and increased some areas of writing achievement; the dependent measures most affected were fluency, syntax and organization. The students in this study did not improve their levels of vocabulary. The study also offers many practical applications for teachers to use in a classroom setting.

Relevance:

30.00%

Publisher:

Abstract:

The Multicultural Communication Bridge Program, an ongoing project at the Broward Correctional Institution, utilizes creative movement, writing, and drawing as treatment modalities with long-term incarcerated women. This type of programming is new in the prison system; thus, literature and research supporting its outcomes with this population are lacking. A qualitative study was therefore conducted to determine the efficacy of the program. Nine inmates who had been involved in the program for at least one year were interviewed to gather information about their personal experiences as a result of their participation. Common themes included an increase in trust, the expression of emotions, an increase in self-esteem, and an improvement in interactions with others. These attributes are believed to benefit the women by supporting a successful community reintegration upon their release from prison.

Relevance:

30.00%

Publisher:

Abstract:

Nucleic acid hairpins have been a subject of study for the last four decades. They are composed of a single strand that hybridizes to itself, with the central section forming an unhybridized loop. In nature, hairpins stabilize single-stranded RNA and serve as nucleation sites for RNA folding, as protein recognition signals, and as elements of mRNA localization and of the regulation of mRNA degradation. DNA hairpins in biological contexts, on the other hand, have been studied with respect to forming cruciform structures that can regulate gene expression.

The use of DNA hairpins as fuel for synthetic molecular devices, including locomotion, was proposed and experimentally demonstrated in 2003. Hairpins are interesting because they provide an on-demand energy/information supply mechanism. The energy/information is hidden (from hybridization) in the hairpin's loop until required, and is harnessed by opening the stem region and exposing the single-stranded loop. The loop region is then free to hybridize and help move the system into a thermodynamically favourable state. This hidden energy and information, coupled with programmability, provides a further capability: selectively choosing which reactions to hide and which to allow to proceed, which helps develop a topological sequence of events.

Hairpins have been utilized as a source of fuel for many different DNA devices. In this thesis, we program four different molecular devices using DNA hairpins and validate them experimentally in the laboratory. 1) The first device is a novel enzyme-free, autocatalytic, self-replicating system composed entirely of DNA that operates isothermally. 2) The second device, time-responsive circuits made of DNA, has two properties: a) the circuits are asynchronous, so the final output is always correct regardless of differences in the arrival time of different inputs; and b) they are renewable, so they can be used multiple times without major degradation of the gate motifs (if the inputs change over time, the DNA-based circuit re-computes the output correctly based on the new inputs). 3) The third device, activatable tiles, is a theoretical extension of the tile assembly model that enhances its robustness by protecting the sticky sides of tiles until a tile is partially incorporated into a growing assembly. 4) The fourth device provides controlled amplification of a DNA catalytic system: the amplification does not run uncontrollably until the system runs out of fuel, but instead achieves a finite amount of gain.

Nucleic acid circuits with the ability to perform complex logic operations have many potential practical applications, for example point-of-care diagnostics. We discuss the designs of our DNA hairpin molecular devices, the results we have obtained, and the challenges we have overcome to make these devices truly functional.

Relevance:

30.00%

Publisher:

Abstract:

Intelligent Tutoring Systems (ITSs) are computerized systems for learning by doing. These systems provide students with immediate and customized feedback on learning tasks. An ITS typically consists of several interconnected modules. This research focuses on the distribution of the ITS module that provides expert knowledge services. Distributing such an expert knowledge module requires an architectural style, because a standard interface increases the reusability and interoperability of the module. To provide expert knowledge modules in a distributed way, we address the research question: 'How can we compare and evaluate REST, Web services and Plug-in architectural styles for the distribution of the expert knowledge module in an intelligent tutoring system?'. We present an assessment method for selecting an architectural style. Applying the assessment method to the three architectural styles, we selected REST as the style that best supports the distribution of expert knowledge modules, and we analyzed the trade-offs that come with this choice. We present a prototype and architectural views based on REST to demonstrate that the assessment method correctly scores REST as an appropriate architectural style for the distribution of expert knowledge modules.
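
As a rough illustration of the selected style (not the paper's actual prototype), the sketch below exposes a hypothetical expert knowledge service over HTTP using only the JDK's built-in server; the endpoint path, port, and canned feedback payload are invented for the example.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Minimal REST-style endpoint for an expert knowledge module (illustrative only).
public class ExpertKnowledgeService {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        // GET /feedback returns a canned hint; a real module would analyse the
        // submitted student solution and return structured feedback (e.g. JSON).
        server.createContext("/feedback", exchange -> {
            byte[] body = "{\"hint\": \"Check the loop bound in your solution.\"}"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
        System.out.println("Expert knowledge service on http://localhost:8080/feedback");
    }
}
```

A tutoring front end would then consume this endpoint like any other REST resource, which is precisely the interchangeability the assessment method rewards.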

Relevance:

30.00%

Publisher:

Abstract:

Conventional teaching practices often have difficulty keeping students motivated and engaged. Video games, however, are very successful at sustaining high levels of motivation and engagement through a set of tasks for hours without apparent loss of focus. In addition, gamers solve complex problems within a gaming environment without feeling the fatigue or frustration they would typically experience with a comparable learning task. Based on this notion, the academic community is keen on exploring methods that can deliver deep learner engagement and has shown increased interest in adopting gamification – the integration of gaming elements, mechanics, and frameworks into non-game situations and scenarios – as a means to increase student engagement and improve information retention. Its effectiveness when applied to education has been debatable, though, as attempts have generally been restricted to one-dimensional approaches such as transposing a trivial reward system onto existing teaching materials and/or assessments. Nevertheless, a gamified, multi-dimensional, problem-based learning approach can yield improved results even when applied to a very complex and traditionally dry task such as the teaching of computer programming, as shown in this paper. The presented quasi-experimental study used a combination of instructor feedback, a real-time sequence of scored quizzes, and live coding to deliver a fully interactive learning experience. More specifically, the "Kahoot!" Classroom Response System (CRS), the classroom version of the TV game show "Who Wants To Be A Millionaire?", and Codecademy's interactive platform formed the basis for a learning model which was applied to an entry-level Python programming course. Students were thus able to experience multiple interlocking methods similar to those commonly found in a top-quality game experience. To assess gamification's impact on learning, empirical data from the gamified group were compared to those from a control group which was taught through a traditional learning approach, similar to the one used with previous cohorts. Despite this being a relatively small-scale study, the results and findings for a number of key metrics, including attendance, downloading of course material, and final grades, were encouraging and showed that the gamified approach was motivating and enriching for both students and instructors.

Relevance:

30.00%

Publisher:

Abstract:

We explore the relationships between the construction of a work of art and the crafting of a computer program in Java and suggest that the structure of paintings and drawings may be used to teach the fundamental concepts of computer programming. This movement "from Art to Science", using art to drive computing, complements the common use of computing to inform art. We report on initial experiences using this approach with undergraduate and postgraduate students. An embryonic theory of the correspondence between art and computing is presented and a methodology proposed to develop this project further.

Relevance:

30.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance:

30.00%

Publisher:

Abstract:

Performing macroscopy in pathology involves planning and implementing methods for the selection, description and collection of biological material from human organs and tissues, contributing actively to clinical pathology analysis by preparing the macroscopic report and by collecting and identifying fragments, according to standardized protocols and internationally established criteria for determining prognosis. The Macroscopy in Pathology course is a full-year program with theoretical and practical components taught by pathologists. It is divided into weekly modules by organ/system surgical pathology and includes a practical "hands-on" component in pathology departments. The students are 50 biomedical scientists, aged 22 to 50, from all across the country who want to acquire competences in macroscopy. A blended learning strategy was used in order to give students the opportunity to attend at a distance, to support the contents, lessons and interaction with colleagues and teachers, and to facilitate the formative/summative assessment.

Relevance:

30.00%

Publisher:

Abstract:

In this paper we explain how recursion operators can be used to structure and reason about program semantics within a functional language. In particular, we show how the recursion operator fold can be used to structure denotational semantics, how the dual recursion operator unfold can be used to structure operational semantics, and how algebraic properties of these operators can be used to reason about program semantics. The techniques are explained with the aid of two main examples, the first concerning arithmetic expressions, and the second concerning Milner's concurrent language CCS. The aim of the paper is to give functional programmers new insights into recursion operators, program semantics, and the relationships between them.
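
For readers unfamiliar with the idea, here is a small sketch, transcribed into Java rather than the functional notation of the paper, of how a denotational semantics for arithmetic expressions arises as a fold over the expression datatype; the type and method names are ours, not the paper's.

```java
import java.util.function.BinaryOperator;
import java.util.function.IntFunction;

// A toy arithmetic expression language: Expr ::= Lit n | Add Expr Expr
sealed interface Expr permits Lit, Add {}
record Lit(int value) implements Expr {}
record Add(Expr left, Expr right) implements Expr {}

final class Semantics {
    // The fold recursion operator: replace each Lit by 'lit' and each Add by 'add'.
    static <R> R fold(Expr e, IntFunction<R> lit, BinaryOperator<R> add) {
        if (e instanceof Lit l) {
            return lit.apply(l.value());
        }
        Add a = (Add) e;   // sealed hierarchy: the only remaining case
        return add.apply(fold(a.left(), lit, add), fold(a.right(), lit, add));
    }

    // Denotational semantics as a fold: the meaning of an expression is its value.
    static int eval(Expr e) {
        return Semantics.<Integer>fold(e, n -> n, Integer::sum);
    }

    public static void main(String[] args) {
        Expr e = new Add(new Lit(1), new Add(new Lit(2), new Lit(3)));
        System.out.println(eval(e)); // 6
    }
}
```

The corresponding dual construction, structuring an operational semantics with unfold, is developed in the paper itself.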

Relevance:

30.00%

Publisher:

Abstract:

Maternal obesity has been shown to increase the risk of adverse reproductive health outcomes such as gestational diabetes, hypertension, and preeclampsia. Moreover, several studies have indicated that overnutrition and maternal obesity adversely program the development of offspring by predisposing them to obesity and other chronic diseases later in life. The exact molecular mechanisms leading to developmental programming are not known, but it has recently been suggested that obesity-related low-grade inflammation, the gut microbiota and epigenetic gene regulation (in particular DNA methylation) participate in the developmental programming phenomenon. The aim of this thesis was to evaluate the effect of diet, dietary counseling and probiotic intervention during pregnancy in promoting favorable developmental programming. The study population consisted of 256 mother–child pairs participating in the NAMI (Nutrition, Allergy, Mucosal immunology and Intestinal microbiota) study, a prospective, double-blinded dietary counseling and probiotic intervention (Lactobacillus rhamnosus GG and Bifidobacterium lactis Bb12). In addition, overweight women were recruited from maternal welfare clinics in the area of Southwest Finland and from the prenatal outpatient clinic at Turku University Hospital. Dietary counseling aimed to modify the women's dietary intake to comply with the recommended intake for pregnant women; specifically, it aimed to affect the type of fat consumed and to increase the amount of fiber in the women's diets. Leptin concentration was used as a marker of obesity-related low-grade inflammation, antioxidant vitamin status as a marker of the efficacy of the dietary counseling, and epigenetic DNA methylation of obesity-related genes as a marker of the probiotics' influence. The results revealed that dietary intake may modify obesity-associated low-grade inflammation as measured by serum leptin concentration. Specifically, dietary fiber intake may lower leptin concentration in women, whereas intakes of saturated fatty acids and sucrose have the opposite effect. Neither dietary counseling nor the probiotic intervention modified leptin concentration in women, but probiotics tended to increase children's leptin concentration. Dietary counseling was an efficient tool for improving antioxidant vitamin intake in women, which was reflected in breast milk vitamin concentrations. The probiotic intervention affected the DNA methylation of dozens of obesity- and weight gain-related genes in both the women and their children. Altogether, these results indicate that dietary components, dietary counseling and probiotic supplementation during pregnancy may modify the intrauterine environment towards favorable developmental programming.

Relevance:

30.00%

Publisher:

Abstract:

Oocytes are arrested for long periods of time in the prophase of the first meiotic division (prophase I). As chromosome condensation poses significant constraints on gene expression, the mechanisms regulating transcriptional activity in the prophase I-arrested oocyte are still not entirely understood. We hypothesized that gene expression during the prophase I arrest is primarily epigenetically regulated. Here we comprehensively define the Drosophila female germ line epigenome throughout oogenesis and show that the oocyte has a unique, dynamic and remarkably diversified epigenome, characterized by the presence of both euchromatic and heterochromatic marks. We observed that perturbation of the oocyte's epigenome in early oogenesis, through depletion of the dKDM5 histone demethylase, results in the temporal deregulation of meiotic transcription and affects female fertility. Taken together, our results indicate that the early programming of the oocyte epigenome primes meiotic chromatin for subsequent functions in late prophase I.

Relevance:

30.00%

Publisher:

Abstract:

The persistence concern implemented as an aspect has been studied since the appearance of the Aspect-Oriented paradigm. Persistence is frequently given as an example of a concern that can be aspectized, but until today no real-world solution has applied that paradigm. Such a solution should enhance programmer productivity and make applications less prone to errors. To test the viability of the concept, in a previous study we developed a prototype that implements Orthogonal Persistence as an aspect. This first version of the prototype was already fully functional with all Java types, including arrays. In this work we present the results of our new research to overcome some limitations we identified in the prototype's data type abstraction and transparency. One of our goals was to avoid the standard Java idiom for genericity, based on casts, type tests and subtyping. Moreover, we also found the need to introduce some dynamic data type capabilities. We consider Reflection to be the solution to those issues. To achieve this, we have extended our prototype with a new static weaver that preprocesses the application source code, changing the normal behavior of the Java compiler through newly generated reflective code.
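
As a rough illustration of the reflective approach (not the prototype's actual weaver output), the hypothetical snippet below captures an arbitrary object's state without per-type casts or type tests, which is the kind of dynamic data-type handling the abstract refers to; all names are invented for the example.

```java
import java.lang.reflect.Field;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical illustration: reflection lets a persistence layer capture any
// object's state generically, without per-type casts or instanceof tests.
final class ReflectiveSnapshot {

    static Map<String, Object> snapshot(Object obj) throws IllegalAccessException {
        Map<String, Object> state = new LinkedHashMap<>();
        // Walk the class hierarchy so inherited fields are captured as well.
        for (Class<?> c = obj.getClass(); c != null && c != Object.class; c = c.getSuperclass()) {
            for (Field f : c.getDeclaredFields()) {
                f.setAccessible(true);                  // reach private fields
                state.put(c.getSimpleName() + "." + f.getName(), f.get(obj));
            }
        }
        return state;
    }

    public static void main(String[] args) throws Exception {
        record Point(int x, int y) {}                   // any type works, no interface required
        System.out.println(snapshot(new Point(3, 4)));  // e.g. {Point.x=3, Point.y=4}
    }
}
```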

Relevance:

30.00%

Publisher:

Abstract:

Applications are subject to a continuous evolution process with a profound impact on their underlying data model, and hence require frequent updates to the applications' class structure and to the database structure as well. This twofold problem, schema evolution and instance adaptation, usually known as database evolution, is addressed in this thesis. Additionally, we address concurrency and error recovery problems with a novel meta-model and its aspect-oriented implementation. Modern object-oriented databases provide features that help programmers deal with object persistence, as well as with related problems such as database evolution, concurrency and error handling. In most systems there are transparent mechanisms to address these problems; nonetheless, the database evolution problem still requires some human intervention, which consumes much of programmers' and database administrators' work effort. Earlier research has demonstrated that aspect-oriented programming (AOP) techniques enable the development of flexible and pluggable systems. In those earlier works, the schema evolution and instance adaptation problems were addressed as database management concerns; however, none of that research focused on orthogonal persistent systems. We argue that AOP techniques are well suited to address these problems in orthogonal persistent systems. Regarding concurrency and error recovery, earlier research showed that only syntactic obliviousness between the base program and aspects is possible.

Our meta-model and framework follow an aspect-oriented approach focused on the object-oriented orthogonal persistent context. The proposed meta-model is characterized by its simplicity, in order to achieve efficient and transparent database evolution mechanisms. It supports multiple versions of a class structure by applying a class versioning strategy, enabling bidirectional application compatibility among versions of each class structure. That is, the database structure can be updated while earlier applications continue to work, as do later applications that only know the updated class structure. The specific characteristics of orthogonal persistent systems, as well as a metadata enrichment strategy within the application's source code, complete the inception of the meta-model and motivated our research work.

To test the feasibility of the approach, a prototype was developed. The prototype is a framework that mediates the interaction between applications and the database, providing them with orthogonal persistence mechanisms. These mechanisms are introduced into applications as an aspect, in the aspect-oriented sense. Objects do not need to extend any superclass, implement an interface, or carry a particular annotation. Parametric type classes are also correctly handled by our framework; however, classes that belong to the programming environment must not be handled as versionable, due to restrictions imposed by the Java Virtual Machine. Regarding concurrency support, the framework provides applications with a multithreaded environment that supports database transactions and error recovery. The framework keeps applications oblivious to the database evolution problem, as well as to persistence. Programmers can update the applications' class structure, and the framework will produce a new version of it at the database metadata layer. Using our XML-based pointcut/advice constructs, the framework's instance adaptation mechanism can be extended, keeping the framework oblivious to this problem as well.

The potential development gains provided by the prototype were benchmarked. In our case study, the results confirm that the mechanisms' transparency has positive repercussions on programmer productivity, simplifying the entire evolution process at both the application and database levels. The meta-model itself was also benchmarked in terms of complexity and agility: compared with other meta-models, it requires fewer meta-object modifications in each schema evolution step. Other tests were carried out in order to validate the robustness of the prototype and the meta-model; for these tests we used a small OO7 database, chosen for its data model complexity. Since the developed prototype offers some features that were not observed in other known systems, performance benchmarks were not possible; however, the developed benchmark is now available for future performance comparisons with equivalent systems. In order to test our approach in a real-world scenario, we developed a proof-of-concept application. This application was developed without any persistence mechanisms; using our framework and minor changes to the application's source code, we added these mechanisms, and we then tested the application in a schema evolution scenario. This real-world experience showed that applications remain oblivious to persistence and database evolution. In this case study, our framework proved to be a useful tool for programmers and database administrators. Performance issues and the single Java Virtual Machine concurrency model are the major limitations found in the framework.
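
As a loose, hypothetical illustration of what instance adaptation between class versions involves (the thesis framework performs this transparently, driven by its XML pointcut/advice constructs, rather than with hand-written converters like the one below), consider adapting stored instances of an old class version to a newer structure; all names here are invented for the example.

```java
// Two versions of the same application class; under a class versioning strategy
// both structures would coexist at the database metadata layer.
record PersonV1(String fullName) {}
record PersonV2(String firstName, String lastName) {}

final class InstanceAdaptationDemo {
    // Hand-written instance adaptation from the old structure to the new one.
    static PersonV2 adapt(PersonV1 old) {
        String[] parts = old.fullName().split(" ", 2);
        return new PersonV2(parts[0], parts.length > 1 ? parts[1] : "");
    }

    public static void main(String[] args) {
        PersonV2 adapted = adapt(new PersonV1("Ada Lovelace"));
        System.out.println(adapted); // PersonV2[firstName=Ada, lastName=Lovelace]
    }
}
```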

Relevance:

30.00%

Publisher:

Abstract:

Processors with large numbers of cores are becoming commonplace. In order to utilise the available resources in such systems, the programming paradigm has to move towards increased parallelism. However, increased parallelism does not necessarily lead to better performance. Parallel programming models have to provide not only flexible ways of defining parallel tasks, but also efficient methods to manage the created tasks. Moreover, in a general-purpose system, applications residing in the system compete for the shared resources. Thread and task scheduling in such a multiprogrammed multithreaded environment is a significant challenge. In this thesis, we introduce a new task-based parallel reduction model, called the Glasgow Parallel Reduction Machine (GPRM). Our main objective is to provide high performance while maintaining ease of programming. GPRM supports native parallelism; it provides a modular way of expressing parallel tasks and the communication patterns between them. Compiling a GPRM program results in an Intermediate Representation (IR) containing useful information about tasks and their dependencies, as well as the initial mapping information. This compile-time information helps reduce the overhead of runtime task scheduling and is key to high performance. Generally speaking, the granularity and the number of tasks are major factors in achieving high performance. These factors are even more important in the case of GPRM, as it is highly dependent on tasks rather than threads. We use three basic benchmarks to provide a detailed comparison of GPRM with Intel OpenMP, Cilk Plus, and Threading Building Blocks (TBB) on the Intel Xeon Phi, and with GNU OpenMP on the Tilera TILEPro64. GPRM shows superior performance in almost all cases, simply by controlling the number of tasks. GPRM also provides a low-overhead mechanism, called "Global Sharing", which improves performance in multiprogramming situations. We use OpenMP, the most popular model for shared-memory parallel programming, as the main GPRM competitor for solving three well-known problems on both platforms: LU factorisation of sparse matrices, image convolution, and linked list processing. We focus on proposing solutions that best fit GPRM's model of execution. GPRM outperforms OpenMP in all cases on the TILEPro64. On the Xeon Phi, our solution for the LU factorisation results in a notable performance improvement for sparse matrices with large numbers of small blocks. We investigate the overhead of GPRM's task creation and distribution for very short computations using the image convolution benchmark, and show that this overhead can be mitigated by combining smaller tasks into larger ones. As a result, GPRM can outperform OpenMP for convolving large 2D matrices on the Xeon Phi. Finally, we demonstrate that our parallel worksharing construct provides an efficient solution for linked list processing and performs better than the OpenMP implementations on the Xeon Phi. The results are very promising, as they verify that our parallel programming framework for manycore processors is flexible and scalable, and can provide high performance without sacrificing productivity.
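
To make the granularity point concrete, the hedged sketch below (plain Java fork/join, not GPRM or OpenMP; the class name and cutoff value are invented) shows the common pattern of merging many tiny tasks into fewer, coarser ones via a sequential cutoff, so that task-creation and scheduling overhead does not dominate the computation.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Task-based array reduction: the cutoff controls task granularity and count.
final class SumTask extends RecursiveTask<Long> {
    private static final int CUTOFF = 10_000;   // tune to trade task count for granularity
    private final long[] data;
    private final int lo, hi;

    SumTask(long[] data, int lo, int hi) { this.data = data; this.lo = lo; this.hi = hi; }

    @Override protected Long compute() {
        if (hi - lo <= CUTOFF) {                // coarse task: finish sequentially
            long s = 0;
            for (int i = lo; i < hi; i++) s += data[i];
            return s;
        }
        int mid = (lo + hi) >>> 1;
        SumTask left = new SumTask(data, lo, mid);
        left.fork();                            // schedule the left half as a separate task
        long right = new SumTask(data, mid, hi).compute();
        return right + left.join();
    }

    public static void main(String[] args) {
        long[] data = new long[1_000_000];
        java.util.Arrays.fill(data, 1L);
        long total = ForkJoinPool.commonPool().invoke(new SumTask(data, 0, data.length));
        System.out.println(total);              // 1000000
    }
}
```

Raising or lowering CUTOFF changes only the number and size of tasks, not the result, which is exactly the kind of knob the thesis reports tuning to control GPRM's performance.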

Relevance:

30.00%

Publisher:

Abstract:

A correct understanding of how computers run code is essential to learning to program effectively. Lectures have historically been used in programming courses to teach how computers execute code, and students are assessed through traditional evaluation methods, such as exams. Constructivist learning theory objects to students' passiveness during lessons and to traditional quantitative methods for evaluating a complex cognitive process such as understanding. Constructivism proposes complementary techniques, such as conceptual contraposition and colloquies. We enriched the lectures of a "Programming II" (CS2) course by combining conceptual contraposition with program memory tracing, and then evaluated students' understanding of programming concepts through colloquies. Results revealed that these techniques, applied to lectures alone, are insufficient to help students develop satisfactory mental models of the C++ notional machine, and that the colloquies behaved like the most comprehensive of the traditional evaluations conducted in the course.
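
As a loose illustration of what program memory tracing looks like (written in Java here rather than the course's C++, and not taken from the study), the comments below record the notional-machine state after each statement, which is the kind of trace students construct by hand.

```java
// A hand-style memory trace: each comment records the variables (and what the
// references point to) after the statement on that line runs.
public class MemoryTraceDemo {
    public static void main(String[] args) {
        int a = 3;                 // stack: a=3
        int b = a + 1;             // stack: a=3, b=4
        int[] xs = {10, 20, 30};   // stack: a=3, b=4, xs -> heap array [10, 20, 30]
        int[] ys = xs;             // ys aliases the same heap array as xs
        ys[0] = 99;                // heap array becomes [99, 20, 30]; xs[0] is also 99
        System.out.println(xs[0]); // prints 99: xs and ys refer to one array object
    }
}
```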