980 results for open course
Abstract:
© Comer, Clark, Canelas. This study aimed to evaluate how peer-to-peer interactions through writing impact student learning in introductory-level massive open online courses (MOOCs) across disciplines. This article presents the results of a qualitative coding analysis of peer-to-peer interactions in two introductory-level MOOCs: English Composition I: Achieving Expertise and Introduction to Chemistry. Results indicate that peer-to-peer interactions in writing through the forums and through peer assessment enhance learner understanding, link to course learning objectives, and generally contribute positively to the learning environment. Moreover, because forum interactions and peer review occur in written form, our research contributes to open distance learning (ODL) scholarship by highlighting the importance of writing to learn as a significant pedagogical practice that should be encouraged more in MOOCs across disciplines.
Abstract:
Given a probability distribution on an open book (a metric space obtained by gluing a disjoint union of copies of a half-space along their boundary hyperplanes), we define a precise concept of when the Fréchet mean (barycenter) is sticky. This nonclassical phenomenon is quantified by a law of large numbers (LLN) stating that the empirical mean eventually almost surely lies on the (codimension 1 and hence measure 0) spine that is the glued hyperplane, and a central limit theorem (CLT) stating that the limiting distribution is Gaussian and supported on the spine. We also state versions of the LLN and CLT for the cases where the mean is nonsticky (i.e., not lying on the spine) and partly sticky (i.e., on the spine but not sticky). © Institute of Mathematical Statistics, 2013.
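For context, the Fréchet mean referenced above generalizes the Euclidean barycenter to an arbitrary metric space \((M, d)\); the following is the standard definition, with the paper's specific setting being the open book described above:

```latex
\bar{\mu} \;=\; \operatorname*{arg\,min}_{x \in M} \int_M d(x, y)^2 \, \mu(\mathrm{d}y)
```

The empirical Fréchet mean \(\bar{x}_n\) is obtained by replacing \(\mu\) with the empirical measure of the sample \(x_1, \dots, x_n\); stickiness concerns whether \(\bar{x}_n\) eventually lands exactly on the spine almost surely.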
Abstract:
We examined the frequency and impact of exposure to potentially traumatic events among a nonclinical sample of older adults (n = 3,575), a population typically underrepresented in epidemiological research concerning the prevalence of traumatic events. Current PTSD symptom severity and the centrality of events to identity were assessed for events nominated as currently most distressing. Approximately 90% of participants experienced one or more potentially traumatic events. Events that occurred with greater frequency early in the life course were associated with more severe PTSD symptoms compared to events that occurred with greater frequency during later decades. Early life traumas, however, were not more central to identity. Results underscore the differential impact of traumatic events experienced throughout the life course. We conclude with suggestions for further research concerning mechanisms that promote the persistence of post-traumatic stress related to early life traumas and empirical evaluation of psychotherapeutic treatments for older adults with PTSD.
Abstract:
BACKGROUND: Lumbar disc herniation has a prevalence of up to 58% in the athletic population. Lumbar discectomy is a common surgical procedure to alleviate pain and disability in athletes. We systematically reviewed the current clinical evidence regarding athlete return to sport (RTS) following lumbar discectomy compared to conservative treatment. METHODS: A computer-assisted literature search of MEDLINE, CINAHL, Web of Science, PEDro, OVID and PubMed databases (from inception to August 2015) was conducted using keywords related to lumbar disc herniation and surgery. The design of this systematic review was developed using the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Methodological quality of individual studies was assessed using the Downs and Black scale (0-16 points). RESULTS: The search strategy revealed 14 articles. Downs and Black quality scores were generally low, with no articles in this review earning a high-quality rating, only 5 articles earning a moderate-quality rating and 9 of the 14 articles earning a low-quality rating. The pooled RTS for surgical intervention of all included studies was 81% (95% CI 76% to 86%) with significant heterogeneity (I²=63.4%, p<0.001), although pooled estimates report only 59% RTS at the same level. Pooled analysis showed no difference in RTS rate between surgical (84% (95% CI 77% to 90%)) and conservative intervention (76% (95% CI 56% to 92%); p=0.33). CONCLUSIONS: Studies comparing surgical versus conservative treatment found no significant difference between groups regarding RTS. Not all athletes who RTS return at the level of participation at which they performed prior to surgery. Owing to the heterogeneity and low methodological quality of included studies, rates of RTS cannot be accurately determined.
Abstract:
The Bioinformatics Open Source Conference (BOSC) is organized by the Open Bioinformatics Foundation (OBF), a nonprofit group dedicated to promoting the practice and philosophy of open source software development and open science within the biological research community. Since its inception in 2000, BOSC has provided bioinformatics developers with a forum for communicating the results of their latest efforts to the wider research community. BOSC offers a focused environment for developers and users to interact and share ideas about standards; software development practices; practical techniques for solving bioinformatics problems; and approaches that promote open science and sharing of data, results, and software. BOSC is run as a two-day special interest group (SIG) before the annual Intelligent Systems in Molecular Biology (ISMB) conference. BOSC 2015 took place in Dublin, Ireland, and was attended by over 125 people, about half of whom were first-time attendees. Session topics included "Data Science", "Standards and Interoperability", "Open Science and Reproducibility", "Translational Bioinformatics", "Visualization", and "Bioinformatics Open Source Project Updates". In addition to two keynote talks and dozens of shorter talks chosen from submitted abstracts, BOSC 2015 included a panel, titled "Open Source, Open Door: Increasing Diversity in the Bioinformatics Open Source Community," that provided an opportunity for open discussion about ways to increase the diversity of participants in BOSC in particular, and in open source bioinformatics in general. The complete program of BOSC 2015 is available online at http://www.open-bio.org/wiki/BOSC_2015_Schedule.
Abstract:
This chapter presents a model averaging approach in the M-open setting using sample re-use methods to approximate the predictive distribution of future observations. It first reviews the standard M-closed Bayesian Model Averaging approach and decision-theoretic methods for producing inferences and decisions. It then reviews model selection from the M-complete and M-open perspectives, before formulating a Bayesian solution to model averaging in the M-open perspective. It constructs optimal weights for MOMA (M-open Model Averaging) using a decision-theoretic framework, where models are treated as part of the ‘action space’ rather than unknown states of nature. Using ‘incompatible’ retrospective and prospective models for data from a case-control study, the chapter demonstrates that MOMA gives better predictive accuracy than the proxy models. It concludes with open questions and future directions.
Abstract:
Clinical Trial
Abstract:
"With the continuing spread of social media and internet-based teaching, eLearning content is becoming increasingly important. Open Educational Resources (OER) also belong in the context of eLearning and internet-based teaching. OER are digital learning and teaching materials that are freely accessible to teachers and students and may also be freely distributed. [...] In order to exchange, find, and obtain OER, and to make them accessible on a broad basis, in particular findable and usable via search engines, metadata are required for the respective materials. [...] To examine the question of what action and research are needed on the topic of metadata for Open Educational Resources, an overview of the currently existing national and international metadata standards for eLearning objects is first given. [...] From this, recommendations emerge as to which metadata standards may be suitable for further use and promotion. The possibilities of creating a new metadata standard as well as a common portal for OER are also discussed, with particular attention to the problems to be expected and the associated requirements." (DIPF/Orig.)
Abstract:
The paper considers the open shop scheduling problem to minimize the makespan, provided that one of the machines has to process the jobs according to a given sequence. We show that in the preemptive case the problem is polynomially solvable for an arbitrary number of machines. If preemption is not allowed, the problem is NP-hard in the strong sense if the number of machines is variable, and is NP-hard in the ordinary sense in the case of two machines. For the latter case we give a heuristic algorithm that runs in linear time and produces a schedule with the makespan that is at most 5/4 times the optimal value. We also show that the two-machine problem in the nonpreemptive case is solvable in pseudopolynomial time by a dynamic programming algorithm, and that the algorithm can be converted into a fully polynomial approximation scheme. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 705–731, 1998
Abstract:
The paper presents an improved version of the greedy open shop approximation algorithm with pre-ordering of jobs. It is shown that the algorithm compares favorably with the greedy algorithm with no pre-ordering by reducing either its absolute or relative error. In the case of three machines, the new algorithm creates a schedule with the makespan that is at most 3/2 times the optimal value.
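To make the idea of a greedy open shop heuristic with pre-ordering concrete, here is a minimal Python sketch of a greedy dense-schedule heuristic in which jobs are pre-ordered by total processing time, longest first. The instance data, the tie-breaking rule, and the specific pre-ordering key are illustrative assumptions, not the exact algorithm analyzed in the paper.

```python
# Sketch of a greedy dense-schedule open shop heuristic with LPT-style
# pre-ordering of jobs (illustrative; not the paper's exact algorithm).

def greedy_open_shop(p):
    """p[j][m] = processing time of job j on machine m. Returns the makespan
    of the greedy schedule: repeatedly start, as early as possible, an
    operation whose job and machine are both free."""
    n, m = len(p), len(p[0])
    machine_free = [0] * m                      # when each machine next idles
    job_free = [0] * n                          # when each job next idles
    remaining = [set(range(m)) for _ in range(n)]

    # Pre-ordering: jobs with the largest total work come first (LPT).
    order = sorted(range(n), key=lambda j: -sum(p[j]))

    while any(remaining):
        # Choose the operation with the earliest feasible start time,
        # breaking ties by the pre-order rank.
        start, _, j, k = min(
            (max(machine_free[k], job_free[j]), rank, j, k)
            for rank, j in enumerate(order)
            for k in remaining[j]
        )
        finish = start + p[j][k]
        machine_free[k] = job_free[j] = finish
        remaining[j].remove(k)

    return max(job_free)
```

The resulting makespan can be compared against the trivial lower bound, the larger of the heaviest machine load and the longest job, to estimate the heuristic's relative error on a given instance.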
Abstract:
This paper considers the problem of minimizing the schedule length of a two-machine shop in which not only can a job be assigned either of the two possible routes, but also the processing times depend on the chosen route. This problem is known to be NP-hard. We describe a simple approximation algorithm that guarantees a worst-case performance ratio of 2. We also present some modifications to this algorithm that improve its performance and guarantee a worst-case performance ratio of 3/2.
Abstract:
The paper considers the three-machine open shop scheduling problem to minimize the makespan. It is assumed that each job consists of at most two operations, one of which is to be processed on the bottleneck machine, the same for all jobs. A new lower bound on the optimal makespan is derived, and a linear-time algorithm for finding an optimal non-preemptive schedule is presented.
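The classical lower bound against which such results are measured is easy to compute: no open shop schedule can be shorter than the heaviest machine load or the longest job. The Python sketch below computes this classical bound on a hypothetical instance in which machine 0 plays the role of the bottleneck machine; it does not reproduce the sharper new bound derived in the paper.

```python
# Classical makespan lower bound for an open shop: the larger of the
# heaviest machine load and the longest job. (Illustrative; the paper's
# new three-machine bound is sharper and is not reproduced here.)

def classical_lower_bound(p):
    """p[j][m] = processing time of job j on machine m
    (0 if the job has no operation on that machine)."""
    machine_loads = [sum(row[k] for row in p) for k in range(len(p[0]))]
    job_lengths = [sum(row) for row in p]
    return max(max(machine_loads), max(job_lengths))
```
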
Abstract:
Little attention has been given to the relation between fever and the severity of bronchiolitis. Therefore, the relation between fever and the clinical course of 90 infants (59 boys, 31 girls) hospitalised during one season with bronchiolitis was studied prospectively. Fever (defined as a single recording > 38.0°C or two successive recordings > 37.8°C) was present in 28 infants. These infants were older (mean age, 5.3 v 4.0 months), had a longer mean hospital stay (4.2 v 2.7 days), and a more severe clinical course (71.0% v 29.0%) than those infants without fever. Radiological abnormalities (collapse/consolidation) were found in 60.7% of the febrile group compared with 14.8% of the afebrile infants. These results suggest that monitoring of body temperature is important in bronchiolitis and that fever is likely to be associated with a more severe clinical course and radiological abnormalities.
Abstract:
This paper studies the problem of scheduling jobs in a two-machine open shop to minimize the makespan. Jobs are grouped into batches and are processed without preemption. A batch setup time on each machine is required before the first job is processed, and when a machine switches from processing a job in some batch to a job of another batch. For this NP-hard problem, we propose a linear-time heuristic algorithm that creates a group technology schedule, in which no batch is split into sub-batches. We demonstrate that our heuristic is a 5/4-approximation algorithm. Moreover, we show that no group technology algorithm can guarantee a worst-case performance ratio less than 5/4.