8 results for Safe to Learn (Project : Ill.).
at Duke University
Abstract:
© Comer, Clark, Canelas. This study aimed to evaluate how peer-to-peer interactions through writing impact student learning in introductory-level massive open online courses (MOOCs) across disciplines. This article presents the results of a qualitative coding analysis of peer-to-peer interactions in two introductory-level MOOCs: English Composition I: Achieving Expertise and Introduction to Chemistry. Results indicate that peer-to-peer interactions in writing, through the forums and through peer assessment, enhance learner understanding, link to course learning objectives, and generally contribute positively to the learning environment. Moreover, because forum interactions and peer review occur in written form, our research contributes to open distance learning (ODL) scholarship by highlighting the importance of writing to learn as a significant pedagogical practice that should be encouraged more in MOOCs across disciplines.
Abstract:
Duke Medicine utilized interprofessional case conferences (ICCs) from 2008 to 2012 with the objective of modeling and facilitating the development of teamwork skills among diverse health profession students, including physical therapy, physician assistant, medical doctor, and nursing students. The purpose of this publication is to describe the operational process used to develop and implement the ICCs and to measure their success in order to shape future work. The ICCs were offered to develop skills and attitudes essential for participation in healthcare teams. Faculty from different professions facilitated students in conducting a comprehensive historical assessment of a standardized patient (SP), determining pertinent physical and lab assessments to undertake, and developing and sharing a comprehensive management plan. Cases included patient problems that were authentic and relevant to each professional student in attendance. The main barriers to implementation are outlined, and the focus on the process of working together is highlighted. Evaluation showed high satisfaction rates among participants, and the outcomes from these experiences are presented. The limitations of these results are discussed and recommendations for future assessment are emphasized. The ICCs demonstrated that students will come together voluntarily to learn in teams, even at a research-focused institution, and that they perceive benefit from the collaborative exercise.
Abstract:
PURPOSE: The readiness assurance process (RAP) of team-based learning (TBL) is an important element that ensures that students come prepared to learn. However, the RAP can consume a significant amount of class time that could otherwise be used for application exercises. The authors administered the TBL-associated RAP in class or individual readiness assurance tests (iRATs) at home to compare medical student performance and learning preference for physiology content. METHODS: Using a cross-over study design, first-year medical student TBL teams were divided into two groups. One group was administered iRATs and group readiness assurance tests (gRATs) consisting of physiology questions during scheduled class time. The other group was administered the same iRAT questions at home and did not complete a gRAT. To compare the effectiveness of the two administration methods, both groups completed the same 12-question physiology assessment during dedicated class time. Four weeks later, the entire process was repeated, with each group administered the RAP using the opposite method. RESULTS: Performance on the physiology assessment after at-home administration of the iRAT was equivalent to performance after traditional in-class administration of the RAP. In addition, a majority of students preferred the at-home method of administration and reported that it was more effective in helping them learn course content. CONCLUSION: The at-home administration of the iRAT proved effective. It is a promising alternative to conventional iRATs and gRATs, with the goal of preserving valuable in-class time for TBL application exercises.
Abstract:
Secure Access For Everyone (SAFE) is an integrated system for managing trust
using a logic-based declarative language. Logical trust systems authorize each
request by constructing a proof from a context---a set of authenticated logic
statements representing credentials and policies issued by various principals
in a networked system. A key barrier to practical use of logical trust systems
is the problem of managing proof contexts: identifying, validating, and
assembling the credentials and policies that are relevant to each trust
decision.
SAFE addresses this challenge by (i) proposing a distributed authenticated data
repository for storing the credentials and policies; and (ii) introducing a
programmable credential discovery and assembly layer that generates an
appropriately tailored context for a given request. The authenticated data
repository is built upon a scalable key-value store with its contents named by
secure identifiers and certified by the issuing principal. The SAFE language
provides scripting primitives to generate and organize logic sets representing
credentials and policies, materialize the logic sets as certificates, and link
them to reflect delegation patterns in the application. The authorizer fetches
the logic sets on demand, then validates and caches them locally for further
use. Upon each request, the authorizer constructs the tailored proof context
and provides it to the SAFE inference engine for certified validation.
Delegation-driven credential linking with certified data distribution provides
flexible and dynamic policy control enabling security and trust infrastructure
to be agile, while addressing the perennial problems related to today's
certificate infrastructure: automated credential discovery, scalable
revocation, and issuing credentials without relying on centralized authority.
We envision SAFE as a new foundation for building secure network systems. We
used SAFE to build secure services based on case studies drawn from practice:
(i) a secure name service resolver similar to DNS that resolves a name across
multi-domain federated systems; (ii) a secure proxy shim to delegate access
control decisions in a key-value store; (iii) an authorization module for a
networked infrastructure-as-a-service system with a federated trust structure
(NSF GENI initiative); and (iv) a secure cooperative data analytics service
that adheres to individual secrecy constraints while disclosing the data. We
present empirical evaluation based on these case studies and demonstrate that
SAFE supports a wide range of applications with low overhead.
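The proof-context assembly described above can be pictured as a transitive walk over linked logic sets. The following is a minimal, hedged Python sketch of that idea; the class and function names (LogicSet, publish, assemble_context) and the in-memory repository are invented for illustration and are not SAFE's actual API or language:

```python
class LogicSet:
    """A named set of logic statements, optionally linking to other sets."""
    def __init__(self, token, statements, links=()):
        self.token = token          # secure identifier of this set (illustrative)
        self.statements = list(statements)
        self.links = list(links)    # tokens of delegated logic sets

# Toy stand-in for the authenticated data repository: token -> LogicSet.
repo = {}

def publish(token, statements, links=()):
    repo[token] = LogicSet(token, statements, links)

def assemble_context(root_token):
    """Fetch the root set and transitively follow delegation links,
    collecting all statements into one tailored proof context."""
    context, seen, stack = [], set(), [root_token]
    while stack:
        tok = stack.pop()
        if tok in seen:
            continue
        seen.add(tok)
        logic_set = repo[tok]
        context.extend(logic_set.statements)
        stack.extend(logic_set.links)
    return context

# Example: a root policy delegates to a principal's credential set.
publish("root/policy", ["mayAccess(X) :- delegated(X)"], links=["alice/creds"])
publish("alice/creds", ["delegated(alice)"])
print(assemble_context("root/policy"))
```

In the real system the repository is a scalable key-value store of certified logic sets, validation checks issuer signatures, and the assembled context is handed to the logic inference engine; this sketch only shows the link-following shape of context assembly.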
Abstract:
Insecticide-treated bed nets and indoor residual spraying are the most widely used vector control methods in Africa. The World Health Organization now recommends four classes of insecticides for use against adult mosquitoes in public health programs. Of these four classes, pyrethroids have become the insecticides of choice for treating mosquito bed nets and for indoor spraying to prevent malaria transmission. Pyrethroids are used not only in malaria control but also in agriculture to protect against pest insects. This concurrent use of pyrethroids in vector control and in crop protection may exert selection pressure on mosquito larval populations and induce resistance to this class of insecticides. The main objective of our study was to explore the role of agricultural chemicals in the response of mosquitoes to pyrethroids in an area of high malaria transmission.
We used a cross-sectional study design. This was a two-step study involving both mosquitoes and human subjects. We collected larvae growing in breeding sites affected by different agricultural practices, used purposive sampling to identify active mosquito breeding sites, and then interviewed households adjacent to those breeding sites to learn about agricultural practices that might influence the response of mosquitoes to pyrethroids. We also performed a secondary analysis of larval data from a previous case-control study by Obala et al.
Abstract:
Spectral CT using a photon counting x-ray detector (PCXD) shows great potential for measuring material composition based on energy-dependent x-ray attenuation. Spectral CT is especially suited for imaging with K-edge contrast agents to address the otherwise limited contrast in soft tissues. We have developed a micro-CT system based on a PCXD. This system enables full-spectrum CT, in which the energy thresholds of the PCXD are swept to sample the full energy spectrum for each detector element and projection angle. Measurements provided by the PCXD, however, are distorted due to undesirable physical effects in the detector and are very noisy due to photon starvation. In this work, we propose two methods based on machine learning to address the spectral distortion issue and to improve material decomposition. The first approach is to model the distortions using an artificial neural network (ANN) and compensate for them in a statistical reconstruction. The second approach is to directly correct the distortion in the projections. Both techniques can be carried out as a calibration process in which the neural network is trained on data from 3-D printed phantoms to learn the distortion model or the correction model of the spectral distortion. This replaces the need for the synchrotron measurements required by conventional techniques to derive the distortion model parametrically, which can be costly and time consuming. The results demonstrate the experimental feasibility and potential advantages of ANN-based distortion modeling and correction for more accurate K-edge imaging with a PCXD. Given the computational efficiency with which the ANN can be applied to projection data, the proposed scheme can be readily integrated into existing CT reconstruction pipelines.
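To make the learned-calibration idea concrete, here is a hedged numerical sketch: a distorted detector response is simulated as bin-to-bin crosstalk plus noise, and a linear least-squares map fitted on "phantom" data stands in for the trained ANN. All shapes, values, and the distortion model below are illustrative assumptions, not the actual detector physics or network:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins = 8            # energy bins swept by the photon-counting detector

# Synthetic "true" phantom spectra, known from the calibration phantoms.
true_spectra = rng.uniform(1.0, 10.0, size=(200, n_bins))

# Synthetic distortion: bin-to-bin crosstalk plus noise, loosely mimicking
# effects such as charge sharing in a real PCXD (illustrative only).
crosstalk = (np.eye(n_bins)
             + 0.15 * np.eye(n_bins, k=1)
             + 0.10 * np.eye(n_bins, k=-1))
measured = true_spectra @ crosstalk.T + rng.normal(0, 0.05, true_spectra.shape)

# "Training": fit a linear correction measured -> true (ANN stand-in).
W, *_ = np.linalg.lstsq(measured, true_spectra, rcond=None)

# Apply the learned correction to new distorted measurements.
test_true = rng.uniform(1.0, 10.0, size=(50, n_bins))
test_meas = test_true @ crosstalk.T + rng.normal(0, 0.05, test_true.shape)
corrected = test_meas @ W

err_before = np.abs(test_meas - test_true).mean()
err_after = np.abs(corrected - test_true).mean()
print(err_before, err_after)   # the learned correction should shrink the error
```

The thesis uses a nonlinear neural network rather than this linear map, precisely because real spectral distortions are not linear; the sketch only shows the calibrate-on-phantoms, then correct-projections workflow.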
Abstract:
Distributed computing frameworks belong to a class of programming models that allow developers to
launch workloads on large clusters of machines. Due to the dramatic increase in the volume of
data gathered by ubiquitous computing devices, data analytic workloads have become a common
case among distributed computing applications, making data science an entire field of
computer science. We argue that a data scientist's concern lies in three main components: a dataset,
a sequence of operations they wish to apply to this dataset, and constraints related to their
work (performance, QoS, budget, etc.). However, it is extremely difficult, without domain
expertise, to perform data science. One needs to select the right amount and type of resources,
pick a framework, and configure it. Moreover, users often run their applications in shared
environments governed by schedulers that expect them to specify their resource needs precisely.
Owing to the distributed and concurrent nature of these frameworks, monitoring and profiling
are hard, high-dimensional problems that keep users from making the right configuration
choices and from determining the right amount of resources they need. Paradoxically, the
system gathers a large amount of monitoring data at runtime, which remains unused.
In the ideal abstraction we envision for data scientists, the system is adaptive, able to exploit
monitoring data to learn about workloads, and able to turn user requests into a tailored execution
context. In this work, we study different techniques that have been used to make steps toward
such system awareness, and we explore a new way to do so by applying machine learning
techniques to recommend a specific subset of system configurations for Apache Spark applications.
Furthermore, we present an in-depth study of Apache Spark executor configuration, which highlights
the complexity of choosing the best one for a given workload.
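As a toy illustration of the recommendation idea, features of past runs can be matched against a new workload so that a known-good configuration is reused. Every feature, configuration, and value below is invented, and a simple nearest-neighbour lookup stands in for the machine learning techniques the work actually studies:

```python
import math

# Hypothetical history of past Spark runs:
# (workload features, best-known executor config).
# Features: (input_gb, shuffle_ratio); config: (executor_cores, executor_mem_gb).
history = [
    ((10.0, 0.1), (2, 4)),     # small scan-heavy job
    ((200.0, 0.8), (4, 16)),   # large shuffle-heavy job
    ((50.0, 0.5), (4, 8)),     # medium mixed job
]

def recommend(features):
    """Return the executor config of the most similar past workload."""
    _, config = min(history, key=lambda h: math.dist(h[0], features))
    return config

print(recommend((180.0, 0.7)))  # closest to the 200 GB shuffle-heavy run
```

In practice the monitoring data collected at runtime is far richer than two hand-picked features, which is exactly why the thesis turns to machine learning models rather than a lookup like this.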
Abstract:
Current state-of-the-art techniques for landmine detection in ground-penetrating radar (GPR) use statistical methods to identify characteristics of a landmine response. This research makes use of 2-D slices of data in which subsurface landmine responses have hyperbolic shapes. Various methods from the field of visual image processing are adapted to the 2-D GPR data, producing superior landmine detection results. This research goes on to develop a physics-based GPR augmentation method motivated by current advances in visual object detection. This GPR-specific augmentation is used to mitigate issues caused by insufficient training sets. This work shows that augmentation improves detection performance under training conditions that are normally very difficult. Finally, this work introduces the use of convolutional neural networks as a method to learn feature extraction parameters. These learned convolutional features outperform hand-designed features in GPR detection tasks. This work presents a number of methods, both borrowed from and motivated by the substantial body of work in visual image processing. The methods developed and presented here show an improvement in overall detection performance and introduce a way to improve the robustness of statistical classification.
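The matched-filtering intuition behind convolutional features on GPR slices can be shown with a toy example: a small filter slid over a 2-D scan responds most strongly where the hyperbolic signature appears. In the thesis the filter weights are learned by a CNN; here a hand-made hyperbola template plays that role on synthetic data, and all shapes and values are invented for illustration:

```python
import numpy as np

def make_hyperbola(h, w, apex_row, apex_col, a=2.0):
    """Draw a crude hyperbolic curve into an h-by-w array of zeros."""
    img = np.zeros((h, w))
    for col in range(w):
        row = apex_row + int(round(((col - apex_col) ** 2) / (a * w)))
        if 0 <= row < h:
            img[row, col] = 1.0
    return img

# A small "filter" and a larger scan with the signature planted in it.
template = make_hyperbola(9, 9, apex_row=1, apex_col=4)
scan = np.zeros((32, 32))
scan[10:19, 12:21] = template          # plant the signature at a known offset

def conv2d_valid(x, k):
    """Naive 'valid' 2-D cross-correlation (no padding, stride 1)."""
    kh, kw = k.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

response = conv2d_valid(scan, template)
peak = np.unravel_index(np.argmax(response), response.shape)
print(peak)  # peaks at the planted offset (row 10, col 12)
```

A learned CNN filter differs in that its weights are fitted to training data rather than drawn by hand, which is what lets the learned features outperform hand-designed ones on real GPR responses.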