699 results for Learning center design
Abstract:
Proceedings of the Advances in Teaching & Learning Day Regional Conference held at The University of Texas Health Science Center at Houston in 2005.
Abstract:
Introduction: According to the American Cancer Society, each day, more than 4,000 teens try cigarettes for the first time, and another 2,000 become daily smokers. One-half of these teens eventually will die from a smoking-related disease. [See PDF for complete abstract]
Abstract:
Hypertutorials optimize five features: presentation, learner control, practice, feedback, and elaborative learning resources. Previous research showed graduate students significantly and overwhelmingly preferred Web-based hypertutorials to conventional "Book-on-the-Web" statistics or research design lessons. The current report shows that the source of the hypertutorials' superiority in student evaluations of instruction lies in their hypertutorial features. Randomized comparisons between the two methodologies were conducted in two successive iterations of a graduate-level health informatics research design and evaluation course. The two versions contained the same text and graphics, but differed in the presence or absence of hypertutorial features: elaborative learning resources, practice, feedback, and amount of learner control. Students gave high evaluations to both Web-based methodologies, but consistently rated the hypertutorial lessons as superior. Significant differences were localized in the hypertutorial subscale that measured student responses to hypertutorial features.
Abstract:
A wealth of genetic associations for cardiovascular and metabolic phenotypes in humans has been accumulating over the last decade, in particular a large number of loci derived from recent genome wide association studies (GWAS). True complex disease-associated loci often exert modest effects, so their delineation currently requires integration of diverse phenotypic data from large studies to ensure robust meta-analyses. We have designed a gene-centric 50 K single nucleotide polymorphism (SNP) array to assess potentially relevant loci across a range of cardiovascular, metabolic and inflammatory syndromes. The array utilizes a "cosmopolitan" tagging approach to capture the genetic diversity across approximately 2,000 loci in populations represented in the HapMap and SeattleSNPs projects. The array content is informed by GWAS of vascular and inflammatory disease, expression quantitative trait loci implicated in atherosclerosis, pathway based approaches and comprehensive literature searching. The custom flexibility of the array platform facilitated interrogation of loci at differing stringencies, according to a gene prioritization strategy that allows saturation of high priority loci with a greater density of markers than the existing GWAS tools, particularly in African HapMap samples. We also demonstrate that the IBC array can be used to complement GWAS, increasing coverage in high priority CVD-related loci across all major HapMap populations. DNA from over 200,000 extensively phenotyped individuals will be genotyped with this array with a significant portion of the generated data being released into the academic domain facilitating in silico replication attempts, analyses of rare variants and cross-cohort meta-analyses in diverse populations. These datasets will also facilitate more robust secondary analyses, such as explorations with alternative genetic models, epistasis and gene-environment interactions.
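The locus-tagging idea above can be illustrated with a toy greedy tag-SNP selection: pick a minimal set of tag SNPs such that every SNP in a locus is in strong LD (r² at or above a threshold) with at least one tag. This is a generic set-cover heuristic, not the authors' actual algorithm, and the SNP labels and r² values below are invented:

```python
# Greedy tag-SNP selection sketch (set-cover heuristic; illustrative only).
def greedy_tags(snps, r2, threshold=0.8):
    """Repeatedly pick the SNP that tags the most still-uncovered SNPs."""
    uncovered = set(snps)
    tags = []
    while uncovered:
        best = max(snps, key=lambda s: sum(1 for t in uncovered
                                           if r2[s][t] >= threshold))
        tags.append(best)
        uncovered -= {t for t in uncovered if r2[best][t] >= threshold}
    return tags

# Invented pairwise r^2 values: s1/s2 form one LD block, s3/s4 another.
r2 = {
    "s1": {"s1": 1.0, "s2": 0.9, "s3": 0.1, "s4": 0.1},
    "s2": {"s1": 0.9, "s2": 1.0, "s3": 0.1, "s4": 0.1},
    "s3": {"s1": 0.1, "s2": 0.1, "s3": 1.0, "s4": 0.85},
    "s4": {"s1": 0.1, "s2": 0.1, "s3": 0.85, "s4": 1.0},
}
tags = greedy_tags(list(r2), r2)  # one tag per LD block suffices
```

Lowering the threshold for low-priority loci, or raising it for high-priority ones, changes how many tags each locus needs, which is the "differing stringencies" knob described in the abstract.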
Abstract:
Putting it all together: technology design drivers to move your classroom from your campus to the world. This workshop will introduce methodologies available for moving the current “classroom” mindset to a learning environment without boundaries. Participants will explore the design drivers necessary to create the technology supports for transcendent learning environments.
Abstract:
Withdrawal reflexes of the mollusk Aplysia exhibit sensitization, a simple form of long-term memory (LTM). Sensitization is due, in part, to long-term facilitation (LTF) of sensorimotor neuron synapses. LTF is induced by the modulatory actions of serotonin (5-HT). Pettigrew et al. developed a computational model of the nonlinear intracellular signaling and gene network that underlies the induction of 5-HT-induced LTF. The model simulated empirical observations that repeated applications of 5-HT induce persistent activation of protein kinase A (PKA) and that this persistent activation requires a suprathreshold exposure of 5-HT. This study extends the analysis of the Pettigrew model by applying bifurcation analysis, singularity theory, and numerical simulation. Using singularity theory, classification diagrams of parameter space were constructed, identifying regions with qualitatively different steady-state behaviors. The graphical representation of these regions illustrates their robustness to changes in model parameters. Because persistent PKA activity correlates with Aplysia LTM, the analysis focuses on a positive feedback loop in the model that tends to maintain PKA activity. In this loop, PKA phosphorylates a transcription factor (TF-1), thereby increasing the expression of a ubiquitin hydrolase (Ap-Uch). Ap-Uch then acts to increase PKA activity, closing the loop. This positive feedback loop manifests multiple, coexisting steady states, or multiplicity, which provides a mechanism for a bistable switch in PKA activity. After the removal of 5-HT, the PKA activity either returns to its basal level (reversible switch) or remains at a high level (irreversible switch). Such an irreversible switch might be a mechanism that contributes to the persistence of LTM. The classification diagrams also identify parameters and processes that might be manipulated, perhaps pharmacologically, to enhance the induction of memory. Rational drug design, to affect complex processes such as memory formation, can benefit from this type of analysis.
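The bistable-switch behavior described above can be illustrated with a one-variable toy model, not the Pettigrew model itself: a quantity P (standing in for PKA activity) amplifies its own production through a saturating positive feedback term, so a suprathreshold stimulus flips P to a high steady state that persists after the stimulus is removed, while a subthreshold stimulus does not. All parameter values are illustrative assumptions:

```python
# Toy bistable switch (illustrative parameters; NOT the Pettigrew model).
# P self-amplifies via a Hill-type feedback term, giving two stable steady
# states (low ~0.07, high ~0.74) separated by an unstable threshold near 0.25.

def simulate(stim_amp, stim_dur, t_end=15.0, dt=0.01):
    """Euler-integrate dP/dt = basal + stim + vmax*P^2/(K^2 + P^2) - deg*P."""
    basal, vmax, K, deg = 0.05, 1.0, 0.5, 1.0
    P, t = 0.068, 0.0                      # start at the low steady state
    while t < t_end:
        stim = stim_amp if t < stim_dur else 0.0   # transient 5-HT-like input
        P += (basal + stim + vmax * P**2 / (K**2 + P**2) - deg * P) * dt
        t += dt
    return P

weak = simulate(stim_amp=0.1, stim_dur=0.5)    # subthreshold: reversible
strong = simulate(stim_amp=0.5, stim_dur=3.0)  # suprathreshold: irreversible
```

With the weak pulse, P relaxes back to its basal level after the stimulus ends; with the strong pulse it settles at the high state long after the stimulus is gone, the irreversible-switch signature described above.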
Abstract:
An understanding of interruptions in healthcare is important for the design, implementation, and evaluation of health information systems and for the management of clinical workflow and medical errors. The purpose of this study is to identify and classify the types of interruptions experienced by Emergency Department (ED) nurses working in a Level One Trauma Center. This was an observational field study of Registered Nurses (RNs) employed in a Level One Trauma Center using the shadowing method. Results of the study indicate that nurses were both recipients and initiators of interruptions. Telephones, pagers, and face-to-face conversations were the most common sources of interruptions. Unlike other industries, the healthcare community has not systematically studied interruptions in clinical settings to determine and weigh the necessity of interruptions against their sometimes negative results, such as medical errors, decreased efficiency, and increased costs. The study presented here is an initial step toward understanding the nature, causes, and effects of interruptions, thereby improving both the quality of healthcare and patient safety. We developed an ethnographic data collection technique and a data coding method for capturing and analyzing interruptions. The interruption data we collected are systematic, comprehensive, and close to exhaustive. They confirm the findings of earlier studies by other researchers that interruptions are frequent events in critical care and other healthcare settings. We are currently using these data to analyze the workflow dynamics of ED clinicians, to identify the bottlenecks of information flow, and to develop interventions to improve the efficiency of emergency care through the management of interruptions.
Abstract:
Parents of premature infants often receive infant cardiopulmonary resuscitation (CPR) training prior to discharge from the hospital, but one study showed that 27.5% of parents could not demonstrate adequate CPR skills after completing an instructor-led class. We hypothesized that parents who viewed an instructional video on infant CPR before attending the class would perform better on a standardized skills test than parents who attended the class with no preparation. Parents randomized to the intervention (video) group viewed the video within 48 hours of the CPR class. Parents in the control group attended the class with no special preparation. All parents completed the CPR skills checklist test, usually within 7 days after class and before the infant's hospital discharge. The test rated subjects' skills in the areas of assessment, ventilation, and chest compressions; each section was rated as good, fair, or fail. In this pass/fail test, students had to be rated good or fair on all three sections to pass. All 10 subjects in the video group passed the test versus only 9 of 13 in the control group, but this difference was not significant (P = 0.08). However, 8 of 10 (80%) subjects in the video group were rated as good on all three sections, versus only 3 of 13 (18.7%) in the control group, and this was a significant difference (P = 0.012). We conclude that preparation of students using an instructional video prior to infant CPR class is associated with improvement in skills performance as measured by a standardized skills test. Video preparation is relatively inexpensive, eliminates the barrier of reading ability for preparation, and can be done at the convenience of the parent.
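The reported group difference (8 of 10 versus 3 of 13 rated good on all three sections) is consistent with a two-sided Fisher exact test, sketched here from first principles with the standard hypergeometric definition; the abstract does not state which test produced P = 0.012:

```python
# Two-sided Fisher exact test for a 2x2 table, using only the stdlib.
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """P-value for [[a, b], [c, d]]: sum the probabilities of all tables
    (with the same margins) that are as likely or less likely than observed."""
    row1, col1, n = a + b, a + c, a + b + c + d
    total = comb(n, row1)
    def p_table(k):                       # hypergeometric P(first cell = k)
        return comb(col1, k) * comb(n - col1, row1 - k) / total
    p_obs = p_table(a)
    lo, hi = max(0, row1 - (n - col1)), min(row1, col1)
    return sum(p_table(k) for k in range(lo, hi + 1)
               if p_table(k) <= p_obs + 1e-12)

# video group: 8 good / 2 not good; control group: 3 good / 10 not good
p = fisher_exact_two_sided(8, 2, 3, 10)
```

The computed p-value is approximately 0.012, matching the significance reported for the "good on all three sections" comparison.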
Abstract:
OBJECTIVE: To determine whether algorithms developed for the World Wide Web can be applied to the biomedical literature in order to identify articles that are important as well as relevant. DESIGN AND MEASUREMENTS: A direct comparison of eight algorithms: simple PubMed queries, clinical queries (sensitive and specific versions), vector cosine comparison, citation count, journal impact factor, PageRank, and machine learning based on polynomial support vector machines. The objective was to prioritize important articles, defined as being included in a pre-existing bibliography of important literature in surgical oncology. RESULTS: Citation-based algorithms were more effective than noncitation-based algorithms at identifying important articles. The most effective strategies were simple citation count and PageRank, which on average identified over six important articles in the first 100 results, compared to 0.85 for the best noncitation-based algorithm (p < 0.001). The authors saw similar differences between citation-based and noncitation-based algorithms at 10, 20, 50, 200, 500, and 1,000 results (p < 0.001). Citation lag affects the performance of PageRank more than that of simple citation count. However, in spite of citation lag, citation-based algorithms remain more effective than noncitation-based algorithms. CONCLUSION: Algorithms that have proved successful on the World Wide Web can be applied to biomedical information retrieval. Citation-based algorithms can help identify important articles within large sets of relevant results. Further studies are needed to determine whether citation-based algorithms can effectively meet actual user information needs.
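A minimal sketch of the PageRank idea applied to a citation graph, with each article pointing at the articles it cites. The toy graph and damping factor are illustrative, not the study's corpus or settings:

```python
# Power-iteration PageRank over a toy citation graph (illustrative only).
def pagerank(links, damping=0.85, iters=100):
    """`links` maps each node to the list of nodes it cites."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1 - damping) / n for u in nodes}
        for u in nodes:
            out = links[u]
            if out:                               # pass rank along citations
                share = damping * rank[u] / len(out)
                for v in out:
                    new[v] += share
            else:                                 # dangling node: spread evenly
                for v in nodes:
                    new[v] += damping * rank[u] / n
        rank = new
    return rank

# A and B cite C; C cites D; D cites nothing.
cites = {"A": ["C"], "B": ["C"], "C": ["D"], "D": []}
r = pagerank(cites)
```

Note the contrast with raw citation count: C has the most incoming citations, yet D outranks it because D is cited by an already highly ranked article, which is exactly the recursive importance signal that distinguishes PageRank from simple citation counting.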
Abstract:
The "EMR Tutorial" is designed to be a bilingual online physician education environment about electronic medical records. After iterative assessment and redesign, the tutorial was tested in two groups: U.S. physicians and Mexican medical students. Split-plot ANOVA revealed significantly different pre-test scores in the two groups, significant cognitive gains for the two groups overall, and no significant difference in the gains made by the two groups. Users rated the module positively on a satisfaction questionnaire.
Abstract:
Treatment for cancer often involves combination therapies used both in medical practice and clinical trials. Korn and Simon listed three reasons for the utility of combinations: 1) biochemical synergism, 2) differential susceptibility of tumor cells to different agents, and 3) higher achievable dose intensity by exploiting non-overlapping toxicities to the host. Even if the toxicity profile of each agent of a given combination is known, the toxicity profile of the agents used in combination must be established. Thus, caution is required when designing and evaluating trials with combination therapies. Traditional clinical design is based on the consideration of a single drug. However, a trial of drugs in combination requires a dose-selection procedure that is vastly different than that needed for a single-drug trial. When two drugs are combined in a phase I trial, an important trial objective is to determine the maximum tolerated dose (MTD). The MTD is defined as the dose level below the dose at which two of six patients experience drug-related dose-limiting toxicity (DLT). In phase I trials that combine two agents, more than one MTD generally exists, although all are rarely determined. For example, there may be an MTD that includes high doses of drug A with lower doses of drug B, another one for high doses of drug B with lower doses of drug A, and yet another for intermediate doses of both drugs administered together. With classic phase I trial designs, only one MTD is identified. Our new trial design allows identification of more than one MTD efficiently, within the context of a single protocol. The two drugs combined in our phase I trial are temsirolimus and bevacizumab. Bevacizumab is a monoclonal antibody targeting the vascular endothelial growth factor (VEGF) pathway which is fundamental for tumor growth and metastasis. 
One mechanism of tumor resistance to antiangiogenic therapy is upregulation of hypoxia-inducible factor 1α (HIF-1α), which mediates responses to hypoxic conditions. Temsirolimus has been shown to reduce levels of HIF-1α, making this an ideal combination therapy. Dr. Donald Berry developed a trial design schema for evaluating low, intermediate, and high dose levels of two drugs given in combination, as illustrated in a recently published paper in Biometrics entitled “A Parallel Phase I/II Clinical Trial Design for Combination Therapies.” His trial design utilized cytotoxic chemotherapy. We adapted this design schema by incorporating greater numbers of dose levels for each drug. Additional dose levels are being examined because experience from phase I trials shows that targeted agents, when given in combination, are often effective at dose levels lower than their FDA-approved doses. A total of thirteen dose levels, including representative high, intermediate, and low dose levels of temsirolimus combined with representative high, intermediate, and low dose levels of bevacizumab, will be evaluated. We hypothesize that our new trial design will facilitate identification of more than one MTD, if multiple MTDs exist, efficiently and within the context of a single protocol. Doses identified with this approach could allow a more personalized dose selection from among the MTDs obtained, based on a patient’s specific co-morbid conditions or anticipated toxicities.
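For contrast with the authors' multi-MTD design, the classic single-MTD escalation implied by the abstract's definition (the dose level below the dose at which two of six patients experience DLT) can be sketched as a 3+3 rule. This is the traditional design being improved upon, not the new protocol; the toxicity probabilities below are invented and degenerate so the outcome is deterministic:

```python
# Classic 3+3 dose-escalation sketch (illustrative; not the authors' design).
import random

def three_plus_three(dlt_probs, rng=random.Random(0)):
    """Return the MTD index, or None if even the lowest dose is too toxic."""
    for level, p in enumerate(dlt_probs):
        dlts = sum(rng.random() < p for _ in range(3))       # cohort of 3
        if dlts == 1:
            dlts += sum(rng.random() < p for _ in range(3))  # expand to 6
        if dlts >= 2:                     # 2+ DLTs: dose exceeded tolerance
            return level - 1 if level > 0 else None
        # 0/3 or 1/6 DLTs: escalate to the next dose level
    return len(dlt_probs) - 1             # never exceeded: highest tested dose

mtd = three_plus_three([0.0, 0.0, 1.0])   # degenerate probs -> MTD is level 1
```

Because this procedure walks a single escalation path, it terminates at one MTD; exploring the dose grid along several paths in parallel, as in the adapted Berry design, is what allows multiple MTDs (e.g., high drug A with low drug B, and vice versa) to be identified in one protocol.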
Abstract:
BACKGROUND: Robotic-assisted laparoscopic surgery (RALS) is evolving as an important surgical approach in the field of colorectal surgery. We aimed to evaluate the learning curve for RALS procedures involving resections of the rectum and rectosigmoid. METHODS: A series of 50 consecutive RALS procedures were performed between August 2008 and September 2009. Data were entered into a retrospective database and later abstracted for analysis. The surgical procedures included abdominoperineal resection (APR), anterior rectosigmoidectomy (AR), low anterior resection (LAR), and rectopexy (RP). Demographic data and intraoperative parameters including docking time (DT), surgeon console time (SCT), and total operative time (OT) were analyzed. The learning curve was evaluated using the cumulative sum (CUSUM) method. RESULTS: The procedures performed for 50 patients (54% male) included 25 AR (50%), 15 LAR (30%), 6 APR (12%), and 4 RP (8%). The mean age of the patients was 54.4 years, the mean BMI was 27.8 kg/m², and the median American Society of Anesthesiologists (ASA) classification was 2. The series had a mean DT of 14 min, a mean SCT of 115.1 min, and a mean OT of 246.1 min. The DT and SCT accounted for 6.3% and 46.8% of the OT, respectively. The SCT learning curve was analyzed. The CUSUM(SCT) learning curve was best modeled as a parabola: CUSUM(SCT) (in minutes) = 0.73 × (case number)² − 31.54 × (case number) − 107.72 (R = 0.93). The learning curve consisted of three unique phases: phase 1 (the initial 15 cases), phase 2 (the middle 10 cases), and phase 3 (the subsequent cases). Phase 1 represented the initial learning curve, which spanned 15 cases. The phase 2 plateau represented increased competence with the robotic technology. Phase 3 was achieved after 25 cases and represented the mastery phase in which more challenging cases were managed.
CONCLUSIONS: The three phases identified with CUSUM analysis of surgeon console time represented characteristic stages of the learning curve for robotic colorectal procedures. The data suggest that the learning phase was achieved after 15 to 25 cases.
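A CUSUM learning curve of the kind analyzed here is simply the running sum of each case's deviation from the series mean: it rises while cases take longer than average and falls once they become faster. A minimal sketch with invented console times:

```python
# CUSUM learning-curve sketch. The console times are made up, not study data.
def cusum(values):
    """Cumulative sum of deviations from the overall mean."""
    mean = sum(values) / len(values)
    out, running = [], 0.0
    for v in values:
        running += v - mean
        out.append(running)
    return out

# hypothetical surgeon console times (min): long early cases, faster later
sct = [150, 145, 140, 120, 110, 105, 100, 95]
curve = cusum(sct)
```

The curve peaks where case times cross the mean, so an initial rising phase, a plateau, and a falling phase map onto the three learning-curve phases the authors describe; by construction the final value returns to zero.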
Abstract:
The ability to represent time is an essential component of cognition but its neural basis is unknown. Although extensively studied both behaviorally and electrophysiologically, a general theoretical framework describing the elementary neural mechanisms used by the brain to learn temporal representations is lacking. It is commonly believed that the underlying cellular mechanisms reside in high order cortical regions but recent studies show sustained neural activity in primary sensory cortices that can represent the timing of expected reward. Here, we show that local cortical networks can learn temporal representations through a simple framework predicated on reward dependent expression of synaptic plasticity. We assert that temporal representations are stored in the lateral synaptic connections between neurons and demonstrate that reward-modulated plasticity is sufficient to learn these representations. We implement our model numerically to explain reward-time learning in the primary visual cortex (V1), demonstrate experimental support, and suggest additional experimentally verifiable predictions.
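The reward-dependent plasticity idea can be sketched generically; this is a standard eligibility-trace rule, not the authors' V1 network model. Pre/post coincidences build a decaying eligibility trace at a synapse, and a later reward gates whether that trace is converted into an actual weight change. All values are illustrative:

```python
# Reward-modulated plasticity sketch (generic eligibility-trace rule;
# illustrative parameters, not the authors' model).

def run_trial(w, coincidences, reward, eta=0.1, decay=0.9):
    """Each timestep the trace decays and accumulates pre/post coincidence;
    reward at trial end gates the update: delta_w = eta * reward * trace."""
    trace = 0.0
    for c in coincidences:        # 1.0 when pre and post fire together
        trace = decay * trace + c
    return w + eta * reward * trace

w = 0.5
w_rewarded = run_trial(w, [1, 0, 1], reward=1.0)    # reward consolidates
w_unrewarded = run_trial(w, [1, 0, 1], reward=0.0)  # no reward, no change
```

The key property, matching the passage, is that identical activity patterns produce lasting synaptic change only when followed by reward, allowing reward timing itself to shape which temporal patterns the lateral connections store.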
Abstract:
Spike timing dependent plasticity (STDP) is a phenomenon in which the precise timing of spikes affects the sign and magnitude of changes in synaptic strength. STDP is often interpreted as the comprehensive learning rule for a synapse - the "first law" of synaptic plasticity. This interpretation is made explicit in theoretical models in which the total plasticity produced by complex spike patterns results from a superposition of the effects of all spike pairs. Although such models are appealing for their simplicity, they can fail dramatically. For example, the measured single-spike learning rule between hippocampal CA3 and CA1 pyramidal neurons does not predict the existence of long-term potentiation, one of the best-known forms of synaptic plasticity. Layers of complexity have been added to the basic STDP model to repair predictive failures, but they have been outstripped by experimental data. We propose an alternate first law: neural activity triggers changes in key biochemical intermediates, which act as a more direct trigger of plasticity mechanisms. One particularly successful model uses intracellular calcium as the intermediate and can account for many observed properties of bidirectional plasticity. In this formulation, STDP is not itself the basis for explaining other forms of plasticity, but is instead a consequence of changes in the biochemical intermediate, calcium. Eventually, a mechanism-based framework for learning rules should include other messengers, discrete change at individual synapses, spread of plasticity among neighboring synapses, and priming of hidden processes that change a synapse's susceptibility to future change. Mechanism-based models provide a rich framework for the computational representation of synaptic plasticity.
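The pair-based "first law" model that the passage critiques can be stated in a few lines: each pre/post spike pair contributes a weight change that depends only on the spike-time difference, and complex spike trains are handled by superposing all pairs. Parameter values are illustrative, not the measured CA3-CA1 rule:

```python
# Pair-based STDP sketch with the superposition assumption the text critiques.
import math

def stdp_pair(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """dt_ms = t_post - t_pre. Pre-before-post (dt > 0) potentiates,
    post-before-pre depresses; amplitudes and tau are illustrative."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    return -a_minus * math.exp(dt_ms / tau_ms)

def total_dw(pre_times, post_times):
    """Superposition: total change = sum of stdp_pair over ALL spike pairs."""
    return sum(stdp_pair(tp - tq) for tq in pre_times for tp in post_times)
```

It is exactly this all-pairs summation that fails against data such as the CA3-CA1 measurements, motivating the calcium-intermediate alternative in which the weight change is driven by the biochemical trace of activity rather than by spike-pair bookkeeping.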