4 results for User Evaluation

in DigitalCommons@The Texas Medical Center


Relevance:

40.00%

Publisher:

Abstract:

Software for patient records is challenging to design and difficult to evaluate because of the tremendous variability of patient circumstances. The authors devised a method to overcome a number of these difficulties. The method objectively evaluates and compares various software products for use in emergency departments, and compares that software with conventional methods such as dictation and templated chart forms. The technique uses oral case simulation and video recording for analysis. This presentation discusses the methodology and the experience of executing a study using this case simulation.

Relevance:

30.00%

Publisher:

Abstract:

The "EMR Tutorial" is designed to be a bilingual online physician education environment about electronic medical records. After iterative assessment and redesign, the tutorial was tested in two groups: U.S. physicians and Mexican medical students. Split-plot ANOVA revealed significantly different pre-test scores in the two groups, significant cognitive gains for the two groups overall, and no significant difference in the gains made by the two groups. Users rated the module positively on a satisfaction questionnaire.

Relevance:

30.00%

Publisher:

Abstract:

Purpose. To determine the usability of two video games designed to prevent type 2 diabetes and obesity among youth, through analysis of data collected during alpha testing.

Subjects. Ten children aged 9 to 12 were selected for three 2-hour alpha-testing sessions.

Methods. "Escape from Diab" and "Nanoswarm" were designed to change dietary and physical-inactivity behaviors, based on a theoretical framework of mediating variables drawn from social cognitive theory, self-determination theory, the elaboration likelihood model, and behavioral inoculation theory. Thirteen mini-games developed by the software company were divided into 3 groups based on completion date, and children tested 4-5 mini-games in each of three sessions. Observed game play was followed by a scripted interview. Results from observation forms and interview transcripts were tabulated and coded to determine usability. Suggestions for game modifications were delivered to the software design firm, and a follow-up table reports the rationale for inclusion or exclusion of each modification.

Results. Half (50%) of participants were frequent video game players and 20% were non-players; most (60%) were female. The mean grade given by children across all games (indicating likeability as a subset of usability) was significantly greater than a neutral grade of 80% (89%, p < 0.01), indicating a positive likeability score. On average, the games also received ratings for fun, helpfulness of instructions, and length that were significantly above the neutral midpoints of their Likert scales (all p < 0.01). Observation notes indicated that participants paid attention to the instructions, did not appear to have much difficulty with the games, and were "not frustrated", "not bored", "very engaged", "not fidgety", and "very calm" (all p < 0.01). The primary issues noted in observations and interviews were unclear instructions and an unclear purpose for some games. Player suggestions primarily involved making on-screen cues more visible or noticeable, instructions clearer, and games more elaborate or difficult.

Conclusions. The present study highlights the importance of alpha-testing video game components for usability prior to completion in order to enhance usability and likeability. The results indicate that clear instructions, eye-catching peripheral screen cues, and an explicitly stated game purpose are important elements. However, future interventions will each present unique materials and user interfaces and should therefore also be thoroughly alpha-tested.
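
The ratings analysis described above compares mean grades and Likert ratings against neutral reference values. A minimal sketch of that kind of one-sample comparison is shown below; the grades, ratings, and neutral points are made-up placeholders, not the study's data.

```python
# One-sample t-tests of likeability grades and Likert ratings against neutral
# reference values, as a rough illustration of the analysis described above.
from scipy import stats

# Hypothetical per-child grades (0-100 scale) for a single mini-game.
grades = [95, 90, 85, 88, 92, 80, 91, 87, 89, 93]
NEUTRAL_GRADE = 80  # neutral reference grade used in the abstract

# One-sided test: is the mean grade greater than the neutral grade?
t_grade, p_grade = stats.ttest_1samp(grades, NEUTRAL_GRADE, alternative="greater")

# Hypothetical 5-point Likert ratings for "fun", tested against the midpoint of 3.
fun_ratings = [5, 4, 4, 5, 3, 4, 5, 4, 4, 5]
t_fun, p_fun = stats.ttest_1samp(fun_ratings, 3, alternative="greater")

print(f"grade vs. neutral 80: t = {t_grade:.2f}, p = {p_grade:.4f}")
print(f"fun vs. midpoint 3:   t = {t_fun:.2f}, p = {p_fun:.4f}")
```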

Relevance:

30.00%

Publisher:

Abstract:

To ensure the integrity of an intensity-modulated radiation therapy (IMRT) treatment, each plan must be validated through a measurement-based quality assurance (QA) procedure known as patient-specific IMRT QA. Many methods of measurement and analysis have evolved for this QA; there is no standard among clinical institutions, and many devices and action levels are in use. Because the acceptance criteria determine whether a patient plan passes based on the dosimetric tool's output, it is important to see how these parameters influence the performance of the QA device. When analyzing IMRT QA results, it is also important to understand the variability in the measurements; owing to the different form factors of the many QA methods, this reproducibility can be device dependent.

These questions of patient-specific IMRT QA reproducibility and performance were investigated across five dosimeter systems: a helical diode array, radiographic film, an ion chamber, a diode array (AP field-by-field, AP composite, and rotational composite), and an in-house multiple-ion-chamber phantom. Reproducibility was gauged for each device by comparing the coefficients of variation (CV) across six patient plans. Performance was determined by comparing each device's ability to correctly label a plan as acceptable or unacceptable against a gold standard.

All methods demonstrated a CV of less than 4%. Film showed the highest variability in QA measurement, likely due to the high level of user involvement in readout and analysis; this is further supported by the finding that setup contributed more variation than readout and analysis for every method except film. When evaluated for the ability to correctly label acceptable and unacceptable plans, two distinct performance groups emerged: the helical diode array, AP composite diode array, film, and ion chamber in the better group, and the rotational composite and AP field-by-field diode array in the poorer group. Additionally, optimal threshold cutoffs were determined for each dosimetry system. These findings, combined with practical considerations such as labor and cost, can aid a clinic in choosing an effective and safe patient-specific IMRT QA implementation.
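
The reproducibility and threshold-selection steps above lend themselves to a short illustration. The sketch below, with invented pass-rate numbers, computes a coefficient of variation for repeated measurements and picks an acceptance cutoff from an ROC curve against a gold standard; the study's actual devices, plans, and action levels are not represented.

```python
# Illustration of two analyses described above: reproducibility as a coefficient
# of variation (CV), and an optimal pass/fail cutoff chosen from an ROC curve
# against gold-standard labels. All numbers are invented placeholders.
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical repeated QA pass rates (%) for one patient plan on one device.
repeat_measurements = np.array([96.1, 95.4, 97.0, 96.5, 95.8])
cv = repeat_measurements.std(ddof=1) / repeat_measurements.mean() * 100
print(f"coefficient of variation: {cv:.2f}%")

# Hypothetical QA scores for several plans, with gold-standard labels
# (1 = truly acceptable plan, 0 = truly unacceptable plan).
qa_scores   = np.array([98.2, 96.5, 91.0, 88.4, 97.1, 85.3, 93.7, 99.0])
gold_labels = np.array([1,    1,    0,    0,    1,    0,    1,    1   ])

# Choose the cutoff that maximizes Youden's J = sensitivity + specificity - 1.
fpr, tpr, thresholds = roc_curve(gold_labels, qa_scores)
best = np.argmax(tpr - fpr)
print(f"optimal threshold: {thresholds[best]:.1f}% pass rate "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```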