980 results for Student evaluation


Relevance: 100.00%

Publisher:

Abstract:

This study measures student evaluations of teaching against students' final outcomes in a subject. The subject in question is a statistics subject that is core (i.e. compulsory) to several undergraduate business degrees, and had an enrolment of 1073 students across four campuses at the time of the evaluation. This study is based on the 373 students (34.8%) who responded to the survey, and their final results. The evaluations were open for a period of six weeks leading up to and just after the final exam. The study matches responses to the question “This unit was well taught” to final outcomes, in an attempt to ascertain whether there is a link between student evaluation of teaching and performance. The analysis showed that, for the students who self-selected to complete the survey:

• Students who perform well in the subject generally give higher scores than lower performing students.

• The same general pattern prevailed when other secondary factors were taken into account, such as when the evaluation was completed, campus, and gender.

• The timing of when a student completes the evaluation seems the most important of these secondary variables.

• In general, students who submitted their evaluations after the exam gave higher ratings if they eventually obtained a pass grade or better, and lower ratings if they failed.

Abstract:

Student evaluation of teaching (SET) is now commonplace in many universities internationally. The most common criticism of SET practices is that they are influenced by a number of non-teaching-related factors. More recently, there has been dramatic growth in online education internationally, but only limited research on the use of SET to evaluate online teaching. This paper presents a large-scale and detailed investigation, using the institutional SET data from an Australian university with a significant offering of wholly online units, and whose institutional SET instrument contains items relating to student perceptions of online technologies in teaching and learning. The relationship between educational technology and SET is not neutral. The mean ratings for the ‘online’ aspects of SET are influenced by factors in the wider teaching and learning environment, and the overall perception of teaching quality is influenced by whether a unit is offered in wholly online mode or not.

Abstract:

This dataset consists of a summary of publicly available student evaluation of teaching (SET) data for the annual period of trimester 2 2009, trimester 3 2009/2010 and trimester 1 2010, from the Student Evaluation of Teaching and Units (SETU) portal.

The data were analysed to identify any systematic influences on SET results at Deakin University. The analysis covered mean rating sets for 1432 units of study, representing 74498 sets of SETU ratings, 188391 individual student enrolments, and 58.5 per cent of all units listed in the Deakin University handbook for the period under consideration.

The data reported for a unit included:
• total enrolment;
• total number of responses; and
• computed response rate for the enrolment location(s) selected.

The data reported for each of the ten core SETU items included:
• number of responses;
• mean rating;
• standard deviation of the mean rating;
• percentage agreement;
• percentage disagreement; and
• percentage difference.
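The per-item summary statistics listed above can be computed directly from raw Likert ratings. The following Python sketch is a minimal illustration only: it assumes a 1-5 rating scale and treats ratings of 4-5 as agreement and 1-2 as disagreement, since the SETU portal's actual cut-offs are not stated here.

```python
from statistics import mean, stdev

def setu_summary(ratings, enrolment):
    """Summarise one SETU item from a list of 1-5 Likert ratings.

    Assumption: 'agreement' means a rating of 4 or 5, and
    'disagreement' a rating of 1 or 2; the dataset description
    does not define these thresholds.
    """
    n = len(ratings)
    pct_agree = 100 * sum(1 for r in ratings if r >= 4) / n
    pct_disagree = 100 * sum(1 for r in ratings if r <= 2) / n
    return {
        "responses": n,
        "response_rate": round(100 * n / enrolment, 1),
        "mean_rating": round(mean(ratings), 2),
        "sd": round(stdev(ratings), 2),
        "pct_agree": round(pct_agree, 1),
        "pct_disagree": round(pct_disagree, 1),
        "pct_difference": round(pct_agree - pct_disagree, 1),
    }

# Ten invented ratings for a unit with an enrolment of 40.
summary = setu_summary([5, 4, 4, 3, 5, 2, 4, 5, 1, 4], enrolment=40)
print(summary)
```

The "percentage difference" item is taken here as agreement minus disagreement, which is one plausible reading of the label.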

Abstract:

Large surveys of library user service quality perception are common. However, student evaluation of teaching (SET) data often show a disparity between ratings of library service quality and library resource quality. In this situation, perhaps SET data can also provide insights into what contributes to the perception of library resource quality, and hence identify leverage points for quality improvement interventions. This paper documents an analysis of available Deakin University SET data relating to student interaction with, and evaluation of, library resources. It highlights significant correlations associated with library-related SET items, and from them infers actions that the library could undertake to improve the value and perception of the quality of library resources.

The following results were observed. High ratings for library resources were likely to be associated with high general ratings of teaching and unit quality. Postgraduate coursework students rated library resources significantly higher than students in the first three years of undergraduate programs. Students in one faculty (Health) rated library resources significantly higher than students in all other faculties. There was a strong correlation observed in Australasian Survey of Student Engagement data for both 2009 and 2010 between the two items “Used library resources on campus or online” and “Worked on an essay or assignment that required integrating ideas or information from various sources”.

These findings suggest the following conclusions. Well-planned learning environments are likely to integrate meaningful student interaction with the library. Initiatives to improve the value and perception of the quality of library resources should be focussed on the specific characteristics and needs of particular student cohorts to have maximum impact. More sophisticated assessment tasks that require students to interact with the library have the potential to result in higher student ratings of the value of library resources.

Abstract:

Student evaluation of teaching is commonplace in many universities and may be the predominant input into the performance evaluation of staff and organisational units. This article used publicly available student evaluation of teaching data to present examples of where institutional responses to evaluation processes appeared to be educationally ineffective, and where the pursuit of the ‘right’ student evaluation results appears to have been mistakenly equated with the aim of improved teaching and learning. If the vast resources devoted to student evaluation of teaching are to be effective, then the data produced by student evaluation systems must lead to real and sustainable improvements in teaching quality and student learning, rather than the evaluation process becoming an end in itself.

Abstract:

Student evaluation of teaching (SET) is now commonplace in many universities internationally. While much effort has been devoted to examining the statistical validity of SET instruments, there has been limited examination of the methodological and consequential validity (together referred to as ‘utility’) of the ways in which SET data are used. This paper examines the SET system at Deakin University from the perspective of utility. It draws on publicly available SET results for an entire annual cycle of unit offerings. Consideration is given to the representativeness of the data produced, and to the utility of the data reported, by the system. While this investigation focuses on the SET system currently employed at Deakin University, it offers both an analysis methodology and conclusions that can be applied more generally.

Abstract:

Based on student evaluation of teaching (SET) ratings from 1,432 units of study over a period of a year, representing 74,490 individual sets of ratings, and including a significant number of units offered in wholly online mode, we confirm the significant influence of class size, year level, and discipline area on at least some SET ratings. We also find online mode of offer to significantly influence at least some SET ratings. We reveal both the statistical significance and effect sizes of these influences, and find that the magnitudes of the effect sizes of all factors are small, but potentially cumulative. We also show that the influence of online mode of offer is of the same magnitude as that of the other three factors. These results support and extend the rating interpretation guides (RIGs) model proposed by Neumann and colleagues, and we present a general method for the development of a RIGs system.
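Effect size magnitudes of the kind discussed above are commonly reported as Cohen's d (standardised mean difference). The sketch below illustrates the calculation with a pooled standard deviation, using invented groups of unit mean ratings rather than the study's data; it is not the paper's actual computation.

```python
from statistics import mean, stdev

def cohens_d(a, b):
    """Cohen's d: difference in group means divided by the pooled SD."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / pooled_var ** 0.5

# Invented mean SET ratings for two groups of units (not the study's data).
small_classes = [4.2, 3.6, 4.4, 3.8, 4.0]
large_classes = [4.1, 3.5, 4.3, 3.7, 3.9]
d = cohens_d(small_classes, large_classes)
print(round(d, 2))
```

By the usual rule of thumb, values of d around 0.2 are "small" and around 0.5 "medium", which gives some sense of what "small but potentially cumulative" effects look like.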

Abstract:

Student evaluation of teaching (SET) is important, commonplace and may be used in staff performance management. The SET literature suggests that class size is a negative systematic influence on SET ratings. In this paper we investigate time-series SET data from a large first-year engineering class where a decline in SET ratings was observed over time as course enrolment increased. We observe a negative halo effect of increasing class size on mean SET ratings and conclude that increasing course enrolment leads to a significant reduction in all mean SET ratings, even when the course learning design remains essentially unchanged. We also find an additional differential effect of increasing course enrolment on mean SET ratings. We observe that the marginal reduction in mean SET ratings for each additional student in the course enrolment is greater for those aspects of the student learning experience that are likely to be most directly impacted by increasing class size. We provide implications for practice from these findings.
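The marginal reduction in mean ratings per additional enrolled student described above is, in effect, a regression slope. A minimal sketch of that calculation, using invented enrolment and rating figures since the course's data are not reproduced here:

```python
def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

# Invented course enrolments and mean SET ratings (not the paper's data).
enrolment = [100, 200, 300, 400, 500]
mean_rating = [4.0, 3.8, 3.6, 3.4, 3.2]
slope = ols_slope(enrolment, mean_rating)
print(slope)  # mean-rating change per additional student
```

A differential effect of the kind the paper reports would show up as a steeper (more negative) slope for SET items most directly impacted by class size than for other items.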

Abstract:

BACKGROUND
Student evaluation of teaching (SET) has a long history, has grown in prevalence and importance over a period of decades, and is now commonplace in many universities internationally. SET data are collected for a range of purposes, including: as diagnostic feedback to improve the quality of teaching and learning; as an input to staff performance management processes and personnel decisions such as promotion; to provide information to prospective students in their selection of courses and programs; and as a source of data for research on teaching. Rovai et al. (2006) report that while SET research provides mixed results, there is evidence that, for course-related factors, smaller classes are rated more favourably than large classes, upper-year-level classes are rated more favourably than lower-year classes, and that there are rating differences between discipline areas. While additional course-related factors are also noted, other reviews of the literature on SET also identify these three factors as commonly reported systematic influences on SET ratings. The School of Engineering at Deakin University in Australia offers undergraduate and postgraduate engineering programs, and these programs are delivered in both on-campus and off-campus modes.

PURPOSE
The paper presents a quantitative investigation of SET data for the School of Engineering at Deakin University to identify whether the commonly reported systematic influences on SET ratings of class size and year level are also observed here. The influence of online mode of offer is also explored.

DESIGN/METHOD
Deakin University’s Student Evaluation of Teaching and Units (SETU) questionnaire is administered to students enrolled in every unit of study every time that unit is offered, unless it is specifically exempted. Following data collation, summary results are reported via a public website. The publicly available SETU data for all School of Engineering units of study were collected for a two-year period. The collected data were subjected to analysis of variance (ANOVA) to identify any significant systematic influences on mean student SETU ratings.

RESULTS
SETU data from 100 separate units of study over the two-year period were collected, representing 3375 sets of SETU ratings and covering unit enrolment sizes from 12 to 462 students. Although this was a modest-sized investigation, significantly higher mean ratings for some SETU items were observed for units with small enrolments, for postgraduate level units compared to undergraduate level units, and for units offered in conventional mode compared to online mode of offer. The presence of the commonly observed systematic influences on SET ratings was confirmed.

CONCLUSIONS
While the use of SET data may have originally been primarily for formative purposes to improve teaching and learning, they are also increasingly used for summative judgements of teaching quality and teaching staff performance that have implications for personnel decision making. There may be an acceptance of the need for SET; however, there remains no universal consensus as to what constitutes quality in university teaching and learning, and the increasing use of SET for high-stakes decision making puts pressure on institutions to ensure that their SET practices are sound, equitable and defensible.
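The ANOVA step described above can be sketched in a few lines. The following Python computes a one-way ANOVA F statistic by hand for two invented groups of unit mean ratings (the SETU data themselves are not reproduced here); in practice one would use a statistics package and also obtain the associated p-value.

```python
from statistics import mean

def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA across two or more groups."""
    grand = mean([x for g in groups for x in g])
    means = [mean(g) for g in groups]
    k = len(groups)
    n = sum(len(g) for g in groups)
    # Between-group and within-group sums of squares.
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented mean SETU item ratings for two enrolment-size bands
# (not the study's data).
small_units = [4.2, 4.4, 4.1, 4.3]
large_units = [3.8, 3.9, 3.7, 4.0]
f_stat = one_way_anova_f(small_units, large_units)
print(round(f_stat, 1))
```

A large F relative to the F distribution with (k-1, n-k) degrees of freedom indicates that at least one group mean differs significantly from the others.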

Abstract:

Documents pertaining to the organization of the College of Medicine, Medical Education, the Office of Student Affairs, requirements for acceptance into the College, and other documents related to the College of Medicine.

Abstract:

The purpose of this study was to (a) develop an evaluation instrument capable of rating students’ perceptions of the instructional quality of an online course and the instructor’s performance, and (b) validate the proposed instrument with a study conducted at a major public university. The instrument was based upon the Seven Principles of Good Practice for Undergraduate Education (Chickering & Gamson, 1987). The study examined four specific questions:

1. Is the underlying factor structure of the new instrument consistent with Chickering and Gamson’s Seven Principles?
2. Is the factor structure of the new instrument invariant for male and female students?
3. Are the scores on the new instrument related to students’ expected grades?
4. Are the scores on the new instrument related to the students’ perceived course workload?

The instrument was designed to measure students’ levels of satisfaction with their instruction, and also gathered information concerning the students’ sex, the expected grade in the course, and the students’ perceptions of the amount of work required by the course. A cluster sample consisting of an array of online courses across the disciplines yielded a total of 297 students who responded to the online survey. The students in each course selected were asked to rate their instructors with the newly developed instrument.

Question 1 was answered using exploratory factor analysis, which yielded a factor structure similar to the Seven Principles. Question 2 was answered by separately factor-analysing the responses of male and female students and comparing the factor structures. The resulting factor structures for men and women were different. However, 14 items could be realigned under five factors that paralleled some of the Seven Principles. When the scores of only those 14 items were entered into two principal components factor analyses, one using only men and one using only women, with the factor structure restricted to five factors, the factor structures were the same for men and women. A weak positive relationship between students’ expected grades and their scores on the instrument was found (Question 3). There was no relationship between students’ perceived workloads for the course and their scores on the instrument (Question 4).
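A relationship of the kind tested in Question 3 would typically be quantified with a Pearson correlation coefficient. A self-contained sketch, using invented grade/score pairs rather than the study's data:

```python
def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of x and y."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Invented expected grade points and instrument satisfaction scores.
grades = [2, 3, 3, 4, 4, 2, 3, 4]
scores = [3.1, 3.4, 3.0, 3.8, 3.5, 3.3, 3.6, 3.7]
r = pearson_r(grades, scores)
print(round(r, 2))
```

Values of r near 0 indicate little linear relationship, so a "weak positive relationship" would correspond to a small positive r; the invented data above are illustrative only.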