In search of the proper place for student evaluations

Today someone sent me a link, and I went to read the articles.
Interesting: partly what we already knew, that student evaluations of teachers are partial and biased. As all evaluations are, in general.
That is why responses to student surveys should have a place of their own, respecting students' perceptions, but they should not be used to judge teaching effectiveness or the value of a course. Given the weight of the evidence across several studies, other methodologies for evaluation and continuous improvement should be brought to bear, not just the surveys…
The truth will set you free, but first it will piss you off.
Gloria Steinem
Abstract
Student evaluations of teaching (SET) are widely used in academic personnel decisions as a measure of teaching effectiveness. We show:
– SET are biased against female instructors by an amount that is large and statistically significant
– the bias affects how students rate even putatively objective aspects of teaching, such as how promptly assignments are graded
– the bias varies by discipline and by student gender, among other things
– it is not possible to adjust for the bias, because it depends on so many factors
– SET are more sensitive to students’ gender bias and grade expectations than they are to teaching effectiveness
– gender biases can be large enough to cause more effective instructors to get lower SET than less effective instructors.
These findings are based on nonparametric statistical tests applied to two datasets: 23,001 SET of 379 instructors by 4,423 students in six mandatory first-year courses in a five-year natural experiment at a French university, and
43 SET for four sections of an online course in a randomized, controlled, blind experiment at a US university.
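The excerpt does not reproduce the tests themselves, but a two-sample permutation test is one common nonparametric approach to this kind of comparison. The sketch below (Python; the ratings are fabricated purely for illustration and are not data from the paper) shows the general idea: under the null hypothesis that the group label, here perceived instructor gender, is irrelevant, the labels are exchangeable, so the observed difference in mean ratings is compared with the differences obtained after repeatedly reshuffling the labels.

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_test(group_a, group_b, n_perm=10_000):
    """Two-sample permutation test on the difference in mean SET scores.

    Under the null hypothesis that the group label (e.g. perceived
    instructor gender) is irrelevant, labels are exchangeable, so the
    observed difference in means is compared with differences obtained
    after repeatedly reshuffling the labels.
    """
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    n_a = len(a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = pooled[:n_a].mean() - pooled[n_a:].mean()
        if abs(diff) >= abs(observed):
            hits += 1
    return observed, hits / n_perm  # two-sided p-value

# Fabricated 1-5 ratings, for illustration only -- not data from the paper.
rated_male = [4, 5, 4, 4, 5, 3, 4, 5, 4, 4]
rated_female = [3, 4, 3, 4, 2, 4, 3, 3, 4, 3]
diff, p = permutation_test(rated_male, rated_female)
print(f"observed difference in means: {diff:.2f}, permutation p-value: {p:.4f}")
```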
…….
Conclusion
In two very different universities and in a broad range of course topics, SET measure students’ gender biases better than they measure the instructor’s teaching effectiveness. Overall, SET disadvantage female instructors. There is no evidence that this is the exception rather than the rule. Hence, the onus should be on universities that rely on SET for employment decisions to provide convincing affirmative evidence that such reliance does not have disparate impact on women, under-represented minorities, or other protected groups. Because the bias varies by course and institution, affirmative evidence needs to be specific to a given course in a given department in a given university. Absent such specific evidence, SET should not be used for personnel decisions.
Jan 2016

Students Praise Male Professors

Study finds the gender of instructors influences the evaluations they receive, even when the instructors have fooled students (in an online course) about whether they are men or women.

 December 10, 2014
College students’ assessments of their instructors’ teaching ability are linked to whether they think those instructors are male or female, according to new research from North Carolina State University.

In the study, students in an online course gave better evaluations to the instructors they thought were male, even though the two instructors – one male and one female – had switched their identities. The research is based on a small pilot study of one class. (….) With just 43 subjects, this study was a pilot; the authors plan to expand their research with more classes and different types of courses. Still, higher education administrators should be aware of the findings when using evaluations to make faculty decisions, since evaluations could reflect a gender bias rather than an actual difference in teaching abilities, MacNell said. 

 

Philip B. Stark – Richard Freishtat

Recap
● SET does not measure teaching effectiveness.
● Controlled, randomized experiments find that SET ratings are negatively associated with direct measures of effectiveness. SET seem to be influenced by the gender, ethnicity, and attractiveness of the instructor.
● Summary items such as “overall effectiveness” seem most influenced by irrelevant factors.
● Student comments contain valuable information about students’ experiences.
● Survey response rates matter. Low response rates make it impossible to generalize reliably from the respondents to the whole class.
● It is practical and valuable to have faculty observe each other’s classes.
● It is practical and valuable to create and review teaching portfolios.
● Teaching is unlikely to improve without serious, regular attention.
Recommendations
1. Drop omnibus items about “overall teaching effectiveness” and “value of the course” from teaching evaluations: They are misleading.
2. Do not average or compare averages of SET scores: Such averages do not make sense statistically. Instead, report the distribution of scores, the number of responders, and the response rate (see the sketch after this list).
3. When response rates are low, extrapolating from responders to the whole class is unreliable.
4. Pay attention to student comments — but understand their limitations. Students typically are not well situated to evaluate pedagogy.
5. Avoid comparing teaching in courses of different types, levels, sizes, functions, or disciplines.
6. Use teaching portfolios as part of the review process.
7. Use classroom observation as part of milestone reviews.
8. To improve teaching and evaluate teaching fairly and honestly, spend more time observing the teaching and looking at teaching materials.
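Recommendation 2 can be made concrete with a small sketch. The Python fragment below (the scale, enrollment figure, and responses are hypothetical, not drawn from any of the studies above) reports the distribution of scores, the number of responders, and the response rate for a section instead of a single average.

```python
from collections import Counter

def summarize_set(responses, enrolled, scale=(1, 2, 3, 4, 5)):
    """Report the distribution of SET scores, the number of responders,
    and the response rate for one section, instead of a single average."""
    counts = Counter(responses)
    responders = len(responses)
    return {
        "distribution": {score: counts.get(score, 0) for score in scale},
        "responders": responders,
        "enrolled": enrolled,
        "response_rate": responders / enrolled if enrolled else float("nan"),
    }

# Hypothetical section: 60 students enrolled, 24 responded (numbers invented).
responses = [5, 4, 4, 3, 5, 2, 4, 4, 5, 3, 4, 5,
             1, 4, 4, 3, 5, 4, 2, 4, 5, 4, 3, 4]
print(summarize_set(responses, enrolled=60))
```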
