National Origin or National Location

From Penn Center for Learning Analytics Wiki
Revision as of 00:46, 12 May 2022

Bridgeman, Trapani, and Attali (2009) [pdf]

  • E-Rater system that automatically grades a student’s essay
  • Inaccurately high scores were given to Chinese and Korean students
  • The system's scores correlated poorly with human raters' scores for GRE essays by Chinese students
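The kind of subgroup discrepancy these studies report can be checked with a simple per-group comparison of automated and human scores: the mean gap (machine minus human) and their correlation within each group. This is an illustrative sketch, not the papers' actual method; the data and group labels below are invented.

```python
# Sketch: per-national-group agreement between automated and human scores.
# All (group, machine_score, human_score) triples here are made up.
from statistics import mean

def group_gap(records):
    """For each group, return (mean machine-human gap, Pearson correlation)."""
    groups = {}
    for grp, machine, human in records:
        groups.setdefault(grp, []).append((machine, human))
    out = {}
    for grp, pairs in groups.items():
        m = [p[0] for p in pairs]
        h = [p[1] for p in pairs]
        gap = mean(m) - mean(h)          # positive = machine scores higher
        mm, hm = mean(m), mean(h)
        cov = sum((a - mm) * (b - hm) for a, b in zip(m, h))
        var_m = sum((a - mm) ** 2 for a in m)
        var_h = sum((b - hm) ** 2 for b in h)
        corr = cov / (var_m * var_h) ** 0.5 if var_m and var_h else float("nan")
        out[grp] = (gap, corr)
    return out

# Hypothetical scores: one group systematically over-scored, one under-scored.
records = [
    ("China", 4.5, 3.5), ("China", 5.0, 4.0), ("China", 4.0, 3.0),
    ("Germany", 3.0, 4.0), ("Germany", 3.5, 4.5), ("Germany", 2.5, 3.5),
]
print(group_gap(records))
```

A large positive gap for one group with low correlation is exactly the pattern (inflated scores, poor agreement with human raters) that the bullet points above describe.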


Bridgeman, Trapani, and Attali (2012) pdf

  • A later version of the e-rater system for automatically grading GRE essays
  • Chinese students were given higher scores than when graded by human essay raters
  • Speakers of Arabic and Hindi were given lower scores


Ogan and colleagues (2015) [https://link.springer.com/content/pdf/10.1007/s40593-014-0034-8.pdf pdf]

  • Multi-national model predicting learning gains from students' help-seeking behavior
  • Both the U.S.-trained model and a combined multi-national model performed extremely poorly for students in Costa Rica
  • For the Philippines, the U.S.-trained model outperformed a model trained on the Philippines' own data set


Li et al. (2021) pdf

  • Model predicting student achievement on the standardized examination PISA
  • Inaccuracy of the U.S.-trained model was greater for students from countries with lower national development scores (e.g., Indonesia, Vietnam, Moldova)
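The accuracy gaps reported for models transferred across countries can be made concrete by computing a model's error separately per country, e.g. mean absolute error (MAE). This is a sketch on invented predictions, not the study's actual data or metric.

```python
# Sketch: per-country model error, to surface the kind of transfer gap
# described above. The (country, predicted, actual) triples are invented.
from statistics import mean

def mae_by_country(preds):
    """preds: iterable of (country, predicted, actual) -> {country: MAE}."""
    buckets = {}
    for country, yhat, y in preds:
        buckets.setdefault(country, []).append(abs(yhat - y))
    return {c: mean(errs) for c, errs in buckets.items()}

preds = [
    ("USA", 510, 500), ("USA", 495, 505),
    ("Indonesia", 460, 400), ("Indonesia", 450, 390),
]
print(mae_by_country(preds))
```

A model that looks accurate on its aggregate error can still be far less accurate for particular countries, which is why the per-group breakdown matters.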


Wang et al. (2018) pdf

  • Automated scoring model for evaluating English spoken responses
  • SpeechRater gave significantly lower scores than human raters to German speakers
  • SpeechRater scored in favor of the Chinese group, giving scores higher than the mean human-rater score


Bridgeman et al. (2009) pdf

  • Automated scoring model (e-rater) for evaluating English essays
  • E-rater gave significantly higher scores to students from China and South Korea than to students from 14 other countries on the independent writing task of the Test of English as a Foreign Language (TOEFL)
  • E-rater gave slightly higher scores than human raters to students from China on GRE analytical writing (both argument and issue prompts); their written responses tended to be the longest but below average on grammar, usage, and mechanics