Black/African-American Learners in North America

Kai et al. (2017) pdf

  • Models predicting student retention in an online college program
  • J48 decision trees achieved much lower Kappa and AUC for Black students than for White students
  • JRip decision rules achieved almost identical Kappa and AUC for Black and White students (a sketch of this kind of group-wise evaluation follows below)
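
A minimal sketch of this kind of disaggregated evaluation, assuming scikit-learn and hypothetical arrays of labels, predicted probabilities, and race codes (the original study used Weka's J48 and JRip classifiers; neither is reproduced here):

<syntaxhighlight lang="python">
import numpy as np
from sklearn.metrics import cohen_kappa_score, roc_auc_score

def group_metrics(y_true, y_prob, group, threshold=0.5):
    """Compute Kappa and AUC separately for each demographic group."""
    results = {}
    for g in np.unique(group):
        mask = group == g
        y_pred = (y_prob[mask] >= threshold).astype(int)  # hypothetical 0.5 cutoff
        results[g] = {
            "kappa": cohen_kappa_score(y_true[mask], y_pred),
            "auc": roc_auc_score(y_true[mask], y_prob[mask]),
        }
    return results

# Illustration only: synthetic data, not data from Kai et al. (2017)
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 500)
y_prob = np.clip(0.3 * y_true + 0.7 * rng.random(500), 0, 1)
race = rng.choice(np.array(["Black", "White"]), 500)
print(group_metrics(y_true, y_prob, race))
</syntaxhighlight>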


Hu and Rangwala (2020) pdf

  • Models predicting whether a college student will fail a course
  • The multiple cooperative classifier model (MCCM) was the best at reducing bias (discrimination against African-American students), while other models (particularly logistic regression and Rawlsian fairness) performed far worse
  • The level of bias was inconsistent across courses, with MCCM predictions showing the least bias for Psychology and the greatest bias for Computer Science (see the per-course sketch below)
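
MCCM itself is not reproduced here; as a loose sketch of how per-course bias could be quantified, the code below compares the model's recall on the failure class for African-American students versus other students within each course. The column names and the choice of a recall gap are assumptions made only for illustration:

<syntaxhighlight lang="python">
import pandas as pd
from sklearn.metrics import recall_score

def per_course_recall_gap(df):
    """For each course, the gap in recall on the 'fail' class between
    other students and African-American students.
    Assumed columns: 'course', 'race', 'failed', 'predicted_fail'."""
    rows = []
    for course, sub in df.groupby("course"):
        aa = sub[sub["race"] == "African-American"]
        other = sub[sub["race"] != "African-American"]
        gap = (recall_score(other["failed"], other["predicted_fail"], zero_division=0)
               - recall_score(aa["failed"], aa["predicted_fail"], zero_division=0))
        rows.append({"course": course, "recall_gap": gap})
    return pd.DataFrame(rows)
</syntaxhighlight>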


Christie et al. (2019) pdf

  • Models predicting high school dropout
  • The decision trees showed little difference in AUC among White, Black, Hispanic, Asian, American Indian and Alaska Native, and Native Hawaiian and Pacific Islander students


Lee and Kizilcec (2020) pdf

  • Models predicting college success (earning the median grade or above)
  • Random forest algorithms performed significantly worse for underrepresented minority students (URM; American Indian, Black, Hawaiian or Pacific Islander, Hispanic, and Multicultural) than for non-URM students (White and Asian)
  • The model's fairness, namely demographic parity and equality of opportunity, as well as its accuracy, improved after correcting the decision threshold values (see the sketch below)
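
A minimal sketch of the two fairness criteria named above, using group-specific decision thresholds to stand in for the paper's threshold correction (the variable names and example cutoffs are assumptions, not values from Lee and Kizilcec):

<syntaxhighlight lang="python">
def fairness_report(y_true, y_prob, is_urm, thresholds):
    """Positive-prediction rate (demographic parity) and true positive rate
    (equality of opportunity) per group, under per-group thresholds."""
    report = {}
    for flag, name in [(True, "URM"), (False, "non-URM")]:
        m = is_urm == flag
        y_pred = (y_prob[m] >= thresholds[flag]).astype(int)
        report[name] = {
            "positive_rate": y_pred.mean(),        # demographic parity
            "tpr": y_pred[y_true[m] == 1].mean(),  # equality of opportunity
        }
    return report

# Compare one shared threshold with a hypothetical per-group correction:
# fairness_report(y, p, urm, {True: 0.5, False: 0.5})
# fairness_report(y, p, urm, {True: 0.45, False: 0.55})
</syntaxhighlight>

The closer the two groups' positive rates and true positive rates are to each other, the better the model satisfies demographic parity and equality of opportunity, respectively.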


Yu et al. (2020) pdf

  • Models predicting undergraduate short-term (course grades) and long-term (average GPA) success
  • Black students were inaccurately predicted to perform worse on both short-term and long-term outcomes
  • Model fairness improved when clickstream data, or a combination of clickstream and survey data, was included instead of institutional data (see the feature-set sketch below)
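
A rough sketch of this feature-set comparison, training the same classifier on each set and reporting the gap in mean predicted success between non-Black and Black students. The column names, feature groupings, and choice of logistic regression are assumptions, and train/test splitting is omitted for brevity:

<syntaxhighlight lang="python">
from sklearn.linear_model import LogisticRegression

FEATURE_SETS = {  # hypothetical column groupings
    "institutional": ["hs_gpa", "admission_test_score"],
    "click": ["n_logins", "n_video_views", "n_forum_posts"],
    "click+survey": ["n_logins", "n_video_views", "n_forum_posts", "self_efficacy"],
}

def prediction_gap_by_feature_set(df, label="course_success", group="is_black"):
    """Gap in mean predicted success probability (non-Black minus Black)
    for the same model refit on each feature set."""
    gaps = {}
    for name, cols in FEATURE_SETS.items():
        model = LogisticRegression(max_iter=1000).fit(df[cols], df[label])
        prob = model.predict_proba(df[cols])[:, 1]
        gaps[name] = prob[~df[group]].mean() - prob[df[group]].mean()
    return gaps
</syntaxhighlight>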


Yu et al. (2021) [https://dl.acm.org/doi/pdf/10.1145/3430895.3460139 pdf]

  • Models predicting college dropout for students in residential and fully online programs
  • Whether or not the protected attributes were included, the models had worse true negative rates but better recall for underrepresented minority (URM) students in both residential and online programs
  • The model was less accurate for URM students in the residential program (see the sketch below)
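
A minimal sketch of the group-wise true negative rate and recall comparison, assuming hypothetical label, prediction, and URM-flag arrays:

<syntaxhighlight lang="python">
from sklearn.metrics import confusion_matrix

def tnr_and_recall_by_group(y_true, y_pred, is_urm):
    """True negative rate and recall on the dropout class, per URM status."""
    out = {}
    for flag, name in [(True, "URM"), (False, "non-URM")]:
        m = is_urm == flag
        tn, fp, fn, tp = confusion_matrix(y_true[m], y_pred[m], labels=[0, 1]).ravel()
        out[name] = {"tnr": tn / (tn + fp), "recall": tp / (tp + fn)}
    return out

# Running this once for a model trained with the protected attribute and once
# for a model trained without it gives the kind of comparison described above.
</syntaxhighlight>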


Ramineni & Williamson (2018) pdf

  • Revised automated scoring engine for assessing GRE essays
  • Relative weakness in content and organization resulted in lower scores for African American test takers than for Chinese peers, who wrote longer essays


Bridgeman et al. (2009) pdf

  • Automated scoring models (e-rater) for evaluating English essays
  • E-rater gave significantly higher scores for 11th grade essays written by Asian American and Hispanic students, particularly Hispanic female students
  • The score difference between human raters and e-rater was significantly smaller for 11th grade essays written by White and African American students
  • E-rater gave slightly lower scores for GRE essays (argument and issue) written by Black test-takers, while e-rater scores were higher for Asian test-takers in the U.S. (see the scoring-gap sketch below)
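
A small sketch of the underlying human-versus-machine comparison: the standardized mean difference between e-rater and human scores, computed per demographic group. The column names are assumptions, not the variables used by Bridgeman et al.:

<syntaxhighlight lang="python">
import pandas as pd

def machine_human_gap(df):
    """Standardized mean difference between e-rater and human scores per group.
    Assumed columns: 'group', 'human_score', 'erater_score'."""
    gaps = {}
    for group, sub in df.groupby("group"):
        diff = sub["erater_score"] - sub["human_score"]
        gaps[group] = diff.mean() / diff.std(ddof=1)
    return pd.Series(gaps)

# A positive value means e-rater scores a group higher than human raters do;
# a negative value means it scores the group lower.
</syntaxhighlight>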


Bridgeman, Trapani, and Attali (2012) pdf

  • A later version of the e-rater automated scoring models for evaluating English essays
  • E-rater gave slightly lower scores for African-American, Hispanic, and American-Indian test-takers, particularly African-American and American-Indian males, when assessing written responses to the GRE issue prompt
  • E-rater scores were significantly lower for GRE written responses to the argument prompt by African-American test-takers