White Learners in North America

Bridgeman et al. (2009) [https://www.researchgate.net/publication/242203403_Considering_Fairness_and_Validity_in_Evaluating_Automated_Scoring pdf]
* Automated scoring models for evaluating English essays (e-rater)
* The score difference between human rater and e-rater was significantly smaller for 11th grade essays written by White and African American students than for other groups

Jiang & Pardos (2021) pdf
* Predicting university course grades using LSTM
* Roughly equal accuracy across racial groups
* Slightly better accuracy (~1%) across racial groups when including race in the model


Zhang et al. (2022) [https://www.upenn.edu/learninganalytics/ryanbaker/EDM22_paper_35.pdf pdf]
* Detecting student use of self-regulated learning (SRL) in the mathematical problem-solving process
* For each SRL-related detector, relatively small differences in AUC were observed across racial/ethnic groups (see the per-group AUC sketch below)
* No racial/ethnic group consistently had the best-performing detectors
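
As a concrete illustration of the comparison above, the following minimal sketch computes AUC separately per racial/ethnic group with scikit-learn. The arrays y_true, y_score, and group are hypothetical placeholders, not data or code from the paper.

<syntaxhighlight lang="python">
# Minimal sketch: compute a detector's AUC separately per racial/ethnic
# group. y_true, y_score, and group are hypothetical placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                     # binary SRL labels
y_score = np.clip(y_true * 0.3 + rng.random(1000), 0, 1)   # detector confidences
group = rng.choice(["White", "Black", "Hispanic/Latinx"], size=1000)

# Large AUC gaps between groups would indicate the detector performs
# better for some students than for others.
for g in np.unique(group):
    mask = group == g
    print(g, round(roc_auc_score(y_true[mask], y_score[mask]), 3))
</syntaxhighlight>
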
Li, Xing, & Leite (2022) [https://dl.acm.org/doi/pdf/10.1145/3506860.3506869?casa_token=OZmlaKB9XacAAAAA:2Bm5XYi8wh4riSmEigbHW_1bWJg0zeYqcGHkvfXyrrx_h1YUdnsLE2qOoj4aQRRBrE4VZjPrGw pdf]
* Models predicting whether two students will communicate on an online discussion forum
* Compared members of overrepresented racial groups to members of underrepresented racial groups (overrepresented group approximately 90% White)
* Multiple fairness approaches lead to ABROCA values under 0.01 for overrepresented versus underrepresented students (see the ABROCA sketch below)
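
For reference, ABROCA (Absolute Between-ROC Area) is the area between two groups' ROC curves; values near 0 mean the model ranks students similarly well in both groups. A minimal sketch follows, with y_true, y_score, and group as hypothetical placeholders rather than the paper's data.

<syntaxhighlight lang="python">
# Minimal ABROCA sketch: area between two groups' ROC curves.
import numpy as np
from sklearn.metrics import roc_curve

def abroca(y_true, y_score, group, g1, g2):
    fpr1, tpr1, _ = roc_curve(y_true[group == g1], y_score[group == g1])
    fpr2, tpr2, _ = roc_curve(y_true[group == g2], y_score[group == g2])
    # Interpolate both ROC curves onto a shared FPR grid and average the
    # absolute TPR gap; on a uniform grid over [0, 1] this approximates
    # the area between the curves.
    grid = np.linspace(0, 1, 1001)
    gap = np.abs(np.interp(grid, fpr1, tpr1) - np.interp(grid, fpr2, tpr2))
    return gap.mean()

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=500)
s = np.clip(y * 0.4 + rng.random(500) * 0.8, 0, 1)
g = rng.choice(["overrepresented", "underrepresented"], size=500)
print(abroca(y, s, g, "overrepresented", "underrepresented"))
</syntaxhighlight>

Values under 0.01, as reported above, indicate nearly identical ROC curves for the two groups.
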
Sulaiman & Roy (2022) [https://fated2022.github.io/assets/pdf/FATED-2022_paper_Sulaiman_Transformers.pdf pdf]
* Models predicting whether a law student will pass the bar exam (to practice law)
* Compared White and non-White students
* Models not applying fairness constraints performed significantly worse for White students in terms of ABROCA
* Models applying fairness constraints performed equivalently for White and non-White students (see the constrained-training sketch below)
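
The paper's own models are transformer-based; as a generic illustration of training under a fairness constraint (not the authors' implementation), one standard option is fairlearn's reductions API, sketched here on synthetic placeholder data.

<syntaxhighlight lang="python">
# Illustrative sketch of fitting a classifier under an equalized-odds
# constraint with fairlearn; features and labels are synthetic
# placeholders, not the bar-exam data used in the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import ExponentiatedGradient, EqualizedOdds

rng = np.random.default_rng(0)
X = rng.normal(size=(800, 5))                          # synthetic features
y = (X[:, 0] + rng.normal(size=800) > 0).astype(int)   # synthetic pass/fail
group = rng.choice(["White", "non-White"], size=800)

# ExponentiatedGradient searches for a (randomized) classifier that
# minimizes error subject to approximate equalized odds across groups.
mitigator = ExponentiatedGradient(LogisticRegression(),
                                  constraints=EqualizedOdds())
mitigator.fit(X, y, sensitive_features=group)
y_pred = mitigator.predict(X)
</syntaxhighlight>
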
Jeong et al. (2022) [https://fated2022.github.io/assets/pdf/FATED-2022_paper_Jeong_Racial_Bias_ML_Algs.pdf pdf]
* Predicting 9th grade math score from academic performance, surveys, and demographic information
* Despite comparable accuracy, model tends to overpredict White students' performance
* Several fairness correction methods equalize false positive and false negative rates across groups (see the rate check below)
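
Checking whether those rates are actually equalized is simple to do directly; a minimal sketch, with y_true, y_pred, and group as hypothetical placeholders:

<syntaxhighlight lang="python">
# Minimal sketch: false positive and false negative rates per group.
# y_true, y_pred, and group are hypothetical placeholders.
import numpy as np

def fpr_fnr_by_group(y_true, y_pred, group):
    rates = {}
    for g in np.unique(group):
        t, p = y_true[group == g], y_pred[group == g]
        fpr = np.sum((p == 1) & (t == 0)) / max(np.sum(t == 0), 1)
        fnr = np.sum((p == 0) & (t == 1)) / max(np.sum(t == 1), 1)
        rates[g] = (fpr, fnr)  # a fairness correction should bring these close
    return rates

rng = np.random.default_rng(0)
print(fpr_fnr_by_group(rng.integers(0, 2, 400),
                       rng.integers(0, 2, 400),
                       rng.choice(["White", "non-White"], 400)))
</syntaxhighlight>
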
Perdomo et al. (2023) [https://www.researchgate.net/publication/370001437_Difficult_Lessons_on_Social_Prediction_from_Wisconsin_Public_Schools pdf]
* Paper discusses a system that predicts students' probabilities of on-time graduation
* Prediction is less accurate for White students than for other students

Zhang et al. (2023) [https://learninganalytics.upenn.edu/ryanbaker/ISLS23_annotation%20detector_short_submit.pdf pdf]
* Models developed to detect attributes of student feedback on other students' mathematics solutions, reflecting the presence of three constructs: 1) commenting on process, 2) commenting on the answer, and 3) relating to self
* Models have approximately equal performance for White, African American, and Hispanic/Latinx students

Almoubayyed et al. (2023) [https://educationaldatamining.org/EDM2023/proceedings/2023.EDM-long-papers.18/2023.EDM-long-papers.18.pdf pdf]
* Models predicting reading comprehension ability from middle school students' use of Carnegie Learning's ITS for mathematics instruction, evaluated for how well performance generalizes
* Model trained on a smaller dataset achieves greater fairness in prediction for White and non-White students
* For the model trained on a larger dataset, prediction is more accurate for White students than for non-White students
