Course Grade and GPA Prediction

From Penn Center for Learning Analytics Wiki

Švábenský et al. (2024) [https://educationaldatamining.org/edm2024/proceedings/2024.EDM-posters.82/2024.EDM-posters.82.pdf pdf]
* Classification models predicting whether a grade is worse than the course average (“unsuccessful”) or equal to or better than it (“successful”)
* Investigated bias based on university students' regional background in the context of the Philippines
* Demographic groups defined by which of five locations students accessed online Canvas courses from
* Bias evaluation using AUC, weighted F1-score, and MADD (sketched below) showed consistent results across all groups; no unfairness was observed
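
MADD (Model Absolute Density Distance) compares how differently a model's predicted probabilities are distributed across two groups. A minimal sketch of that general idea, assuming equal-width probability bins and synthetic inputs (the bin count and example data are illustrative, not taken from the paper):

<syntaxhighlight lang="python">
import numpy as np

def madd(probs_g0, probs_g1, n_bins=100):
    """Sum of absolute differences between the two groups' normalized
    histograms of predicted probabilities: 0 = identical distributions,
    2 = completely disjoint distributions."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)  # bin count is an illustrative choice
    d0 = np.histogram(probs_g0, bins=bins)[0] / len(probs_g0)
    d1 = np.histogram(probs_g1, bins=bins)[0] / len(probs_g1)
    return float(np.abs(d0 - d1).sum())

# Two samples from the same distribution should give a small value (max is 2)
rng = np.random.default_rng(0)
print(madd(rng.uniform(size=5000), rng.uniform(size=5000)))
</syntaxhighlight>
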
Lee and Kizilcec (2020) [https://arxiv.org/pdf/2007.00088.pdf pdf]
* Models predicting college success (earning the median grade or above)
* A random forest model performed significantly worse for underrepresented minority students (URM; American Indian, Black, Hawaiian or Pacific Islander, Hispanic, and Multicultural) than for non-URM students (White and Asian), and for male students than for female students
* The fairness of the model for URM and male students, namely demographic parity and equality of opportunity, as well as its accuracy, improved after correcting the decision thresholds from 0.5 to group-specific values (sketched below)
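
The threshold correction can be sketched as post-processing that searches, per group, for the cutoff whose true positive rate is closest to a shared target, approximating equality of opportunity. A minimal sketch (the target rate and candidate grid are assumptions, not values from the paper):

<syntaxhighlight lang="python">
import numpy as np

def group_specific_thresholds(probs, y_true, groups, target_tpr=0.80):
    """For each group, pick the cutoff whose true positive rate is
    closest to a shared target, instead of using 0.5 for everyone."""
    cutoffs = {}
    candidates = np.linspace(0.05, 0.95, 91)  # illustrative search grid
    for g in np.unique(groups):
        positives = probs[(groups == g) & (y_true == 1)]
        tprs = np.array([(positives >= c).mean() for c in candidates])
        cutoffs[g] = candidates[np.argmin(np.abs(tprs - target_tpr))]
    return cutoffs
</syntaxhighlight>

Per-student predictions then use probs >= cutoffs[group] rather than a single 0.5 cutoff.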




Yu et al. (2020) [https://files.eric.ed.gov/fulltext/ED608066.pdf pdf]
* Models predicting undergraduate course grades and average GPA
* Students who are international, first-generation, or from low-income households were inaccurately predicted to get lower course grades and average GPA than their peers (see the sketch below); fairness of the models improved with the inclusion of clickstream and survey data
* Female students were inaccurately predicted to achieve greater short-term and long-term success than male students; fairness of the models improved when a combination of institutional and click data was used
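
The over- and underprediction Yu et al. describe can be quantified as the mean signed error per group: negative values mean the model predicts lower outcomes than students actually achieve. A minimal sketch (the column names are hypothetical):

<syntaxhighlight lang="python">
import pandas as pd

def mean_signed_error(df, pred="predicted_gpa", actual="actual_gpa", group="group"):
    """Mean of (prediction - outcome) per group; negative values
    indicate systematic underprediction for that group."""
    return (df[pred] - df[actual]).groupby(df[group]).mean()
</syntaxhighlight>
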
Riazy et al. (2020)
* Models predicting course outcomes of students in a virtual learning environment (VLE)
* More male students were predicted to pass the course than female students, but this overestimation was fairly small and not consistent across different algorithms
* Among the algorithms, Naive Bayes had the lowest normalized mutual information value and the highest ABROCA value, i.e., the largest area between the groups' ROC curves (sketched below)
* Students with self-declared disability were predicted to pass the course more often, by 16-23 percentage points in the training and test sets
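
ABROCA (absolute between-ROC area) integrates the absolute gap between two groups' ROC curves over the false positive rate axis, so 0 means the curves coincide. A minimal sketch using scikit-learn's roc_curve:

<syntaxhighlight lang="python">
import numpy as np
from sklearn.metrics import roc_curve

def abroca(y_g0, p_g0, y_g1, p_g1):
    """Integrate |TPR_g0 - TPR_g1| over a shared FPR grid on [0, 1]."""
    fpr0, tpr0, _ = roc_curve(y_g0, p_g0)
    fpr1, tpr1, _ = roc_curve(y_g1, p_g1)
    grid = np.linspace(0.0, 1.0, 1001)
    gap = np.abs(np.interp(grid, fpr0, tpr0) - np.interp(grid, fpr1, tpr1))
    return float(gap.mean())  # uniform grid on [0, 1], so mean ~ integral
</syntaxhighlight>
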
Jiang & Pardos (2021) [https://dl.acm.org/doi/pdf/10.1145/3461702.3462623 pdf]
* Predicting university course grades using LSTM
* Roughly equal accuracy across racial groups
* Slightly better accuracy (~1%) across racial groups when including race in model
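
Comparisons like Jiang and Pardos's reduce to computing accuracy separately per group for each model variant (with and without the race feature). A minimal helper for that per-group breakdown:

<syntaxhighlight lang="python">
import numpy as np

def accuracy_by_group(y_true, y_pred, groups):
    """Accuracy computed separately for each demographic group."""
    return {g: float((y_pred[groups == g] == y_true[groups == g]).mean())
            for g in np.unique(groups)}
</syntaxhighlight>
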
Kung & Yu (2020)
[https://dl.acm.org/doi/pdf/10.1145/3386527.3406755 pdf]
* Predicting course grades and later GPA at public U.S. university
* Five algorithms and three metrics (independence, separation, sufficiency) analyzed
* Poorer performance for Latinx students on course grade prediction for all three metrics; poorer performance for Latinx students on GPA prediction in terms of independence and sufficiency, but not separation
* Poorer performance for first-generation students on course grade prediction for independence and separation, and for some algorithms for GPA prediction as well
* Poorer performance for low-income students in several cases, about 1/3 of cases checked
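
The three criteria relate the sensitive attribute A, prediction Ŷ, and outcome Y: independence asks that predictions not depend on A, separation that error rates match across groups given the true outcome, and sufficiency that outcomes match across groups given the prediction. A minimal empirical sketch for binary predictions (group-wise rates to compare, not a significance test):

<syntaxhighlight lang="python">
import numpy as np

def criterion_rates(y_true, y_pred, groups):
    """Per-group rates behind the three criteria for a binary task:
    independence -> P(pred=1 | group), separation -> TPR and FPR per
    group, sufficiency -> precision P(y=1 | pred=1, group)."""
    rates = {}
    for g in np.unique(groups):
        m = groups == g
        yt, yp = y_true[m], y_pred[m]
        rates[g] = {
            "independence": yp.mean(),
            "separation_tpr": yp[yt == 1].mean(),
            "separation_fpr": yp[yt == 0].mean(),
            "sufficiency_precision": yt[yp == 1].mean(),
        }
    return rates
</syntaxhighlight>
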
Jeong et al. (2022) [https://fated2022.github.io/assets/pdf/FATED-2022_paper_Jeong_Racial_Bias_ML_Algs.pdf pdf]
* Predicting 9th-grade math scores from academic performance, surveys, and demographic information
* Despite comparable accuracy, the model tended to overpredict Asian and White students' performance and underpredict Black, Hispanic, and Native American students' performance
* Several fairness correction methods equalized false positive and false negative rates across groups
Sha et al. (2022) [https://ieeexplore.ieee.org/abstract/document/9849852]
* Predicting course pass/fail with random forest on Open University data
* A range of over-sampling methods tested (general setup sketched below)
* Regardless of the over-sampling method used, pass/fail prediction performance was moderately better for male students
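
The general setup: rebalance the training split with an over-sampler, fit the random forest on the rebalanced data, and evaluate on the untouched test split. A minimal sketch using imbalanced-learn on synthetic stand-in data (not the Open University dataset, and not the paper's exact method list):

<syntaxhighlight lang="python">
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import RandomOverSampler, SMOTE

# Synthetic imbalanced stand-in for a pass/fail dataset
X, y = make_classification(n_samples=2000, weights=[0.8], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for sampler in (RandomOverSampler(random_state=0), SMOTE(random_state=0)):
    X_res, y_res = sampler.fit_resample(X_tr, y_tr)  # balance the classes
    model = RandomForestClassifier(random_state=0).fit(X_res, y_res)
    print(type(sampler).__name__, round(model.score(X_te, y_te), 3))
</syntaxhighlight>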


Deho et al. (2023) [https://files.osf.io/v1/resources/5am9z/providers/osfstorage/63eaf170a3fade041fe7c9db?format=pdf&action=download&direct&version=1 pdf]
* Predicting whether course grades will be above or below 0.5
* Better prediction for female students in some courses, better prediction for male students in others
* Generally worse prediction for international students
