Socioeconomic Status

From Penn Center for Learning Analytics Wiki
Yudelson et al. (2014) [https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.659.872&rep=rep1&type=pdf pdf]
* Models discovering generalizable sub-populations of students across different schools to predict students' learning with Carnegie Learning’s Cognitive Tutor (CLCT)
* Models trained on schools with a high proportion of low-SES students performed worse than models trained on schools with a medium or low proportion
* Models trained on schools with a low or medium proportion of low-SES students performed similarly well for schools with a high proportion of low-SES students (a cross-group evaluation of this kind is sketched below)
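A minimal sketch of such a cross-group evaluation: bucket schools by their proportion of low-SES students, train a model per bucket, and test it on every bucket. The column names, cutoffs, and model here are hypothetical placeholders, not the paper's actual Cognitive Tutor pipeline.

<syntaxhighlight lang="python">
# Hypothetical cross-group evaluation: train on schools with one SES
# composition, test on schools with another. Column names and the 0.3/0.6
# cutoffs are assumptions for illustration only.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("students.csv")  # hypothetical: one row per student

# Bucket schools by their share of low-SES students (cutoffs assumed)
school_share = df.groupby("school_id")["is_low_ses"].mean()
bucket = school_share.map(lambda p: "high" if p > 0.6 else "medium" if p > 0.3 else "low")
df["ses_group"] = df["school_id"].map(bucket)

features = ["prior_correct_rate", "hints_used", "attempts"]  # assumed features
for train_grp in ("low", "medium", "high"):
    train = df[df["ses_group"] == train_grp]
    model = LogisticRegression(max_iter=1000).fit(train[features], train["success"])
    for test_grp in ("low", "medium", "high"):
        test = df[df["ses_group"] == test_grp]
        auc = roc_auc_score(test["success"], model.predict_proba(test[features])[:, 1])
        print(f"train={train_grp:<6} test={test_grp:<6} AUC={auc:.3f}")
</syntaxhighlight>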
Yu et al. (2020) [https://files.eric.ed.gov/fulltext/ED608066.pdf pdf]
* Models predicting undergraduate course grades and average GPA
* Students from low-income households were inaccurately predicted to perform worse on both the short-term outcome (final course grade) and the long-term outcome (GPA)
* Fairness of the models improved when only clickstream and survey data were included (a feature-ablation comparison of this kind is sketched below)
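A minimal sketch of such a feature ablation: fit the same classifier with and without demographic/background features and compare the accuracy gap between income groups. All column names here are hypothetical, not the study's feature set.

<syntaxhighlight lang="python">
# Hypothetical feature-ablation check: does dropping background features
# shrink the performance gap between income groups?
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("course_records.csv")  # hypothetical dataset
full = ["clicks_per_week", "survey_effort", "hs_gpa", "parental_income"]
reduced = ["clicks_per_week", "survey_effort"]  # clickstream + survey only

for cols in (full, reduced):
    model = LogisticRegression(max_iter=1000).fit(df[cols], df["passed"])
    df["correct"] = model.predict(df[cols]) == df["passed"]
    acc = df.groupby("low_income")["correct"].mean()  # accuracy per group
    print(cols, "| accuracy gap between groups:", abs(acc[0] - acc[1]))
</syntaxhighlight>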
Yu et al. (2021) [https://dl.acm.org/doi/pdf/10.1145/3430895.3460139 pdf]
* Models predicting college dropout for students in residential and fully online programs
* Whether or not socio-demographic information was included, the models showed worse accuracy and true negative rates for residential students with greater financial need
* The models showed better recall for students with greater financial need, especially for those studying in person (group-wise computation of these metrics is sketched below)
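A minimal sketch of the group-wise metrics involved, with placeholder arrays rather than the study's data (dropout is taken as the positive class):

<syntaxhighlight lang="python">
# Accuracy, true negative rate, and recall computed separately per group.
# Labels and predictions are placeholders; dropout = 1 is the positive class.
import numpy as np
from sklearn.metrics import confusion_matrix

def group_metrics(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "tnr": tn / (tn + fp),     # true negative rate
        "recall": tp / (tp + fn),  # true positive rate; FNR = 1 - recall
    }

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # placeholder labels
y_pred = np.array([0, 1, 1, 0, 0, 1, 0, 1, 1, 0])  # placeholder predictions
high_need = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 0], dtype=bool)  # group flag

for name, mask in (("greater need", high_need), ("lesser need", ~high_need)):
    print(name, group_metrics(y_true[mask], y_pred[mask]))
</syntaxhighlight>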
Kung & Yu (2020)
[https://dl.acm.org/doi/pdf/10.1145/3386527.3406755 pdf]
* Predicting course grades and later GPA at public U.S. university
* Equal performance for low-income and upper-income students in course grade prediction for several algorithms and metrics
* Worse performance on independence for low-income students than high-income students in later GPA prediction for four of five algorithms; one algorithm had worse separation and two algorithms had worse sufficiency
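Independence, separation, and sufficiency are the three standard group-fairness criteria: roughly, predictions should be independent of group membership (independence), error rates should match across groups given the true outcome (separation), and outcomes should match across groups given the prediction (sufficiency). A minimal empirical check for binary predictions, on synthetic placeholder data:

<syntaxhighlight lang="python">
# Empirical check of independence, separation, and sufficiency for a binary
# prediction R, outcome Y, and group attribute A. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(0, 2, 1000)  # group: 0 = high-income, 1 = low-income
Y = rng.integers(0, 2, 1000)  # true outcome (e.g., high later GPA)
R = np.where(rng.random(1000) < 0.8, Y, 1 - Y)  # noisy prediction of Y

for a in (0, 1):
    g = A == a
    # Independence: P(R=1 | A=a) should match across groups
    print(f"group {a}: P(R=1)        = {R[g].mean():.3f}")
    # Separation: P(R=1 | Y=y, A=a) should match across groups for each y
    for y in (0, 1):
        print(f"group {a}: P(R=1 | Y={y}) = {R[g & (Y == y)].mean():.3f}")
    # Sufficiency: P(Y=1 | R=r, A=a) should match across groups for each r
    for r in (0, 1):
        print(f"group {a}: P(Y=1 | R={r}) = {Y[g & (R == r)].mean():.3f}")
</syntaxhighlight>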
Litman et al. (2021) [https://link.springer.com/chapter/10.1007/978-3-030-78292-4_21 html]
* Automated essay scoring models inferring text evidence usage
* For all algorithms studied, less than 1% of the error was explained by whether a student receives free/reduced-price lunch (one way to compute such a share is sketched below)
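One plausible reading of "percent of error explained" is the R² from regressing per-essay scoring error on the free/reduced-price lunch indicator; the sketch below illustrates that computation on synthetic data, though the paper's exact procedure may differ.

<syntaxhighlight lang="python">
# Share of scoring-error variance explained by FRL status, estimated as the
# squared correlation (equal to the R^2 of a one-predictor linear regression).
# Synthetic data; this is an assumed reading of the paper's metric.
import numpy as np

rng = np.random.default_rng(1)
frl = rng.integers(0, 2, 500)               # 1 = receives free/reduced-price lunch
error = rng.normal(0, 1, 500) + 0.05 * frl  # scoring error with a tiny group effect

r = np.corrcoef(frl, error)[0, 1]
print(f"share of error variance explained by FRL: {r**2:.4%}")
</syntaxhighlight>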
Queiroga et al. (2022) [https://www.mdpi.com/2078-2489/13/9/401 pdf]
* Models predicting secondary school students at risk of failure or dropping out
* The model was unable to predict student success (F1 score = 0.0) for students not in a social welfare program (higher socioeconomic status); an F1 of exactly 0.0 arises when a model yields no true positives for a group, as sketched below
* The model had a slightly lower AUC ROC (0.52 instead of 0.56) for students not in a social welfare program (higher socioeconomic status)
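A minimal sketch of how a group-wise F1 of exactly 0.0 can coexist with a near-chance AUC: if the classification threshold is never crossed for a group, there are no true positives, while the underlying scores can still rank students at roughly chance level. Placeholder numbers, not the study's data.

<syntaxhighlight lang="python">
# F1 collapses to 0.0 when the model never predicts the positive class for a
# group; AUC is computed from the raw scores and can still sit near chance.
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score

y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0])                  # placeholder labels
scores = np.array([0.3, 0.5, 0.4, 0.1, 0.3, 0.2, 0.4, 0.5])  # placeholder scores
y_pred = (scores >= 0.6).astype(int)  # threshold never crossed: no positives

print("F1 :", f1_score(y_true, y_pred, zero_division=0))  # 0.0 (no true positives)
print("AUC:", roc_auc_score(y_true, scores))              # ~0.47, near chance
</syntaxhighlight>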
Perdomo et al. (2023) [https://www.researchgate.net/publication/370001437_Difficult_Lessons_on_Social_Prediction_from_Wisconsin_Public_Schools pdf]
* The paper discusses a system that predicts students' probabilities of on-time graduation
* Prediction is more accurate for low-income students than for non-low-income students
Cock et al. (2023) [https://dl.acm.org/doi/abs/10.1145/3576050.3576149?casa_token=6Fjh-EUzN-gAAAAA%3AtpRMYzSAVoQFYNzwY5gwSsrnzHIlI0tUjMq6okwgdcCUmuBMVZEtn8eLO52dCtIYUbrHBV_Il9Sx pdf]
* The paper investigates bias in models designed to identify, early on, middle school students at risk of failing in a flipped-classroom course and in an open-ended exploration environment (TugLet)
* The model performs worse for students from schools with higher socio-economic status in the open-ended environment (false negative rate, the share of truly at-risk students the model misses: FNR = 0.73 for higher SES vs. FNR = 0.57 for medium SES)
