<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://www.pcla.wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Joyce</id>
	<title>Penn Center for Learning Analytics Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://www.pcla.wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Joyce"/>
	<link rel="alternate" type="text/html" href="https://www.pcla.wiki/index.php/Special:Contributions/Joyce"/>
	<updated>2026-05-03T12:26:07Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.37.1</generator>
	<entry>
		<id>https://www.pcla.wiki/index.php?title=Gender:_Male/Female&amp;diff=351</id>
		<title>Gender: Male/Female</title>
		<link rel="alternate" type="text/html" href="https://www.pcla.wiki/index.php?title=Gender:_Male/Female&amp;diff=351"/>
		<updated>2022-06-03T17:24:05Z</updated>

		<summary type="html">&lt;p&gt;Joyce: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Kai et al. (2017) [https://www.upenn.edu/learninganalytics/ryanbaker/DLRN-eVersity.pdf pdf]&lt;br /&gt;
* Models predicting student retention in an online college program&lt;br /&gt;
* J48 decision trees achieved significantly lower Kappa but higher AUC for male students than female students&lt;br /&gt;
* JRip decision rules achieved much lower Kappa and AUC for male students than female students&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Christie et al. (2019) [https://files.eric.ed.gov/fulltext/ED599217.pdf pdf]&lt;br /&gt;
* Models predicting high school dropout&lt;br /&gt;
* The decision trees showed very minor differences in AUC between female and male students&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hu and Rangwala (2020) [https://files.eric.ed.gov/fulltext/ED608050.pdf pdf]&lt;br /&gt;
* Models predicting if a college student will fail in a course&lt;br /&gt;
* The multiple cooperative classifier model (MCCM) was the best at reducing bias (discrimination against male students), performing particularly well for the Psychology course&lt;br /&gt;
* Other models (Logistic Regression and Rawlsian Fairness) performed far worse for male students, especially in Computer Science and Electrical Engineering&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Anderson et al. (2019) [https://www.upenn.edu/learninganalytics/ryanbaker/EDM2019_paper56.pdf pdf]&lt;br /&gt;
* Models predicting six-year college graduation&lt;br /&gt;
* False negative rates were greater for male students than female students when SVM, Logistic Regression, and SGD were used&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Gardner, Brooks and Baker (2019) [https://www.upenn.edu/learninganalytics/ryanbaker/LAK_PAPER97_CAMERA.pdf pdf]&lt;br /&gt;
* Model predicting MOOC dropout, specifically through slicing analysis&lt;br /&gt;
* Some algorithms studied performed worse for female students than male students, particularly in courses with 45% or less male presence&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Riazy et al. (2020) [https://www.scitepress.org/Papers/2020/93241/93241.pdf pdf]&lt;br /&gt;
* Model predicting course outcome&lt;br /&gt;
* Marginal differences were found between groups in prediction quality and in the overall proportion of students predicted to pass&lt;br /&gt;
* The direction of these differences was inconsistent across algorithms&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Lee and Kizilcec (2020) [https://arxiv.org/pdf/2007.00088.pdf pdf]&lt;br /&gt;
* Models predicting college success (defined as median grade or above)&lt;br /&gt;
* Random forest algorithms performed significantly worse for male students than female students&lt;br /&gt;
* The fairness of the model, namely demographic parity and equality of opportunity, as well as its accuracy, improved after correcting the threshold values from 0.5 to group-specific values&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Yu et al. (2020) [https://files.eric.ed.gov/fulltext/ED608066.pdf pdf]&lt;br /&gt;
* Model predicting undergraduate short-term (course grades) and long-term (average GPA) success&lt;br /&gt;
* Female students were inaccurately predicted to achieve greater short-term and long-term success than male students.&lt;br /&gt;
* The fairness of models improved when a combination of institutional and click data was used in the model&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Yu et al. (2021) [https://dl.acm.org/doi/pdf/10.1145/3430895.3460139 pdf]&lt;br /&gt;
* Models predicting college dropout for students in residential and fully online programs&lt;br /&gt;
* Whether or not socio-demographic information was included, the model showed worse true negative rates and worse accuracy for male students&lt;br /&gt;
* The model showed better recall for male students, especially for those studying in person&lt;br /&gt;
* The differences in recall and true negative rates were smaller, and thus fairer, for male students studying online if their socio-demographic information was not included in the model&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Riazy et al. (2020) [https://www.scitepress.org/Papers/2020/93241/93241.pdf pdf]&lt;br /&gt;
* Models predicting course outcome of students in a virtual learning environment (VLE)&lt;br /&gt;
* More male students were predicted to pass the course than female students, but this overestimation was fairly small and not consistent across different algorithms&lt;br /&gt;
* Among the algorithms, Naive Bayes had the lowest normalized mutual information value and the highest ABROCA value&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Bridgeman et al. (2009) [https://www.researchgate.net/publication/242203403_Considering_Fairness_and_Validity_in_Evaluating_Automated_Scoring pdf]&lt;br /&gt;
* Automated scoring models for evaluating English essays, or e-rater&lt;br /&gt;
* E-Rater system performed comparably accurately for male and female students when assessing their 11th grade essays&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Bridgeman et al. (2012) [https://www.tandfonline.com/doi/pdf/10.1080/08957347.2012.635502?needAccess=true pdf]&lt;br /&gt;
* A later version of automated scoring models for evaluating English essays, or e-rater&lt;br /&gt;
* The e-rater system correlated comparably well with human raters when assessing TOEFL and GRE essays written by male and female students&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Verdugo et al. (2022) [https://www.researchgate.net/profile/Jonathan-Vasquez-Verdugo/publication/359176069_FairEd_A_Systematic_Fairness_Analysis_Approach_Applied_in_a_Higher_Educational_Context/links/622ba9e89f7b324634245afa/FairEd-A-Systematic-Fairness-Analysis-Approach-Applied-in-a-Higher-Educational-Context.pdf pdf]&lt;br /&gt;
* An algorithm predicting dropout from university after the first year&lt;br /&gt;
* Several algorithms achieved better AUC for male than female students; results were mixed for F1.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Zhang et al. (in press)&lt;br /&gt;
* Detecting student use of self-regulated learning (SRL) in the mathematical problem-solving process&lt;br /&gt;
* For each SRL-related detector, relatively small differences in AUC were observed across gender groups. &lt;br /&gt;
* No gender group consistently had best-performing detectors&lt;/div&gt;</summary>
		<author><name>Joyce</name></author>
	</entry>
	<entry>
		<id>https://www.pcla.wiki/index.php?title=Black/African-American_Learners_in_North_America&amp;diff=350</id>
		<title>Black/African-American Learners in North America</title>
		<link rel="alternate" type="text/html" href="https://www.pcla.wiki/index.php?title=Black/African-American_Learners_in_North_America&amp;diff=350"/>
		<updated>2022-06-03T17:23:08Z</updated>

		<summary type="html">&lt;p&gt;Joyce: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Kai et al. (2017) [https://www.upenn.edu/learninganalytics/ryanbaker/DLRN-eVersity.pdf pdf]&lt;br /&gt;
* Models predicting student retention in an online college program&lt;br /&gt;
* J48 decision trees achieved much lower Kappa and AUC for Black students than White students&lt;br /&gt;
* JRip decision rules achieved almost identical Kappa and AUC for Black students and White students&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hu and Rangwala (2020) [https://files.eric.ed.gov/fulltext/ED608050.pdf pdf]&lt;br /&gt;
* Models predicting if a college student will fail in a course&lt;br /&gt;
* The multiple cooperative classifier model (MCCM) was the best at reducing bias (discrimination against African-American students), while other models (particularly Logistic Regression and Rawlsian Fairness) performed far worse&lt;br /&gt;
* The level of bias was inconsistent across courses, with MCCM prediction showing the least bias for Psychology and the greatest bias for Computer Science&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Christie et al. (2019) [https://files.eric.ed.gov/fulltext/ED599217.pdf pdf]&lt;br /&gt;
* Models predicting high school dropout&lt;br /&gt;
* The decision trees showed little difference in AUC among White, Black, Hispanic, Asian, American Indian and Alaska Native, and Native Hawaiian and Pacific Islander students&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Lee and Kizilcec (2020) [https://arxiv.org/pdf/2007.00088.pdf pdf]&lt;br /&gt;
* Models predicting college success (defined as median grade or above)&lt;br /&gt;
* Random forest algorithms performed significantly worse for underrepresented minority students (URM; American Indian, Black, Hawaiian or Pacific Islander, Hispanic, and Multicultural) than non-URM students (White and Asian)&lt;br /&gt;
* The fairness of the model, namely demographic parity and equality of opportunity, as well as its accuracy, improved after correcting the threshold values from 0.5 to group-specific values&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Yu et al. (2020) [https://files.eric.ed.gov/fulltext/ED608066.pdf pdf]&lt;br /&gt;
* Model predicting undergraduate short-term (course grades) and long-term (average GPA) success&lt;br /&gt;
* Black students were inaccurately predicted to perform worse in both the short term and the long term&lt;br /&gt;
* The fairness of models improved when either click or a combination of click and survey data, and not institutional data, was included in the model&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Yu et al. (2021) [https://dl.acm.org/doi/pdf/10.1145/3430895.3460139 pdf]&lt;br /&gt;
* Models predicting college dropout for students in residential and fully online programs&lt;br /&gt;
* Whether or not socio-demographic information was included, the model showed worse true negative rates for underrepresented minority (URM; i.e., not White or Asian) students, and worse accuracy for URM students studying in person&lt;br /&gt;
* The model showed better recall for URM students, whether they were in the residential or the online program&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ramineni &amp;amp; Williamson (2018) [https://files.eric.ed.gov/fulltext/EJ1202928.pdf pdf]&lt;br /&gt;
* Revised automated scoring engine for assessing GRE essays&lt;br /&gt;
* E-rater gave African American test-takers significantly lower scores than human raters when assessing their written responses to argument prompts&lt;br /&gt;
* The shorter essays written by African American test-takers were more likely to receive lower scores for showing weakness in content and organization&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Bridgeman et al. (2009) [https://www.researchgate.net/publication/242203403_Considering_Fairness_and_Validity_in_Evaluating_Automated_Scoring pdf]&lt;br /&gt;
* Automated scoring models for evaluating English essays, or e-rater &lt;br /&gt;
* The score difference between human raters and the e-rater was significantly smaller for 11th grade essays written by White and African American students&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Bridgeman et al. (2012) [https://www.tandfonline.com/doi/pdf/10.1080/08957347.2012.635502 pdf]&lt;br /&gt;
* A later version of automated scoring models for evaluating English essays, or e-rater&lt;br /&gt;
* E-rater gave significantly lower scores than human raters when assessing African-American students’ written responses to the issue prompt in the GRE&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Jiang &amp;amp; Pardos (2021) [https://dl.acm.org/doi/pdf/10.1145/3461702.3462623 pdf]&lt;br /&gt;
* Predicting university course grades using LSTM&lt;br /&gt;
* Roughly equal accuracy across racial groups&lt;br /&gt;
* Slightly better accuracy (~1%) across racial groups when including race in model&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Zhang et al. (in press)&lt;br /&gt;
* Detecting student use of self-regulated learning (SRL) in the mathematical problem-solving process&lt;br /&gt;
* For each SRL-related detector, relatively small differences in AUC were observed across racial/ethnic groups. &lt;br /&gt;
* No racial/ethnic group consistently had best-performing detectors&lt;/div&gt;</summary>
		<author><name>Joyce</name></author>
	</entry>
	<entry>
		<id>https://www.pcla.wiki/index.php?title=Black/African-American_Learners_in_North_America&amp;diff=349</id>
		<title>Black/African-American Learners in North America</title>
		<link rel="alternate" type="text/html" href="https://www.pcla.wiki/index.php?title=Black/African-American_Learners_in_North_America&amp;diff=349"/>
		<updated>2022-06-03T17:22:50Z</updated>

		<summary type="html">&lt;p&gt;Joyce: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Kai et al. (2017) [https://www.upenn.edu/learninganalytics/ryanbaker/DLRN-eVersity.pdf pdf]&lt;br /&gt;
* Models predicting student retention in an online college program&lt;br /&gt;
* J48 decision trees achieved much lower Kappa and AUC for Black students than White students&lt;br /&gt;
* JRip decision rules achieved almost identical Kappa and AUC for Black students and White students&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hu and Rangwala (2020) [https://files.eric.ed.gov/fulltext/ED608050.pdf pdf]&lt;br /&gt;
* Models predicting if a college student will fail in a course&lt;br /&gt;
* The multiple cooperative classifier model (MCCM) was the best at reducing bias (discrimination against African-American students), while other models (particularly Logistic Regression and Rawlsian Fairness) performed far worse&lt;br /&gt;
* The level of bias was inconsistent across courses, with MCCM prediction showing the least bias for Psychology and the greatest bias for Computer Science&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Christie et al. (2019) [https://files.eric.ed.gov/fulltext/ED599217.pdf pdf]&lt;br /&gt;
* Models predicting high school dropout&lt;br /&gt;
* The decision trees showed little difference in AUC among White, Black, Hispanic, Asian, American Indian and Alaska Native, and Native Hawaiian and Pacific Islander students&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Lee and Kizilcec (2020) [https://arxiv.org/pdf/2007.00088.pdf pdf]&lt;br /&gt;
* Models predicting college success (defined as median grade or above)&lt;br /&gt;
* Random forest algorithms performed significantly worse for underrepresented minority students (URM; American Indian, Black, Hawaiian or Pacific Islander, Hispanic, and Multicultural) than non-URM students (White and Asian)&lt;br /&gt;
* The fairness of the model, namely demographic parity and equality of opportunity, as well as its accuracy, improved after correcting the threshold values from 0.5 to group-specific values&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Yu et al. (2020) [https://files.eric.ed.gov/fulltext/ED608066.pdf pdf]&lt;br /&gt;
* Model predicting undergraduate short-term (course grades) and long-term (average GPA) success&lt;br /&gt;
* Black students were inaccurately predicted to perform worse in both the short term and the long term&lt;br /&gt;
* The fairness of models improved when either click or a combination of click and survey data, and not institutional data, was included in the model&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Yu et al. (2021) [https://dl.acm.org/doi/pdf/10.1145/3430895.3460139 pdf]&lt;br /&gt;
* Models predicting college dropout for students in residential and fully online programs&lt;br /&gt;
* Whether or not socio-demographic information was included, the model showed worse true negative rates for underrepresented minority (URM; i.e., not White or Asian) students, and worse accuracy for URM students studying in person&lt;br /&gt;
* The model showed better recall for URM students, whether they were in the residential or the online program&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ramineni &amp;amp; Williamson (2018) [https://files.eric.ed.gov/fulltext/EJ1202928.pdf pdf]&lt;br /&gt;
* Revised automated scoring engine for assessing GRE essays&lt;br /&gt;
* E-rater gave African American test-takers significantly lower scores than human raters when assessing their written responses to argument prompts&lt;br /&gt;
* The shorter essays written by African American test-takers were more likely to receive lower scores for showing weakness in content and organization&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Bridgeman et al. (2009) [https://www.researchgate.net/publication/242203403_Considering_Fairness_and_Validity_in_Evaluating_Automated_Scoring pdf]&lt;br /&gt;
* Automated scoring models for evaluating English essays, or e-rater &lt;br /&gt;
* The score difference between human raters and the e-rater was significantly smaller for 11th grade essays written by White and African American students&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Bridgeman et al. (2012) [https://www.tandfonline.com/doi/pdf/10.1080/08957347.2012.635502 pdf]&lt;br /&gt;
* A later version of automated scoring models for evaluating English essays, or e-rater&lt;br /&gt;
* E-rater gave significantly lower scores than human raters when assessing African-American students’ written responses to the issue prompt in the GRE&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Jiang &amp;amp; Pardos (2021) [https://dl.acm.org/doi/pdf/10.1145/3461702.3462623 pdf]&lt;br /&gt;
* Predicting university course grades using LSTM&lt;br /&gt;
* Roughly equal accuracy across racial groups&lt;br /&gt;
* Slightly better accuracy (~1%) across racial groups when including race in model&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Zhang et al. (in press)&lt;br /&gt;
* Detecting student use of self-regulated learning (SRL) in the mathematical problem-solving process&lt;br /&gt;
* For each SRL-related detector, relatively small differences in AUC were observed across racial/ethnic groups. &lt;br /&gt;
* No racial/ethnic group consistently had best-performing detectors&lt;/div&gt;</summary>
		<author><name>Joyce</name></author>
	</entry>
	<entry>
		<id>https://www.pcla.wiki/index.php?title=Latino/Latina/Latinx/Hispanic_Learners_in_North_America&amp;diff=348</id>
		<title>Latino/Latina/Latinx/Hispanic Learners in North America</title>
		<link rel="alternate" type="text/html" href="https://www.pcla.wiki/index.php?title=Latino/Latina/Latinx/Hispanic_Learners_in_North_America&amp;diff=348"/>
		<updated>2022-06-03T17:22:02Z</updated>

		<summary type="html">&lt;p&gt;Joyce: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Anderson et al. (2019) [https://www.upenn.edu/learninganalytics/ryanbaker/EDM2019_paper56.pdf pdf]&lt;br /&gt;
* Models predicting six-year college graduation&lt;br /&gt;
* False negative rates were greater for Latino students when Decision Tree and Random Forest were used&lt;br /&gt;
* White students had higher false positive rates across all models: Decision Tree, SVM, Logistic Regression, Random Forest, and SGD&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Christie et al. (2019) [https://files.eric.ed.gov/fulltext/ED599217.pdf pdf]&lt;br /&gt;
* Models predicting high school dropout&lt;br /&gt;
* The decision trees showed little difference in AUC among White, Black, Hispanic, Asian, American Indian and Alaska Native, and Native Hawaiian and Pacific Islander students&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Lee and Kizilcec (2020) [https://arxiv.org/pdf/2007.00088.pdf pdf]&lt;br /&gt;
* Models predicting college success (defined as median grade or above)&lt;br /&gt;
* Random forest algorithms performed significantly worse for underrepresented minority students (URM; American Indian, Black, Hawaiian or Pacific Islander, Hispanic, and Multicultural) than non-URM students (White and Asian)&lt;br /&gt;
* The fairness of the model, namely demographic parity and equality of opportunity, as well as its accuracy, improved after correcting the threshold values from 0.5 to group-specific values&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Yu et al. (2020) [https://files.eric.ed.gov/fulltext/ED608066.pdf pdf]&lt;br /&gt;
* Model predicting undergraduate short-term (course grades) and long-term (average GPA) success&lt;br /&gt;
* Hispanic students were inaccurately predicted to perform worse in both the short term and the long term&lt;br /&gt;
* The fairness of models improved when either click or a combination of click and survey data, and not institutional data, was included in the model&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Yu et al. (2021) [https://dl.acm.org/doi/pdf/10.1145/3430895.3460139 pdf]&lt;br /&gt;
* Models predicting college dropout for students in residential and fully online programs&lt;br /&gt;
* Whether or not socio-demographic information was included, the model showed worse true negative rates for underrepresented minority (URM; i.e., not White or Asian) students, and worse accuracy for URM students studying in person&lt;br /&gt;
* The model showed better recall for URM students, whether they were in the residential or the online program&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Bridgeman et al. (2009) [https://www.researchgate.net/publication/242203403_Considering_Fairness_and_Validity_in_Evaluating_Automated_Scoring page]&lt;br /&gt;
* Automated scoring models for evaluating English essays, or e-rater&lt;br /&gt;
* E-Rater gave significantly better scores than human raters for 11th grade essays written by Hispanic students and Asian-American students&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Jiang &amp;amp; Pardos (2021) [https://dl.acm.org/doi/pdf/10.1145/3461702.3462623 pdf]&lt;br /&gt;
* Predicting university course grades using LSTM&lt;br /&gt;
* Roughly equal accuracy across racial groups&lt;br /&gt;
* Slightly better accuracy (~1%) across racial groups when including race in model&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Zhang et al. (in press)&lt;br /&gt;
* Detecting student use of self-regulated learning (SRL) in the mathematical problem-solving process&lt;br /&gt;
* For each SRL-related detector, relatively small differences in AUC were observed across racial/ethnic groups. &lt;br /&gt;
* No racial/ethnic group consistently had best-performing detectors&lt;/div&gt;</summary>
		<author><name>Joyce</name></author>
	</entry>
	<entry>
		<id>https://www.pcla.wiki/index.php?title=Black/African-American_Learners_in_North_America&amp;diff=347</id>
		<title>Black/African-American Learners in North America</title>
		<link rel="alternate" type="text/html" href="https://www.pcla.wiki/index.php?title=Black/African-American_Learners_in_North_America&amp;diff=347"/>
		<updated>2022-06-03T17:19:56Z</updated>

		<summary type="html">&lt;p&gt;Joyce: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Kai et al. (2017) [https://www.upenn.edu/learninganalytics/ryanbaker/DLRN-eVersity.pdf pdf]&lt;br /&gt;
* Models predicting student retention in an online college program&lt;br /&gt;
* J48 decision trees achieved much lower Kappa and AUC for Black students than White students&lt;br /&gt;
* JRip decision rules achieved almost identical Kappa and AUC for Black students and White students&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hu and Rangwala (2020) [https://files.eric.ed.gov/fulltext/ED608050.pdf pdf]&lt;br /&gt;
* Models predicting if a college student will fail in a course&lt;br /&gt;
* The multiple cooperative classifier model (MCCM) was the best at reducing bias (discrimination against African-American students), while other models (particularly Logistic Regression and Rawlsian Fairness) performed far worse&lt;br /&gt;
* The level of bias was inconsistent across courses, with MCCM prediction showing the least bias for Psychology and the greatest bias for Computer Science&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Christie et al. (2019) [https://files.eric.ed.gov/fulltext/ED599217.pdf pdf]&lt;br /&gt;
* Models predicting high school dropout&lt;br /&gt;
* The decision trees showed little difference in AUC among White, Black, Hispanic, Asian, American Indian and Alaska Native, and Native Hawaiian and Pacific Islander students&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Lee and Kizilcec (2020) [https://arxiv.org/pdf/2007.00088.pdf pdf]&lt;br /&gt;
* Models predicting college success (defined as median grade or above)&lt;br /&gt;
* Random forest algorithms performed significantly worse for underrepresented minority students (URM; American Indian, Black, Hawaiian or Pacific Islander, Hispanic, and Multicultural) than non-URM students (White and Asian)&lt;br /&gt;
* The fairness of the model, namely demographic parity and equality of opportunity, as well as its accuracy, improved after correcting the threshold values from 0.5 to group-specific values&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Yu et al. (2020) [https://files.eric.ed.gov/fulltext/ED608066.pdf pdf]&lt;br /&gt;
* Model predicting undergraduate short-term (course grades) and long-term (average GPA) success&lt;br /&gt;
* Black students were inaccurately predicted to perform worse in both the short term and the long term&lt;br /&gt;
* The fairness of models improved when either click or a combination of click and survey data, and not institutional data, was included in the model&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Yu et al. (2021) [https://dl.acm.org/doi/pdf/10.1145/3430895.3460139 pdf]&lt;br /&gt;
* Models predicting college dropout for students in residential and fully online programs&lt;br /&gt;
* Whether or not socio-demographic information was included, the model showed worse true negative rates for underrepresented minority (URM; i.e., not White or Asian) students, and worse accuracy for URM students studying in person&lt;br /&gt;
* The model showed better recall for URM students, whether they were in the residential or the online program&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ramineni &amp;amp; Williamson (2018) [https://files.eric.ed.gov/fulltext/EJ1202928.pdf pdf]&lt;br /&gt;
* Revised automated scoring engine for assessing GRE essays&lt;br /&gt;
* E-rater gave African American test-takers significantly lower scores than human raters when assessing their written responses to argument prompts&lt;br /&gt;
* The shorter essays written by African American test-takers were more likely to receive lower scores for showing weakness in content and organization&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Bridgeman et al. (2009) [https://www.researchgate.net/publication/242203403_Considering_Fairness_and_Validity_in_Evaluating_Automated_Scoring pdf]&lt;br /&gt;
* Automated scoring models for evaluating English essays, or e-rater &lt;br /&gt;
* The score difference between human raters and the e-rater was significantly smaller for 11th grade essays written by White and African American students&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Bridgeman et al. (2012) [https://www.tandfonline.com/doi/pdf/10.1080/08957347.2012.635502 pdf]&lt;br /&gt;
* A later version of automated scoring models for evaluating English essays, or e-rater&lt;br /&gt;
* E-rater gave significantly lower scores than human raters when assessing African-American students’ written responses to the issue prompt in the GRE&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Jiang &amp;amp; Pardos (2021) [https://dl.acm.org/doi/pdf/10.1145/3461702.3462623 pdf]&lt;br /&gt;
* Predicting university course grades using LSTM&lt;br /&gt;
* Roughly equal accuracy across racial groups&lt;br /&gt;
* Slightly better accuracy (~1%) across racial groups when including race in model&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Zhang et al. (in press)&lt;br /&gt;
* Detecting student use of self-regulated learning strategies (SRL) in the mathematical problem-solving process&lt;br /&gt;
* For each SRL-related detector, relatively small differences in AUC were observed across racial/ethnic groups. &lt;br /&gt;
* No racial/ethnic group consistently had best-performing detectors&lt;/div&gt;</summary>
		<author><name>Joyce</name></author>
	</entry>
	<entry>
		<id>https://www.pcla.wiki/index.php?title=Black/African-American_Learners_in_North_America&amp;diff=346</id>
		<title>Black/African-American Learners in North America</title>
		<link rel="alternate" type="text/html" href="https://www.pcla.wiki/index.php?title=Black/African-American_Learners_in_North_America&amp;diff=346"/>
		<updated>2022-06-03T17:19:27Z</updated>

		<summary type="html">&lt;p&gt;Joyce: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Kai et al. (2017) [https://www.upenn.edu/learninganalytics/ryanbaker/DLRN-eVersity.pdf pdf]&lt;br /&gt;
* Models predicting student retention in an online college program&lt;br /&gt;
* J48 decision trees achieved much lower Kappa and AUC for Black students than White students&lt;br /&gt;
* JRip decision rules achieved almost identical Kappa and AUC for Black students and White students&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Hu and Rangwala (2020) [https://files.eric.ed.gov/fulltext/ED608050.pdf pdf]&lt;br /&gt;
* Models predicting if a college student will fail in a course&lt;br /&gt;
* The multiple cooperative classifier model (MCCM) was the best at reducing bias (discrimination against African-American students), while other models (particularly Logistic Regression and Rawlsian Fairness) performed far worse&lt;br /&gt;
* The level of bias was inconsistent across courses, with MCCM prediction showing the least bias for Psychology and the greatest bias for Computer Science&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Christie et al. (2019) [https://files.eric.ed.gov/fulltext/ED599217.pdf pdf]&lt;br /&gt;
* Models predicting high school dropout&lt;br /&gt;
* The decision trees showed little difference in AUC among White, Black, Hispanic, Asian, American Indian and Alaska Native, and Native Hawaiian and Pacific Islander students&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Lee and Kizilcec (2020) [https://arxiv.org/pdf/2007.00088.pdf pdf]&lt;br /&gt;
* Models predicting college success (defined as median grade or above)&lt;br /&gt;
* Random forest algorithms performed significantly worse for underrepresented minority students (URM; American Indian, Black, Hawaiian or Pacific Islander, Hispanic, and Multicultural) than non-URM students (White and Asian)&lt;br /&gt;
* The fairness of the model, namely demographic parity and equality of opportunity, as well as its accuracy, improved after correcting the threshold values from 0.5 to group-specific values&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Yu et al. (2020) [https://files.eric.ed.gov/fulltext/ED608066.pdf pdf]&lt;br /&gt;
* Model predicting undergraduate short-term (course grades) and long-term (average GPA) success&lt;br /&gt;
* Black students were inaccurately predicted to perform worse in both the short term and the long term&lt;br /&gt;
* The fairness of models improved when either click or a combination of click and survey data, and not institutional data, was included in the model&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Yu et al. (2021) [https://dl.acm.org/doi/pdf/10.1145/3430895.3460139 pdf]&lt;br /&gt;
* Models predicting college dropout for students in residential and fully online programs&lt;br /&gt;
* Whether or not socio-demographic information was included, the model showed worse true negative rates for underrepresented minority (URM; i.e., not White or Asian) students, and worse accuracy for URM students studying in person&lt;br /&gt;
* The model showed better recall for URM students, whether they were in the residential or the online program&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ramineni &amp;amp; Williamson (2018) [https://files.eric.ed.gov/fulltext/EJ1202928.pdf pdf]&lt;br /&gt;
* Revised automated scoring engine for assessing GRE essays&lt;br /&gt;
* E-rater gave African American test-takers significantly lower scores than human raters when assessing their written responses to argument prompts&lt;br /&gt;
* The shorter essays written by African American test-takers were more likely to receive lower scores as showing weakness in content and organization&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Bridgeman et al. (2009) [https://www.researchgate.net/publication/242203403_Considering_Fairness_and_Validity_in_Evaluating_Automated_Scoring pdf]&lt;br /&gt;
* Automated scoring models (e-rater) for evaluating English essays&lt;br /&gt;
* The score difference between human raters and e-rater was significantly smaller for 11th-grade essays written by White and African American students&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Bridgeman et al. (2012) [https://www.tandfonline.com/doi/pdf/10.1080/08957347.2012.635502 pdf]&lt;br /&gt;
* A later version of the automated essay scoring model e-rater&lt;br /&gt;
* E-rater gave significantly lower scores than human raters when assessing African American students’ written responses to issue prompts in the GRE&lt;br /&gt;
&lt;br /&gt;
Jiang &amp;amp; Pardos (2021) [https://dl.acm.org/doi/pdf/10.1145/3461702.3462623 pdf]&lt;br /&gt;
* Predicting university course grades using LSTM&lt;br /&gt;
* Roughly equal accuracy across racial groups&lt;br /&gt;
* Slightly better accuracy (~1%) across racial groups when race was included in the model&lt;br /&gt;
&lt;br /&gt;
Zhang et al. (in press)&lt;br /&gt;
* Detecting student use of self-regulated learning strategies (SRL) in mathematical problem-solving process&lt;br /&gt;
* For each SRL-related detector, relatively small differences in AUC were observed across racial/ethnic groups.&lt;br /&gt;
* No racial/ethnic group consistently had best-performing detectors&lt;/div&gt;</summary>
		<author><name>Joyce</name></author>
	</entry>
	<entry>
		<id>https://www.pcla.wiki/index.php?title=Self-regulated_Learning&amp;diff=345</id>
		<title>Self-regulated Learning</title>
		<link rel="alternate" type="text/html" href="https://www.pcla.wiki/index.php?title=Self-regulated_Learning&amp;diff=345"/>
		<updated>2022-06-01T16:44:05Z</updated>

		<summary type="html">&lt;p&gt;Joyce: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Zhang et al. (in press) [pdf]&lt;br /&gt;
* Four detectors (i.e., numerical representation, contextual representation, outcome orientation, and data transformation) relating to two cognitive operations (assembling and translating) were built to detect middle school students' use of self-regulated learning in the mathematical problem-solving process.&lt;br /&gt;
* Detectors were built using XGBoost with labels coded from text replays and features distilled from log data and textual responses.&lt;br /&gt;
* For each detector, relatively small differences in AUC were observed across gender and racial/ethnic groups, and no student group consistently had the best-performing detectors&lt;/div&gt;</summary>
		<author><name>Joyce</name></author>
	</entry>
	<entry>
		<id>https://www.pcla.wiki/index.php?title=Algorithmic_Bias_in_Education&amp;diff=344</id>
		<title>Algorithmic Bias in Education</title>
		<link rel="alternate" type="text/html" href="https://www.pcla.wiki/index.php?title=Algorithmic_Bias_in_Education&amp;diff=344"/>
		<updated>2022-06-01T15:57:54Z</updated>

		<summary type="html">&lt;p&gt;Joyce: /* By Algorithm Application */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Algorithmic Bias in Education ==&lt;br /&gt;
&lt;br /&gt;
This Wiki summarizes the current evidence surrounding Algorithmic Bias in Education:&lt;br /&gt;
which groups are impacted, and in which contexts.&lt;br /&gt;
&lt;br /&gt;
For a relatively recent review on this topic, see &lt;br /&gt;
Baker, R.S., Hawn, M.A. (in press) Algorithmic Bias in Education. To appear in &amp;lt;em&amp;gt;International Journal of Artificial Intelligence in Education&amp;lt;/em&amp;gt;&lt;br /&gt;
([https://www.upenn.edu/learninganalytics/ryanbaker/AlgorithmicBiasInEducation_rsb3.7.pdf pdf])&lt;br /&gt;
&lt;br /&gt;
== By Group Impacted ==&lt;br /&gt;
* Race and Ethnicity&lt;br /&gt;
** [[Black/African-American Learners in North America]]&lt;br /&gt;
** [[Latino/Latina/Latinx/Hispanic Learners in North America]]&lt;br /&gt;
** [[Asian/Asian-American Learners in North America]]&lt;br /&gt;
** [[White Learners in North America]]&lt;br /&gt;
** [[Indigenous Learners in North America]]&lt;br /&gt;
** [[Research on Race and Ethnicity Conducted Outside of North America]] &lt;br /&gt;
* [[Gender: Male/Female]]&lt;br /&gt;
* [[Gender: Non-Binary and Transgender Learners]]&lt;br /&gt;
* [[Sexual Orientation]]&lt;br /&gt;
* [[Linguistic Origin]]&lt;br /&gt;
* [[National Origin or National Location]]&lt;br /&gt;
* [[International Students]]&lt;br /&gt;
* [[Native Language and Dialect]]&lt;br /&gt;
* [[Learners with Disabilities]]&lt;br /&gt;
* [[Age]]&lt;br /&gt;
* [[Urbanicity]]&lt;br /&gt;
* [[Parental Educational Background]]&lt;br /&gt;
* [[Socioeconomic Status]]&lt;br /&gt;
* [[Military-Connected Status]]&lt;br /&gt;
* [[Children of Migrant Workers]]&lt;br /&gt;
* [[Religion and Religious Background]]&lt;br /&gt;
* [[Public or Private K-12 School]]&lt;br /&gt;
* [[Intersectional Research]]&lt;br /&gt;
&lt;br /&gt;
== By Algorithm Application == &lt;br /&gt;
* [[At-risk/Dropout/Stopout/Graduation Prediction]]&lt;br /&gt;
* [[Course Grade and GPA Prediction]]&lt;br /&gt;
* [[National and International Examination]]&lt;br /&gt;
* [[Short-term Performance and Learning Gains Prediction]]&lt;br /&gt;
* [[Automated Essay Scoring]]&lt;br /&gt;
* [[Speech Recognition for Education]]&lt;br /&gt;
* [[Other NLP Applications of Algorithms in Education]]&lt;br /&gt;
* [[Student Knowledge Modeling]]&lt;br /&gt;
* [[Engagement and Affect Detection]]&lt;br /&gt;
* [[Self-regulated Learning]]&lt;/div&gt;</summary>
		<author><name>Joyce</name></author>
	</entry>
	<entry>
		<id>https://www.pcla.wiki/index.php?title=Self-regulated_Learning&amp;diff=343</id>
		<title>Self-regulated Learning</title>
		<link rel="alternate" type="text/html" href="https://www.pcla.wiki/index.php?title=Self-regulated_Learning&amp;diff=343"/>
		<updated>2022-06-01T15:57:42Z</updated>

		<summary type="html">&lt;p&gt;Joyce: Created page with &amp;quot;Zhang et al. (in press) [pdf] * Four detectors (i.e., numerical representation, contextual representation, outcome orientation, and data transformation) relating to two cognitive operations (assembling and translating) were built to detect middle school students' use of self-regulated learning in mathematical problem-solving process.  * Detectors were built using XGBoost with labels coded from text replays and features distilled from log data and textual responses. * Com...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Zhang et al. (in press) [pdf]&lt;br /&gt;
* Four detectors (i.e., numerical representation, contextual representation, outcome orientation, and data transformation) relating to two cognitive operations (assembling and translating) were built to detect middle school students' use of self-regulated learning in the mathematical problem-solving process.&lt;br /&gt;
* Detectors were built using XGBoost with labels coded from text replays and features distilled from log data and textual responses.&lt;br /&gt;
* Comparing the AUC across student groups, relatively small differences were observed, and no student group (either gender or racial/ethnic group) consistently had the best-performing detectors&lt;/div&gt;</summary>
		<author><name>Joyce</name></author>
	</entry>
	<entry>
		<id>https://www.pcla.wiki/index.php?title=Algorithmic_Bias_in_Education&amp;diff=342</id>
		<title>Algorithmic Bias in Education</title>
		<link rel="alternate" type="text/html" href="https://www.pcla.wiki/index.php?title=Algorithmic_Bias_in_Education&amp;diff=342"/>
		<updated>2022-06-01T15:52:28Z</updated>

		<summary type="html">&lt;p&gt;Joyce: /* By Algorithm Application */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Algorithmic Bias in Education ==&lt;br /&gt;
&lt;br /&gt;
This Wiki summarizes the current evidence surrounding Algorithmic Bias in Education:&lt;br /&gt;
which groups are impacted, and in which contexts.&lt;br /&gt;
&lt;br /&gt;
For a relatively recent review on this topic, see &lt;br /&gt;
Baker, R.S., Hawn, M.A. (in press) Algorithmic Bias in Education. To appear in &amp;lt;em&amp;gt;International Journal of Artificial Intelligence in Education&amp;lt;/em&amp;gt;&lt;br /&gt;
([https://www.upenn.edu/learninganalytics/ryanbaker/AlgorithmicBiasInEducation_rsb3.7.pdf pdf])&lt;br /&gt;
&lt;br /&gt;
== By Group Impacted ==&lt;br /&gt;
* Race and Ethnicity&lt;br /&gt;
** [[Black/African-American Learners in North America]]&lt;br /&gt;
** [[Latino/Latina/Latinx/Hispanic Learners in North America]]&lt;br /&gt;
** [[Asian/Asian-American Learners in North America]]&lt;br /&gt;
** [[White Learners in North America]]&lt;br /&gt;
** [[Indigenous Learners in North America]]&lt;br /&gt;
** [[Research on Race and Ethnicity Conducted Outside of North America]] &lt;br /&gt;
* [[Gender: Male/Female]]&lt;br /&gt;
* [[Gender: Non-Binary and Transgender Learners]]&lt;br /&gt;
* [[Sexual Orientation]]&lt;br /&gt;
* [[Linguistic Origin]]&lt;br /&gt;
* [[National Origin or National Location]]&lt;br /&gt;
* [[International Students]]&lt;br /&gt;
* [[Native Language and Dialect]]&lt;br /&gt;
* [[Learners with Disabilities]]&lt;br /&gt;
* [[Age]]&lt;br /&gt;
* [[Urbanicity]]&lt;br /&gt;
* [[Parental Educational Background]]&lt;br /&gt;
* [[Socioeconomic Status]]&lt;br /&gt;
* [[Military-Connected Status]]&lt;br /&gt;
* [[Children of Migrant Workers]]&lt;br /&gt;
* [[Religion and Religious Background]]&lt;br /&gt;
* [[Public or Private K-12 School]]&lt;br /&gt;
* [[Intersectional Research]]&lt;br /&gt;
&lt;br /&gt;
== By Algorithm Application == &lt;br /&gt;
* [[At-risk/Dropout/Stopout/Graduation Prediction]]&lt;br /&gt;
* [[Course Grade and GPA Prediction]]&lt;br /&gt;
* [[National and International Examination]]&lt;br /&gt;
* [[Short-term Performance and Learning Gains Prediction]]&lt;br /&gt;
* [[Automated Essay Scoring]]&lt;br /&gt;
* [[Speech Recognition for Education]]&lt;br /&gt;
* [[Other NLP Applications of Algorithms in Education]]&lt;br /&gt;
* [[Student Knowledge Modeling]]&lt;br /&gt;
* [[Engagement and Affect Detection]]&lt;br /&gt;
* Self-regulated Learning&lt;/div&gt;</summary>
		<author><name>Joyce</name></author>
	</entry>
</feed>