Higher Education and the Automation of Inequality through “Inattentional Blindness”: A Survey Study

Published:

ABSTRACT

The accelerating incorporation of, and reliance on, Artificial Intelligence (AI) and data-driven algorithms in sensitive domains such as education, social services, and the criminal justice system have given technology systems the power to automate inequality through algorithmic discrimination. Recent research has identified the narrow technical approaches technologists often take to solve social problems (a phenomenon known as “inattentional blindness”) as one of the main contributors to algorithmic bias and discrimination in AI systems. However, there is little research examining how this phenomenon materializes in higher education. As such, we conducted a quantitative survey study among California State University Dominguez Hills (CSUDH) students in Computer Science and technology-related fields to investigate whether this phenomenon is also observed among university students. Our investigation employed a survey questionnaire designed to quantify the “level of significance” participants attributed to statements from two distinct categories: one associated with important Social & Ethical real-world considerations of technology systems, and the other with Technical & Economic ones. Results demonstrated that students in Computer Science and technology-related fields attribute greater overall significance to Technical & Economic considerations than to important Social & Ethical ones, a compelling indication that “inattentional blindness” may also be a persistent pattern among university students.

Download paper here