May/June 2023 Vol. XXXV No. 4

Are MIT Faculty Serious About Addressing AI Bias?

Bernhardt L. Trout

Together with MIT students, staff, and non-MIT colleagues, I recently made an educational video on the ethics of AI bias. It was posted over a month ago on MIT’s OCW website and can be found here: https://www.youtube.com/watch?v=NgaW_p7gsRc. Given how important MIT faculty, and for that matter the whole AI community, say the issue of AI bias is, I thought the video would be of help. It is deliberately different from all of the other treatments of AI bias: it shows the limits of the technical approach, addresses foundational aspects of bias, and presents the beginning of a holistic solution. While it is a dramatization of a class (clearly with no intended connection to real people or classes), it is also meant to be funny and entertaining.

As I thought that our colleagues in EECS would be particularly interested, I sent them the link to the video, requesting feedback, and sent announcements through EECS channels to reach the broader MIT AI community. Indeed, I did get feedback of a sort: I was called in by an administrator who related that some deans had a concern about an administrative issue. Aside from that non-substantive point, one of our non-EECS colleagues gave me some helpful comments. That was it. MIT faculty are no doubt busy, but too busy to spend some time thinking about a subject that they say is very important? Perhaps my messages went to everyone’s spam folder except the deans’. Or perhaps MIT AI researchers are conducting an extended study of the video and the issues raised therein and are reserving feedback until it is complete. Beyond those possibilities, I can offer some other explanations.

Maybe the video is too literary for the tastes of MIT faculty (at least of STEM faculty). I grant that one would need to go below the surface to see that every detail was chosen with great care; appreciating it would likely require multiple viewings together with thinking through the issues raised, including paying close attention to each word and phrase. Consider, for example, what “You are starting to understand” (not “beginning”) means near the end of the video, in the context of a particular articulation of a solution which is itself shown to be something different from what it first seems. And this comes after the video works through the consequences of the disjunction between the mathematical and the moral, raises students’ deepest longings, and addresses the foundations of political communities. By contrast, we STEM faculty want packaged solutions, dislike ambiguity, and tend to scorn words, viewing mathematical precision as superior to the literary.

Or maybe the breadth of the message goes against the forces pushing us to become ever more narrow. In academics, we get credit for advances in our specific sub-fields, and this is reflected in how we view our curricula and teaching. Our educational officers, to name an overriding example, are centripetal in their approach to curricula, despite this being contrary to the needs and desires of students. And lest we forget, there is always academic turf to protect. When I started Ethics for Engineers in 2009, it was to address ABET’s request to include ethics teaching in the engineering curriculum. I thought we should do this in a serious way, a way that necessitates thinking about these issues within the broad realm of knowledge. By all measures, the students appreciate this approach. In particular, they appreciate the broadening and deepening of their understanding of what it means to be an engineer in society and of how better to think through the ethical decisions they will need to make. Since then, however, and despite what is good for our students, MIT’s education has become narrower. Ironically, with the proliferation of ever more varieties of majors, minors, and other academic options, the courses and course requirements for each of these have become narrower in scope. It seems that no one remembers the vision of the Lewis Report, which, after the horrendous atrocities of WWII made possible by technology, restructured MIT to broaden engineering education.

There is another possibility, likely reinforced by the other two. Maybe there is a perception among MIT faculty that we have to say certain things without believing them. As such, we address aspects of bias along only a few dimensions of our highly multi-dimensional algorithms, so that we can say we are doing something. This is by no means to suggest that we do not genuinely believe that reducing bias is a good thing, only that we treat the heavy lifting as someone else’s task. We might then find it easy to convince ourselves quickly that checking certain boxes is good enough. But it is not good enough. It is not close to enough, as the video explains. The video, not related to any class, is meant to generate serious thought about a serious problem within the broad societal context that encompasses bias. Does the complete lack of engagement by MIT faculty working on AI mean that they are not really serious about addressing AI bias?