AI, Ethics, and Society

Average Workload

6.0 hrs/wk

Average Difficulty

1.6 /5

Average Overall

2.6 /5
CS-6603
AI, Ethics, and Society
Taken Summer 2024
Reviewed on 8/5/2024

Verified GT Email

Workload: 11.5 hr/wk
Difficulty: Easy
Overall: Neutral

Introduction

Background

As of the course start, I had around 3.5 years of experience as a professional software engineer, working specifically on web applications (full-stack .NET + JavaScript). My previous degree was in Engineering (non-CE/non-EE) from the early 2010s.

This was my fifth course in OMSCS (taken concurrently with my sixth, DM / MGT 6311), within the computing systems specialization. I previously completed GIOS (CS 6200, Fall 2021), IIS (CS 6035, Fall 2022), CN (CS 6250, Summer 2023), and HPCA (CS 6290, Fall 2023).

High-Level Review

Overall, I enjoyed the course. While the topical coverage was somewhat surface-level, it spanned a nice breadth of the AI/ML landscape and raised some thought-provoking ideas within the general scope of ethics (a subject often neglected or otherwise underemphasized in STEM, in my opinion). Some logistical hiccups notwithstanding, I thought the administration of the course and the content curation were solid overall (though not flawless, either).

Course Logistics and Time Expenditures

The course is not curved, and follows a strict 10-point scale (i.e., 90.000-100.000% overall for an A, 80-89.999% overall for a B, etc.). The relative weighting of the deliverables is as follows:

  • 40% projects (five, equally weighted)
  • 15% final project
  • 15% class discussions/exercises
  • 10% written critiques
  • 10% midterm exam (closed notes, timed)
  • 10% final exam (open notes, untimed)

I did not keep strict tabs on time expenditures across deliverables, but my best in-hindsight back-estimates are as follows:

  • 1 hour per discussion * 6 discussions total = 6 hours
  • 0.5 hours per exercise * 6 exercises total = 3 hours
  • 15 hours per project (mid-range average across the six projects, including final) * 6 projects = 90 hours
  • 3 hours per written critique * 2 written critiques total = 6 hours
  • 2.5 hours per lecture module * 4 lecture modules total = 10 hours
  • 6 hours of prep per exam (lessons review) * 2 exams = 12 hours

Given an 11-week summer semester, this averages out to 11.5 hours/week [= (6 + 3 + 90 + 6 + 10 + 12) / 11].

The cadence was typically 1-2 discussion/exercise assignments per week. Otherwise, the projects, written critiques, and exams were relatively evenly distributed across the semester in terms of deadlines. Additionally, it was generally possible to work ahead, on average around 2-3 weeks or so (including multiple projects available simultaneously), though I mostly stuck to the weekly schedule myself, so I'm not sure exactly how large that "lookahead window" was in practice.

My general impression of the workload was "steady churn" rather than "intermittent boluses."

Course Deliverables

Weekly Assignments and Written Critiques

The discussions and exercises were my least favorite component of the course. They were pretty easy and mostly a matter of "checking boxes," but felt somewhat tedious nonetheless. Some of the articles examined were inherently interesting, to be fair, but having to comment on two other students' discussion posts felt to me more like "doing work for the sake of doing it" than an "added value" per se. That said, there are tougher ways to earn points, so it really just boiled down to getting it done in a timely manner.

Along these lines, the written critiques were essentially just slightly more involved discussions (with more specific formatting requirements), but not an overly imposing deliverable, either.

Projects

I personally enjoyed the projects overall. They covered a nice range of topics and were a good opportunity to get better acquainted with the "tools of the trade" (i.e., Python and related data-analysis-oriented libraries). All of the projects also had a report-writing component requiring Joyner Documentation Format (JDF), which was pretty easy to handle using Overleaf (accessible with your GT credentials); if you're familiar with writing Markdown, then writing inline LaTeX in Overleaf is not much of a leap.
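For reference, a JDF report in Overleaf amounts to a fairly ordinary LaTeX document; the skeleton below is only an illustrative sketch (the exact class options and commands depend on the JDF template version provided in the course, so treat the specifics here as assumptions, not the official template):

```latex
% Illustrative JDF-style report skeleton; assumes the course-provided
% jdf.cls file is uploaded to the Overleaf project. Class options and
% field names are approximations of the template, not canonical.
\documentclass{jdf}

\author{Your Name}
\title{Project 1 Report}

\begin{document}
\maketitle

\section{Analysis}
Prose, inline math (e.g., $p < 0.05$), figures, and tables all work
as in any standard LaTeX document.

\end{document}
```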

Additionally, the final project gave the option to work in a group or alone; I chose the latter. Its scope/complexity was on par with the other projects, so I didn't think adding more "noise channels" to a fairly linear, non-parallelizable data analysis would be a net benefit; in hindsight, completing and submitting the final project solo vindicated that assumption.

For the most part, the projects were pretty straightforward. There were some slight ambiguities here and there, but I didn't run into any major issues or blockers (as validated by the resulting grades); I simply "did something reasonable" according to what was asked and moved on. It seemed to me that others in the course had a lot of trouble with that approach, for whatever reason. I spent around 2-3 "working sessions" apiece on the projects: first getting acquainted with the data and working through the analysis step-wise via the prompts, then separately reviewing the "raw analysis" and consolidating it into the formatted report for submission. I predominantly did the data analysis in Jupyter Notebooks, though the staff was not "opinionated" about tools (other than generally requiring JDF formatting for the submitted project reports).
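To give a flavor of that step-wise workflow, here is a minimal sketch of the kind of pandas analysis a Jupyter session might contain. The dataset and column names below are hypothetical stand-ins, not from the actual course projects:

```python
import pandas as pd

# Hypothetical dataset standing in for a course-provided CSV
# (the real projects supplied their own data and prompts).
df = pd.DataFrame({
    "group":   ["A", "A", "B", "B", "B"],
    "outcome": [1,   0,   1,   1,   0],
})

# Step-wise analysis: compare outcome rates across groups, a common
# fairness-style breakdown in this kind of coursework.
rates = df.groupby("group")["outcome"].mean()
disparity = rates.max() - rates.min()

print(rates)
print(f"Rate disparity: {disparity:.2f}")
```

Results like these can then be reviewed separately and written up in the JDF report, which matches the two-pass approach (raw analysis first, consolidation second) described above.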

All of that said, there were folks who relied mostly on Excel for these projects (based on what was reported in Ed, etc.), which in my opinion was a self-imposed choice to avoid learning a useful skill set (i.e., Python, pandas, matplotlib, etc.); but to each their own. For me, even with Python and data analysis not being in my "main wheelhouse," I still found it a useful opportunity to develop and refine those skills. (But I also think it's somewhat disingenuous to criticize the course on that basis if somebody consciously took the more expedient route, actively avoiding the opportunity to practice these tools on a relatively well-defined prompt and dataset.)

Exams

The midterm was closed notes and timed, but for the most part just boiled down to having a general intuition around the core concepts from the lectures. The midterm was proctored via Honorlock in Canvas.

The final was open notes and essentially a pseudo-"seventh project," with scope and complexity similar to the previous projects (though requiring less explicit data work/analysis); the deliverable itself was a JDF report (i.e., rather than a timed/proctored Canvas submission).

Closing Thoughts

I do think the course catches some undue criticism, at least in certain regards. There was a lot of clamoring about "ambiguous instructions," but from what I observed (i.e., on Ed and Discord), much of it was overanalyzing and second-guessing of requirements, along with a demonstrable failure to read the provided material critically (including the additional FAQs posted as needed): e.g., the instructions say to plot something a certain way, and questions still come in asking how it's supposed to be plotted; and so on. That's not to say the instructions were stellar by any means, but they were by no means completely lacking in clarity, either (or at least I've personally seen worse elsewhere in OMS to date). For the most part, just do something reasonable that addresses what's been asked; it's really no more complicated than that. I can understand some unease during the first couple of assignments/projects, while grades are still pending and you're trying to get a gauge, but these kinds of questions persisted late into the semester from what I saw (i.e., well after the grading was "battle-tested" by that point). In any case, the actual grading was not overly imposing (and the regrade policy was relatively generous); for the most part, if you did what was asked, that was generally sufficient to score high marks (100% in most cases).

Additionally, I've also seen the (in my view) undue criticism that "it's too easy." This course in particular is very much a "choose your own adventure" affair in terms of how deeply (or not) you decide to dive into the topics and tools. For me, being rusty with Python/pandas, etc. (and being in the systems specialization, with my personal and professional work focused more on applications development than on the data domain), the assignments were a good opportunity to gain skills and explore some interesting topics in that vein, without being overly imposing from a time-requirement standpoint. Otherwise, if you're not interested in ethics (and how it fits into AI/ML), I'm not sure what the point is of taking a course on the topic only to complain about it later... (That's not to say that criticizing administrative issues is off limits by any means; but even on that front, I thought some of the criticism was overblown.)

This is a pretty light course overall and should be amenable to pairing with another, even over the summer; AIES and DM paired together were less work than some single courses I had taken previously (e.g., GIOS or HPCA).