• 3 Credit Hours
smooth-albatross-4565
Took this fall 2025, but had to choose summer 2025 since there's no fall 2025 option on here yet. This class is tedious busy work, repeating the same stuff in every single homework, response, etc. If you have the patience and want an A, and don't care about learning anything, take this class.
simple-crow-7925
This course is easy but has a very heavy assignment workload. The assignments sometimes don't have very clear instructions, so you need to confirm with the TAs and classmates.
It is an easy course, but it requires a lot of time.
lively-badger-3608
This class is easy but a waste of time. You likely won't learn anything meaningful, but note that most of the time spent will be on the assignments. They're much shorter than the typical OMSCS assignment, but they're tedious with ambiguous instructions. Everything else takes minimal effort. Take this if you want a semester off or want to pair this with a harder class.
gentle-penguin-8274
This course is a good introduction to explainability and fairness in ML/AI. It shares its name with the AAAI/ACM Conference on AI, Ethics, and Society (AIES), which specializes in the same topics this course teaches; there is also a journal that publishes the conference proceedings. I mention this because it gives students insight into exactly what you're signing up for. This is not a computer ethics class, nor an AI ethics class in the pure sense. The "society" part is important: the course takes a sociological lens, and fairness here is understood in terms of bias against social groups. Once you understand the purpose of the class, the material all makes sense. I docked a star because some of the assignments felt tedious, but I acknowledge it's hard to test for knowledge without some repetitive tasks. This is a highly important subject that is understudied and rarely applied in industry, but definitely necessary.
Honestly took this class as a break from the intensity of other ones. Was nice to have more of a social life for a semester.
As the other reviews called out, there's a strong focus on DEI and de-biasing datasets. It's absolutely something we should learn about, but it felt like the course just kept repeating itself. I left the class wanting to learn more about AI and ethics, especially in a world now dominated by LLMs. I will say I did like learning about the different strategies for fairness, though.
Realistically, you could spend 3–5 hours a week and slam everything out. The hardest part was actually remembering that I had assignments to do.
I took this course to pair it with KBAI this summer because I do feel that ethics are important and also a big challenge with the exponential growth of AI.
Honestly, I was very disappointed with the course content. Of the many aspects of ethics and the societal impacts of AI, the course was almost entirely focused on DEI and data biases. The course content could have been covered in about three lectures, with the remaining time left to cover the many other aspects of ethics in AI.
The course is composed of many small submissions. There were 22 graded submissions ranging from the syllabus quiz to the Final Exam/Project. All of them except surveys and the midterm were writing projects and you had to use Python or Excel to generate the information you needed for the reports.
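For the Python-based submissions, the analysis typically amounts to summarizing a dataset by group and pasting the results into a written report. A minimal sketch of that workflow with pandas, using a made-up toy dataset (the column names and values are illustrative only, not from the actual assignments):

```python
import pandas as pd

# Hypothetical outcome data; in the real assignments you would load a
# provided CSV instead of building the frame inline.
df = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "F", "M"],
    "approved": [1,    0,   1,   1,   0,   1],
})

# Favorable-outcome rate per group: the kind of small summary table
# the written reports ask you to generate and discuss.
rates = df.groupby("gender")["approved"].mean()
print(rates)
```

The same summary could be produced in Excel with a pivot table, which is why the course accepts either tool.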
Lots of reviews comment on the assignments and the work involved so I am not going to duplicate that effort other than to emphasize two things:
If you answer the questions in the rubrics, even if you are repeating yourself at different points, you will do fine. Give them what they are asking for. In the entire course I missed 5 points on the midterm exam and 2 points on one of the written assignments. This strategy works.
The final exam, which is a project, was the only significant challenge in the course, and 90% of the challenge was finding the topic and data I wanted to use as the basis of my final project submission.
In the end though, I was able to accomplish my goal. I had an easy course for the summer to pair with a more involved course.
Note: Class Taken Summer 2025 - option not yet available.
Summary
I’m in the camp of people who genuinely feel AI ethics is an important topic that deserves serious attention. Unfortunately, this course is, in my opinion, both outdated (especially given the new concerns around LLMs and AI’s impact on society) and very superficial.
It also has some annoyances, mostly due to being outdated (broken links, and assignments asking for modern research when most recent research isn’t applicable to the outdated coursework), which make it too annoying to call a trivially ‘easy’ class.
Details
The course does a decent job introducing basically ‘how to lie with statistics’, basic data privacy, and very basic concepts around legally protected characteristics. It does a poor (but extant) job of introducing anti-biasing techniques specifically for predictive algorithms and word2vec. It does an extremely poor/superficial job of introducing some of the more complex challenges around bias in generative AI and basically nothing on how to try to address the issue. It also doesn’t even attempt to deal with the broader social or economic impacts of AI.
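The anti-biasing material for predictive algorithms centers on group fairness measures such as disparate impact, often checked against the "80% rule". A minimal sketch of that computation, with made-up numbers (the rates and the helper function are illustrative, not from the course materials):

```python
def disparate_impact(unpriv_favorable, unpriv_total,
                     priv_favorable, priv_total):
    """Ratio of favorable-outcome rates: unprivileged group over
    privileged group. Values below 0.8 are commonly flagged under
    the "80% rule"."""
    unpriv_rate = unpriv_favorable / unpriv_total
    priv_rate = priv_favorable / priv_total
    return unpriv_rate / priv_rate

# Illustrative counts: 30/100 favorable outcomes for the unprivileged
# group vs. 50/100 for the privileged group.
di = disparate_impact(30, 100, 50, 100)
print(f"disparate impact = {di:.2f}")  # well below the 0.8 threshold
```

Libraries such as IBM's AIF360 package this metric (and the de-biasing techniques the course touches on) in ready-made form, though the coursework has you compute things more directly.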
There are also lots of small annoyances, such as coursework pointing you to broken links, libraries with broken pip installs (to the point where they suggest you write your own), and requests for research with datasets when most modern research is on topics where you can’t really apply the class concepts.
Really, this class needs a revamp to catch up with the times, and would also benefit from increasing the depth the material was covered in – it can do that and still retain its status as a good burnout-healing class.
Would I recommend it?
If you’ve never run into these topics before and you’re looking for a semester to recover from burnout while still counting as an elective in two tracks, then maybe. However, if you’ve ever had any prior interest in AI ethics, data privacy, or statistics at a practical level, or you don’t want to feel you’ve ‘wasted’ one of your 10 classes on something you really could have covered in more depth some other way, then no.
Summer 2025 review:
I work full time and had travel plans, so I wanted an easy course for the summer. AIES was a good match for me. The lectures were good, the TAs were responsive, and assignments were released two weeks in advance, so I was able to work ahead for the most part. It requires basic Python experience, and you can pick up pandas, matplotlib, and NumPy as you go.
Assignments:
The first four assignments were easy and straightforward. I felt the fifth one was needlessly complex and time-intensive; it could have been omitted or turned into another written critique, since the final project and the fifth assignment were quite similar. Grading is lenient, which was helpful. As with any course, the rubrics are key: answering every point from the rubric mattered more than having exactly the right answers.
Written critiques:
This was my favorite part of the course. I enjoyed the first one, about autonomous vehicles; it made us think deeper and tie our responses into course concepts. The second one was about the What-If Tool; I had to rephrase a lot due to page limits and the need to fit in screenshots.
Final exam: Finding a suitable article was difficult and took more time than the write-up itself. I definitely recommend starting this early and working on it and the final project in parallel.
I took CS 6603 in Summer 2025. Overall, the class was easy to complete, but it didn’t offer much depth. While I think the course has potential, especially given the importance of the subject, it didn’t deliver as much value as I had hoped.
The weekly discussion questions touched on some important and interesting topics. That said, they were often framed in a very specific way, which pushed students to respond in one particular direction. This helped guide the conversation, but also limited opportunities for students to share different perspectives. Since everyone ends up making similar points, the required follow-up responses can feel repetitive. It also raises the question of whether you're replying to a classmate or just reading something generated by an LLM.
The lectures aren’t really necessary. You can get all the key information just by skimming the PowerPoint slides. The required book, Weapons of Math Destruction, is worth reading on its own. However, it isn’t truly integrated into the course. Some discussion topics touch on themes from the book, but since the questions are pre-set and structured so tightly, the book doesn’t add much to the actual coursework.
The coding assignments were straightforward and felt more like Python practice than meaningful ethics work. The instructions were extremely detailed, which made them easy to complete, but there wasn’t much thinking involved beyond following the steps and formatting everything as required. It felt like the assignments were designed more to simplify grading than to encourage exploration or creativity.
The written critiques were the most useful part of the course. Although the questions were still fairly guided, they gave more space to develop your thoughts and apply the material in a more meaningful way. This was the one area where I felt like I was engaging with the subject more directly.
One area I think the course could really improve is in connecting to current events. For example, during the term I took it, there was active debate in Congress over a proposed AI state-law moratorium, which if passed, would’ve prevented states from passing laws relating to AI for 10 years. It would have been a great opportunity to bring that into the course through a discussion topic or assignment. A little more flexibility and awareness of what’s happening in the real world would make the course more engaging and relevant.
In the end, CS 6603 fell short of what I expected from a graduate-level course on such an important topic. It lacked depth, didn’t adapt to current events, and relied too heavily on structured responses and rigid assignments. To be fair, designing and administering an ethics course at this scale is not easy. Encouraging thoughtful discussion, maintaining academic integrity, and keeping material current are all difficult in a large online setting. For students already comfortable with Python or familiar with basic AI ethics concepts, this course offers very little. Even as an easy A, the busy work and lack of meaningful engagement make it hard to recommend. As it stands, the course feels underdeveloped and brings down the overall reputation of the OMSCS program. I truly hope this course can be improved.
I finished the course with an A, achieving a 98.17%.
Background: Bachelor's degree in Computer Science from a university ranked #377 out of 436 National Universities in U.S. News. English is my second language (TOEFL score: 95/120). One year of experience as a full-stack developer.
Overall: This class is lightweight compared to other OMSCS courses. The topics covered in the lectures are easy to understand, but they prompt you to think more deeply about concepts you might otherwise overlook. I highly recommend this class.
Class Discussion/Exercises (59/60): This part is pretty straightforward—just follow the instructions and give your own opinion. Don’t use AI-generated answers. Each exercise took me only about 10–15 minutes to complete. The only point I missed was due to a misunderstanding of the question.
Written Critiques (102/105): This is like a longer version of the Class Discussion/Exercises. The instructions are very clear, and I missed three points because I answered one of the questions incorrectly.
Projects (495/400): I found them interesting and fairly easy. Just follow the instructions and you'll be fine, but make sure you understand what you're doing, as you'll need it for the final project.
Final Project (100/100): If you understand what you did for the projects, this will be easy. I found the dataset on Kaggle and simply applied what I learned in the projects to it.
Mid-Term Exam (71/78): I'm not the best at taking exams, but I'm happy with my score. The downside is that they don't reveal the correct answers, so you can only 'guess' whether your score is good or not.
Final Exam (100/100): I really liked how the final exam was designed—it felt practical and relevant to real-world experience. However, the most challenging part was finding materials that matched the assignment requirements. I recommend starting your search for suitable resources as soon as the assignment is released.