The impact of AI on computer science education

Yesterday I highlighted a study that found that AI and ML, and the expectations around them, are actually causing people to work harder and longer, instead of less. Today, I have another study for you, this time focusing on a more long-term issue: when you use something like ChatGPT to troubleshoot and fix a bug, are you actually learning anything? A professor at MIT divided a group of students into three, and gave them a programming task in a language they did not know (FORTRAN).

One group was allowed to use ChatGPT to solve the problem, the second group was told to use Meta’s Code Llama large language model (LLM), and the third group could only use Google. The group that used ChatGPT, predictably, solved the problem quickest, while it took the second group longer to solve it. It took the group using Google even longer, because they had to break the task down into components.

Then, the students were tested on how they solved the problem from memory, and the tables turned. The ChatGPT group “remembered nothing, and they all failed,” recalled Eric Klopfer, a professor and director of the MIT Scheller Teacher Education Program and The Education Arcade.

Meanwhile, half of the Code Llama group passed the test. The group that used Google? Every student passed.

↫ Esther Shein at ACM

I find this an interesting result, but at the same time, not a very surprising one. It reminds me a lot of my time in high school, when I was part of the first generation whose math and algebra courses were built around using a graphing calculator. Despite being able to solve and graph complex equations with ease thanks to our TI-83, we were, of course, still told to include our “work”, the steps taken to get from the question to the answer, instead of only writing down the answer itself. Since I was quite good “at computers”, and even managed to do some very limited programming on the TI-83, it was an absolute breeze for me to hit some buttons and get the right answers – but since I knew, and know, absolutely nothing about math, I couldn’t for the life of me explain how I got to them.

Using ChatGPT to fix your programming problems feels like a very similar thing. Sure, ChatGPT can spit out a workable solution for you, but since you never see the steps between problem and solution, you aren’t actually learning how to program or how to improve your skills – you’re just hitting the right buttons on a graphing calculator and writing down what’s on the screen, without understanding why or how.

I can totally see how using ChatGPT to generate the boring boilerplate code you’ve written a million times over, or to point you in the right direction while still coming up with your own solution to a problem, can be a good and helpful thing. I’m just worried about a degradation in skill level and code quality, and how society will, at some point, pay the price for that.
