ALG Blog 2: How GenAI Works
Published on:
Case Study reading:
How Generative AI Works and How It Fails —
Summary: Creating an effective chatbot requires a generative text model, typically a transformer neural network pre-trained on large amounts of text. With that foundation in place, a chatbot can be created by fine-tuning the model to follow a user’s instructions.
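To make “pre-trained model” a little more concrete, here is a minimal sketch using the Hugging Face transformers library. GPT-2 is just a small example of a pre-trained transformer, not the model behind any particular chatbot; commercial chatbots start from a much larger base model and add instruction fine-tuning on top of it:

```python
from transformers import pipeline

# Load a small pre-trained transformer language model (GPT-2 as an example).
generator = pipeline("text-generation", model="gpt2")

# The model produces text by repeatedly predicting the next token.
result = generator("A chatbot works by", max_new_tokens=20)
print(result[0]["generated_text"])
```

A base model like this only continues text; the fine-tuning step mentioned in the case study is what turns raw continuation into instruction-following behavior.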
Learning
Spend at least an hour using a chatbot for learning. Here are the guidelines: You can go in depth into one topic or pick a few different topics, but don’t spread yourself too thin. Use a state-of-the-art chatbot. Pick topics that you have actually been planning to learn, so that you have a stake in the outcome. The chatbot should not be your sole resource, and you might want to have a process in place for verifying its outputs. You might find it helpful to peruse strategies for using chatbots for learning, such as those described in Mollick and Mollick (2023), listed in the Bibliography. Reflect on your experience and discuss it with your peers. What worked well, and what didn’t? Do you plan to continue to use chatbots for learning?
My Response: I’ve recently been interested in studying Bossa Nova style guitar playing, and since I have some experience playing that style, I thought it might be interesting to see what ChatGPT could try to teach me. Unsurprisingly, it wasn’t a very productive hour. ChatGPT is not very good at teaching or understanding music. Since music is a fairly abstract subject, I understand that it might be hard to train a model to analyze rhythms and progressions, but I figured most models were built to recognize patterns. It really had no idea how to teach Bossa Nova guitar, instead just regurgitating some mediocre advice. Overall, I’ll stick to getting help from real people, or books written by professionals. I think there are some topics that a chatbot could definitely excel at teaching; I just don’t think the arts are among them.
The use of creative work for training
Generative AI is built using the creative output of journalists, writers, photographers, artists, and others — generally without consent, credit, or compensation. Discuss the ethics of this practice. How can those who want to change the system go about doing so? Can the market solve the problem, such as through licensing agreements between publishers and AI companies? What about copyright law — either interpreting existing law or by updating it? What other policy interventions might be helpful?
My Response: For an artist like myself, AI has made the future look ominous. More and more people are discarding real artists for AI slop, and AI threatens the livelihoods of many artists, musicians, and graphic designers. I think it’s terrible that an artist’s work can be scraped and fed to GenAI without any consequences. I don’t know if there are any good solutions. The best one I can think of is a Spotify-style licensing deal, where artists are compensated every time their work is used in generating an output. Because this territory is so new, I doubt legislation or copyright law will catch up before things get really catastrophic.
Next-word prediction
Discuss why large language models trained to accurately predict the next word in a sequence of words end up exhibiting a range of other capabilities. Read some of the relevant research on this topic and try to summarize it in an intuitively accessible way. As a starting point, see Wei et al. (2022) and Schaeffer et al. (2023), each listed in the Bibliography.
My Response: If a model is trained on all sorts of texts, then it has seen examples covering a wide range of contexts. A large language model, as the name implies, has access to a large, and I mean LARGE, amount of data, so there should be enough information to pull from that it could answer virtually any question posed to it. If it has consumed texts that answer questions, simple or complex, it should theoretically have a good basis for deciding what the next word in an answer should be.
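To make the next-word idea concrete, here is a toy Python sketch of next-word prediction using bigram counts. This is not how real LLMs work internally (they use transformer networks over tokens and learn probabilities rather than raw counts), and the tiny corpus below is made up for illustration, but the objective is the same: given the words so far, predict the most likely next word.

```python
from collections import Counter, defaultdict

# Toy corpus; real models train on trillions of tokens, not a few sentences.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog . "
    "the cat slept ."
).split()

# Count how often each word follows each previous word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # 'cat' -- appears after 'the' more than any other word
print(predict_next("sat"))  # 'on'
```

Scale this same objective up from counting word pairs to a neural network conditioning on thousands of previous tokens, trained on a huge slice of the internet, and you get the surprising range of capabilities that Wei et al. (2022) describe.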
Environmental impact
Discuss the environmental impact of generative AI. Start with the current impact. Keep in mind that in addition to energy, data centers require water, land, metals and minerals. Consider both the global impact (e.g. the impact of energy consumption on climate) and local impacts (water use, environmental degradation of mining sites). See Hampton et al. (2024), Monserrate (2022), Masley (2025), and Ritchie (2025) (each listed in the Bibliography) as starting points. How might this change in the future? There are many unknowns — rate of AI adoption, scaling trends, energy efficiency, resource depletion, regulation, and more. Come up with a few scenarios. What do you think about the argument that AI use will substitute for activities that use even more energy, just as videoconferencing technology sometimes substitutes for air travel? What about the argument that advancing AI will help solve environmental problems, such as through better disaster prevention or making traffic routes more efficient? What, if anything, is distinct about AI’s environmental impact compared to computing in general or other specific digital technologies with a large energy use such as cryptocurrency? How should policymakers respond to the environmental impacts of AI?
My Response: For many, including myself, the environmental impact of AI is a big reason to be put off by using it. The amount of water data centers consume to serve all these prompts is astronomical, and we’re already starting to see the economic impacts of this. There are also concerns about all the precious minerals and other materials needed to build data centers big enough for the enormous amount of computing power required, and ethical concerns over the people subjected to the undesirable aspects of running, maintaining, and improving AI models. As AI bleeds into more aspects of life, I worry that things may start to steadily decline. We’re already seeing how AI has affected hiring through bots that skim and “read” applications. I think AI is like any tool: how it’s used is what ultimately matters. And unfortunately, some of the ways AI has already been used are depraved and inhumane. If policymakers can curb this with legislation, then I see a hopeful future for AI; otherwise it could ultimately be the downfall of intelligent human society.
My New Discussion Question:
Psychological impact: Generative AI has many potentially damaging and harmful uses. What do you think the psychological effects on society will be as GenAI continues to improve? How should policymakers respond to these harmful uses of AI? How can you see GenAI affecting you personally?
Why? I don’t think people are talking enough about the negative aspects of AI usage. Aside from AI art and music that threaten the livelihoods of smaller artists, AI has been used for a lot of deplorable and depraved things in just the last few years of its existence. I think it’s important to consider the damage this can cause to individuals, families, businesses, and society as a whole.
Reflection: This case study was eye-opening for me. I never really understood GenAI; I simply accepted it as a fact, so seeing the history of its development and understanding how it works is really enlightening. Learning about the workers who have to sift through all that toxic filth just to make models commercially viable is terrible. I’m still not a big fan of using GenAI, and I think I’ll continue to distance myself from it after reading this case study.