This week, OpenAI announced a major shift: its AI models will no longer avoid controversial topics but will instead engage with them under a policy of “intellectual freedom.” The company says ChatGPT should “seek the truth together” with users rather than refuse to answer sensitive questions.
For years, AI companies have controlled what their models say, deciding what counts as harmful, what counts as neutral, and whether AI should ever take a stance. Now, OpenAI says its AI should not lie, omit context, or push a moral position.
But what does that actually look like in practice? Will AI become more open and informative, or will this change create new challenges in how we interact with these models?
Guest host Kseniya Kalaur explores it all with our guests:
- Jeffrey Allan, Ph.D., director of the Institute for Responsible Technology and assistant professor in the School of Business and Leadership at Nazareth University
- Mona Seghatoleslami, music director and afternoon host on WXXI Classical