artificial intelligence

The film “2001: A Space Odyssey” turns 50 this year. In honor of the anniversary, we discuss the film and how it portrayed artificial intelligence. Was Stanley Kubrick’s 1968 interpretation of what AI would look like in 2001 accurate? What does the computer, HAL, teach us about ethics and technology?

We discuss those questions and the current state of AI with our guests:

  • Christopher Kanan, Ph.D., assistant professor in the Carlson Center for Imaging Science at RIT
  • Hayley Clatterbuck, assistant professor of philosophy at the University of Rochester
  • Denis Lomakin, computer science research assistant at the University of Rochester
  • Lester D. Friedman, retired professor and former chair of the Media and Society Program at Hobart and William Smith Colleges

Would you ride in a driverless car? GM says its driverless cars could be in fleets by next year, but some polls show Americans are still skeptical of the technology. Are driverless cars safe? Who would be liable in an accident? Advocates say driverless cars could benefit the environment and improve everyday life.

We talk about our future as drivers…or riders. Our guests:

In July, Tesla and SpaceX CEO Elon Musk said artificial intelligence, or AI, is a “fundamental existential risk for human civilization.” Musk wasn’t alone in sharing those concerns, leading many people to ask what will happen when humans develop superintelligent AI. As AI continues to advance, it raises questions about the job sector (Will it eliminate jobs or create them?), the education system (Could robots eventually replace teachers?), human safety (Could AI systems outsmart us and lead to our demise?), and more.

This hour, our panel of experts helps us understand AI and its implications. In studio:

  • Henry Kautz, director of the Institute for Data Science at the University of Rochester
  • Dhireesha Kudithipudi, professor and chair of the graduate program in computer engineering at RIT
  • Matt Huenerfauth, professor of information sciences and technologies at RIT

Note: the recording of this program was corrupted, so we are unable to provide a replay of this show.    

Are human beings about to unleash a scientific nightmare? Right now, researchers are trying to create artificial superintelligence: a machine more intelligent than any human being. Imagine a machine that’s one thousand times smarter than the smartest human. Would that machine continue to carry out its programmed tasks, or could it gain autonomy? Would we have something to fear if it did? Author James Barrat believes ASI is the greatest risk to our future, and he explains in this hour why he wrote the book Our Final Invention.