The Fault in Our Algorithms podcast hosted me for an intro to AI ethics conversation.
The topics we covered include:
➤ Why is philosophy of science so helpful for making progress in AI ethics?
Because philosophers of science have learned a lot about how social and political values influence science, and we can use these lessons to better understand the political nature of machine learning as a discipline.
➤ Why is it helpful to think of AI like we think of knives in the kitchen?
Like kitchen knives, AI can do a lot of damage if used irresponsibly, but not using it at all would be very limiting.
➤ Why is it helpful to think of AI in analogy to marketing?
People often ask me what the greatest AI risk is. I think that's like asking what the greatest risk of marketing is -- it depends on the product.
➤ Why is making sure that humans oversee AI important?
Because human oversight can help mitigate a broad range of risks. Human control includes asking end-users for consent, allowing them to appeal decisions, giving people the ability to intervene in the AI's actions and decisions, etc. Many potential AI harms could be avoided this way, including what are called "short-term risks" (e.g., fairness, privacy) and what are called "long-term risks" (e.g., AI taking over the world).