
Why OpenAI's "AI safety" document is unacceptable

OpenAI recently released a document detailing its AI safety approach. Here are two problems with it, and a call to action: let's demand more from them!


➤ BACKGROUND


👉 OpenAI recently published its AI safety approach.


👉 The main topics they cover are learning from real-world examples, privacy, protecting children, and accuracy.



➤ PROBLEM 1 -- They ignore major risks.


Examples:


😞 Plagiarism/fraud protection - ChatGPT is shaking whole sectors to the core by making plagiarism and fraud much easier. Universities, for example, are at a loss over how to handle AI-written assignments. What is OpenAI doing about that?


😞 Job displacement - ChatGPT threatens the jobs of millions. What is OpenAI doing about that?


😞 Carbon emissions - training and running GPT models consumes enormous amounts of energy, making the technology notorious for its sky-high carbon emissions. What is OpenAI doing about that?



➤ PROBLEM 2 -- Responsibility shifting


😞 They hint that their safety efforts may be insufficient, but instead of holding themselves to a higher standard, they call for regulation to ensure that no one "cuts corners."


😞 For example, they say:


"While we waited over 6 months to deploy GPT-4 in order to better understand its capabilities, benefits, and risks, it may sometimes be necessary to take longer than that to improve AI systems' safety. Therefore, policymakers and AI providers will need to ensure that AI development and deployment is governed effectively at a global scale, so no one cuts corners to get ahead. "


😞 No. If the risks take longer to understand and address, hold off on the release.



➤ CALL TO ACTION


❗️Car manufacturers install seatbelts. Architects design emergency exits.


❗️OpenAI should do much more about its product's risks.


❗️Let's demand more from them!


❗️How? Less starry-eyed hype about ChatGPT. More attention to what OpenAI should do to keep us safe.


❗️Wishy-washy ethics statements are unacceptable.


