A broad perspective on AI regulation: Why AI regulation is more than AI-specific laws (such as the EU AI Act) and why it's already here.
➤ People often think that AI regulation is something to worry about in the future, if and when AI-specific legislation comes into effect. This is a misconception.
Here are three reasons why.
➤ AI regulation is more than AI-specific laws
When thinking of AI regulation, many think of regulation that is specific to AI, like the EU AI Act or the US AI Accountability Act, neither of which has passed yet.
However, AI is also subject to laws that are not specific to AI and are already in effect. For example:
---> Privacy laws:
E.g., Clearview AI, which scraped masses of facial images without consent, has been substantially restricted and has accrued millions of euros in fines for privacy violations in countries all over the world, including Australia, Canada, Greece, Italy, and the US. (links in the comments)
---> Anti-discrimination laws:
E.g., the US Department of Justice sued Meta for violating the Fair Housing Act, arguing that Facebook’s housing ads are delivered in a discriminatory way. The parties reached a settlement that requires Meta to change its housing-ads algorithm. (link in the comments)
➤ AI-specific regulation is already here
While some of the major AI legislation efforts are still ongoing, others are already in effect or will take effect soon.
Some examples (links in the comments):
---> China passed a law regulating algorithmic recommendation services (already in effect)
---> The US State of Maine passed a law to restrict the use of facial recognition by government authorities.
---> New York City passed a law prohibiting employers from using AI tools for recruiting, hiring, and promotion decisions unless the tools have undergone a bias audit (will come into effect on Jan. 2, 2023)
➤ Even setting aside the regulation that is already in effect, the time to start making AI responsible is now.
It is clear that AI-specific regulation is coming soon and that non-AI-specific regulation will increasingly come into play in litigation. The longer companies wait to increase their AI ethics maturity, the more expensive it will be to become compliant later.
Not to mention that the longer they wait, the greater the risk of causing harm to people, society, and the environment.
➤ See the discussion on LinkedIn here