How can AI cause harm, and which harms are the most important? In particular, which AI risks are end-users most concerned about? Results from a survey I ran last week.
➤ I am involved in academic research to highlight the needs and wants of AI-impacted communities. Currently, I am focusing on AI-related risks posed by e-commerce platforms, such as Amazon and Alibaba.
➤ Last week, I posted a draft of a survey on LinkedIn. I will be using the feedback I got to improve the survey I will use in my research.
➤ Here are the results:
Respondents cared most about automated decisions, with data rights a close second and general harms last:
---> Automated decisions (4.7/5) – respondents have a strong preference for notification, explanation, and consent regarding automated decisions. The desire for each of these was about the same.
---> Data rights (4.6/5) – respondents have a strong preference for the ability to consent to data collection, access their data, correct inaccuracies, and erase it. The desire for each of these was about the same.
---> Harms (4/5) – respondents cared most about harms due to data breaches (4.7), then risk of discrimination (4.1), then general risks to end-users (3.97), platform abuse (3.8), and harms to non-users (3.63).
➤ Demographics: 30 respondents, mostly men and mostly North Americans.
➤ I already have many ideas for changes based on the wonderful feedback I received over the last few days. Additional comments are very welcome!
➤ You can read the discussion on LinkedIn here.