On March 26, 2019, Google formed a council responsible for considering ethical issues surrounding its artificial intelligence projects. A slew of personal attacks against members of the council is believed to be the main reason the group was shut down just over a week later.
On April 4, Google announced that the Advanced Technology External Advisory Council (ATEAC) would be dissolved. One member had been attacked over alleged statements about gender identity, while another was accused of trying to steer Google back toward military projects.
Google's official statement on the matter reads as follows:
It’s become clear that in the current environment, ATEAC can’t function as we wanted. So we’re ending the council and going back to the drawing board. We’ll continue to be responsible in our work on the important issues that AI raises, and will find different ways of getting outside opinions on these topics.
Gathering external opinions on internal AI work was meant to serve as a check on questionable developments by Google's engineers. Instead, it became a liability for the company because of questions raised over who the council members were.
The quick demise began when Alessandro Acquisti, a Carnegie Mellon University professor of Information Technology and Public Policy, publicly declined to serve. He had been listed as the first member of the group in documents that have since been pulled from Google's site.
I'd like to share that I've declined the invitation to the ATEAC council. While I'm devoted to research grappling with key ethical issues of fairness, rights & inclusion in AI, I don't believe this is the right forum for me to engage in this important work. I thank (1/2)

— alessandro acquisti (@ssnstudy) March 30, 2019
Perhaps the debate over the credibility of the council's members can now be set aside so that Google can form a genuinely effective ethics council going forward. Just because Google can bring many innovative AI technologies to life does not mean it necessarily should.
Looking ahead, some form of oversight of AI development will be needed to ensure it is used for the right purposes. Internal rules and regulations may work well for some projects, but history shows there is always someone willing to cross the boundaries of what is socially acceptable.