
To round out our artificial intelligence (AI) series, which first explained how AI works and then debated AI's role in health care, we discuss the regulation of AI and how companies can create ethical AI.

Regulation has struggled to catch up with technology, and its relationship with artificial intelligence (AI) is no different. Some members of Congress have tried to keep pace with the technology. Senators Ron Wyden (D-OR), Cory Booker (D-NJ), and Representative Yvette Clarke (D-NY) introduced the Algorithmic Accountability Act this year, which would, under the Federal Trade Commission’s authority, require companies to perform “impact assessments” on their own algorithmic decision-making systems. This bill joins others targeting AI, such as Representative John Delaney (D-MD) and Senator Maria Cantwell’s (D-WA) FUTURE of Artificial Intelligence Act of 2017 and Senator Brian Schatz’s (D-HI) AI in Government Act of 2018.

Since none of these bills has yet been signed into law, the onus for now is on companies to self-regulate and define best practices for ethical AI. Some, however, have argued that self-regulation will not work, underscoring the importance of government and public involvement. Yoshua Bengio, an AI researcher who this year earned the prestigious Turing Award in computing, said: “Do you think that voluntary taxation works? It doesn’t. Companies that follow ethical guidelines would be disadvantaged with respect to the companies that do not. It’s like driving. Whether it’s on the left or the right side, everybody needs to drive in the same way; otherwise, we’re in trouble.”

Yochai Benkler, co-director of the Berkman Klein Center for Internet &amp; Society at Harvard University, agrees with Bengio that industry cannot direct the safeguards of AI. He said governments should instead demand that companies share data in protected databases, with access granted to insulated, publicly funded researchers. He also called for limiting industry participation in policy panels, public funding of organizations working to ensure that AI is fair and beneficial, and public investment in independent research.

According to TechCrunch, Bengio claims the solution is open and structured discussion, followed by strong, clear regulation enacted internationally. He recommends December’s Montreal Declaration as a starting point for companies to follow since it is the first initiative to involve the public and a broad range of researchers, including those in the humanities and social sciences. A set of ten principles geared towards developing an ethical framework for the deployment of AI, the Declaration calls for well-being, respect for autonomy, protection of privacy and intimacy, solidarity, democratic participation, equity, diversity inclusion, prudence, responsibility, and sustainable development.

After convening with 52 experts, the European Union (EU) also released its own Ethics Guidelines for Trustworthy AI in April. The guidelines include seven requirements the EU thinks future AI systems should meet: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental well-being; and accountability.

While the EU guidelines and the Declaration provide principles that acknowledge bias issues, the AI Now Institute, a research institute examining the social implications of artificial intelligence, offers four action-based recommendations to address bias and discrimination. First, the Institute says remedying bias is impossible without transparency, which begins with tracking and publicizing where AI systems are used. Second, rigorous testing should be required across the lifecycle of AI systems, including pre-release trials, independent auditing, and ongoing monitoring. Third, technical debiasing requires the input of experts across disciplines, such as the social sciences and humanities. Fourth, the methods for addressing bias and discrimination in AI need to expand to include assessments of whether certain systems should be designed at all, based on a thorough risk assessment.

As Softheon’s data scientists apply AI to fraud detection, payment predictions, and social determinants of health, they, too, grapple with the proper collection, handling, and use of data. Data scientists Millicent Mulieri, Sam Zurl, and Rachel Wang all agree that one of the first steps is to acknowledge the data’s assumptions. Mulieri, who worked on a model that used social determinants to predict whether a member would make their initial binder payment and to help at-risk members, said, “Whatever the results the model gives, it’s the scientist’s job to interpret it well.”

This human check is even more important when the algorithm is in the business of helping others. She continued, “I believe that AI is an incredibly valuable tool for better understanding our members and their needs. The more data we receive from our issuers, the better, more accurate models we can build, and the more we can learn about the community we serve.”
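A model like the one Mulieri describes could be sketched as a simple classifier over social-determinant features. The sketch below is purely illustrative, not Softheon’s actual implementation: the features and data are synthetic inventions, and the probability threshold of 0.5 is an assumed cutoff. It uses scikit-learn’s logistic regression to show how predicted probabilities, rather than hard yes/no labels, leave room for the human interpretation the team emphasizes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical social-determinant features (invented for illustration),
# e.g. income bracket, housing stability, transportation access.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))

# Synthetic label: 1 = member made their initial binder payment.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Predicted probabilities let staff prioritize outreach to at-risk members;
# the scientist still interprets the scores rather than acting on them blindly.
pay_prob = model.predict_proba(X_test)[:, 1]
at_risk = pay_prob < 0.5  # assumed cutoff for flagging members

print(f"test accuracy: {model.score(X_test, y_test):.2f}")
print(f"members flagged for outreach: {at_risk.sum()}")
```

Because the output is a probability, the model supports the human check described above: a flagged member is a prompt for outreach and review, not an automatic decision.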

The views and opinions expressed by the authors on this blog website and those providing comments are theirs alone, and do not reflect the opinions of Softheon. Please direct any questions or comments to


