Carnegie Mellon University receives $10 million from a law firm to set up a research center exploring Artificial Intelligence ethics.
As Artificial Intelligence (AI) applications become more interwoven with emerging technologies, the question of ethics is not far behind. Numerous institutions have been formed to tackle the issue of “best practices,” and they all claim to be doing what is best for the future of humanity.
Carnegie Mellon University is the latest institution to take up the question of artificial intelligence ethics after receiving a $10 million “gift” from global law firm K&L Gates LLP.
Now, you may be wondering what a group of some 2,000 lawyers would know about ethics; however, K&L Gates has given assurances that “most of the gift will be used to support the university in furthering scientific and scholarly research and education about the ethical and policy issues that arise from advances in artificial intelligence and other computational technologies.”
Even though the $10 million is being labeled as a gift, it does appear that K&L expects some perks in return, including influence over policymaking. Part of the $10 million will go toward “a biennial international conference to share new research and scholarship, as well as offer a forum for academics and policymakers on critical issues and educate the public.”
What was K&L Gates’s motivation for a $10 million gift? Whom does the research benefit? Is it really for the common good, or will it make certain stakeholders rich while their policies become ever more powerful and influential, enough even to make them the future legal experts and authority on AI? It’s an interesting market to potentially tap into.
This sounds very much like what the Partnership on AI (between Amazon, Google, DeepMind, Facebook, Microsoft, and IBM) is trying to achieve in terms of being an ultimate authority on both the ethical and technological aspects of AI.
Read More: Partnership on AI vs OpenAI: Consolidation of Power vs Open Source
Research into AI is divided between those who wish to make it open source and those who wish to consolidate power and findings in order “to educate” the public on “best practices,” which will later be used for policymaking.

In other words, groups like OpenAI want all research to be open and available for public discussion, while groups like the Partnership on AI, and now K&L Gates, are looking to set up private discussions, releasing information to the public only after the research is concluded and implemented in policymaking.
“Our future will also be influenced strongly by how humans interact with technology, how we foresee and respond to the unintended consequences of our work, and how we ensure that technology is used to benefit humanity, individually and as a society,” said Carnegie Mellon president Subra Suresh.
All organizations publicly researching artificial intelligence claim to be working for the benefit of society. The concerning issue is what exactly their methods are and to whom they ultimately owe allegiance.
When a group of almost 2,000 lawyers across five continents puts up millions of dollars for research into artificial intelligence ethics, is that reason for suspicion?