The recent uproar surrounding language tools like ChatGPT has sent numerous organizations rushing to issue recommendations for the responsible use of their products. For example, the online publishing platform Medium has issued a statement on AI-generated writing that encourages “transparency” and “disclosure.”
My own organization has created a page of frequently asked questions (FAQs) about generative AI, which encourages educators to make “wise and ethical use” of AI and chatbots.
In light of this week’s release of the more powerful GPT-4, which risks becoming a misinformation and propaganda machine, these ethical measures already seem quaint. OpenAI reports that GPT-4 passed a simulated bar exam with a score in the top 10% of test takers, whereas GPT-3.5 scored in the bottom 10%.
Uncontrolled innovation
ChatGPT’s power comes from a supercomputer and an advanced cloud computing platform, both developed with financial backing from Microsoft. The partnership between Microsoft and OpenAI will accelerate the worldwide rollout of generative AI products through Microsoft’s Azure cloud platform.
It may be a coincidence, but GPT-4’s release came less than two months after Microsoft disbanded a team working on ethics and societal issues. Team members frustrated by the decision said it stemmed from pressure from Microsoft’s C-suite, which stressed getting AI products “into the hands of customers at a very high speed.”
“Move fast and break things,” the Silicon Valley motto that was once mocked, may be coming back into fashion.
For the time being, Microsoft will continue to operate its Office of Responsible AI. But as this high-stakes game of uncontrolled innovation rages on, it seems fair to ask what exactly is meant by “responsible innovation.”
Responsible innovation
When I asked ChatGPT to define responsible innovation, it responded: “the process of developing and implementing new technologies, processes, or products in a way that addresses ethical, social, and environmental concerns.” It added that this entails considering the potential effects and risks of innovation on a variety of stakeholders, such as customers, employees, communities, and the environment.
ChatGPT’s definition is accurate, but it lacks context. Whose values are these, and how do we put them into practice? In other words, who is responsible for responsible innovation?
Over the past decade, a variety of organizations, including corporations, think tanks, and universities, have launched responsible innovation initiatives aimed at anticipating and mitigating the harms of technological progress.
In 2018, Google established a responsible innovation team meant to draw on “experts in ethics, human rights, user research, and racial justice.” The group’s most significant contribution to Google has been the creation of its responsible AI principles. Beyond those principles, however, the company’s ethical record is open to question.
Google’s collaboration with the United States military, along with its treatment of two former employees who were committed to ethical principles, has raised concerns about the company’s ability to police itself.
Indeed, grass-roots efforts by Google’s own employees have been the company’s most significant contribution to responsible innovation. This suggests that responsible innovation may have to be built from the ground up, an ambitious goal at a time of widespread layoffs across the technology industry.
Ethics-washing
According to the Code of Ethics and Professional Conduct of the Association for Computing Machinery (ACM), computing professionals have a responsibility to ensure that their innovations benefit society. But what motivates tech workers to be “good” without support from their supervisors, guidance from ethics experts, or regulation from government agencies? Can we trust the tech industry to audit itself?
Another problem with self-auditing is “ethics washing,” which occurs when businesses pay only lip service to ethics. Meta’s efforts at responsible innovation are a case in point.
In June 2021, Meta’s top product design executive praised the responsible innovation team she had helped establish in 2018, along with Meta’s “commitment to making the most ethically responsible decisions possible, every day.” By September 2022, the team had been disbanded.
Today, responsible innovation survives at the Meta store as a marketing catchphrase. Meta’s Responsible AI team was likewise disbanded in 2021, its members absorbed into the Social Impact group, which helps charitable organizations use Meta’s products.
This shift from responsible innovation to social impact is a form of ethics washing: a strategy that deflects attention from unethical behavior by pointing to charitable giving. It is therefore crucial to distinguish the responsible development of technology from “tech for good,” the now-common philanthropic public relations slogan.
Responsible innovation beyond profit
It should come as no surprise that the most sophisticated demands for responsible innovation have come from outside corporate culture.
A white paper published by the Information and Communications Technology Council (ICTC), a Canadian non-profit, outlines principles grounded in values such as self-awareness, fairness, and justice: concepts more familiar to philosophers and ethicists than to CEOs and startup founders.
The ICTC’s principles ask technology developers to go beyond mitigating negative consequences and work actively to reverse societal power imbalances.
How do these principles apply to the latest advances in generative AI? When OpenAI claims to be “developing technologies that empower everyone,” who exactly counts as “everyone”? And in what circumstances will this supposed “power” be exercised?
These questions echo the work of scholars such as Ruha Benjamin and Armond Towns, who are skeptical of the word “everyone” in such contexts and who question the very identity of the “human” in human-centered technology.
Considerations like these would bring the AI race to a halt, which might not be such a bad outcome after all.
Value conflicts
In tech, there is a persistent tension between market value and moral values. Responsible innovation initiatives originally emerged to address these conflicts, but such efforts have lately been sidelined.
The reaction of conservative pundits in the United States to the recent failure of Silicon Valley Bank offers tangible evidence of this tension. Several prominent Republicans, including Donald Trump, have wrongly blamed the collapse on the bank’s “woke” outlook and its commitment to responsible investing and equity initiatives.
In the words of Home Depot co-founder Bernie Marcus, “these banks are badly run because everybody is focused on diversity and all of the woke issues” rather than on what Trump calls “common sense business practices.”
The future of responsible innovation may well depend on whether so-called “common sense business practices” can make room for “woke” concerns: that is, for ethical, social, and environmental ones. If ethical concerns can simply be brushed aside as “woke,” the future of responsible innovation looks about as bright as that of the CD-ROM.