Microsoft co-founder Bill Gates has pushed back against the open letter signed by Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, which raised concerns about the privacy risks and employment challenges posed by AI chatbots.
The open letter warned that the rise of AI could pose an existential threat to the human race, saying that advanced AI could represent a profound change in the history of life on Earth.
It argued that such a change should be planned for and managed with commensurate care and resources, and lamented that this level of planning and management is not happening.
Instead, it said, recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control.
The open letter was published on March 29, 2023, by the Future of Life Institute, an international nonprofit organization that aims to reduce the risk that advanced artificial intelligence could pose an existential threat to humanity.
The letter went viral and drew the attention of Elon Musk, the CEO of Tesla, SpaceX, and Twitter, and Steve Wozniak, the co-founder of Apple Inc. Both signed it and now publicly back its concerns. The letter calls for a six-month pause on the development of AI tools.
The letter now has more than 14,000 signatures. The tech giants, however, have rejected its demands, arguing that a six-month pause in development would cause their AI products to fall behind in the marketplace.
Most of these tools are still running as beta versions, and the companies behind them are racing one another to launch the most advanced release. A beta version is used to catch bugs and test the functionality of newly released features.
How Did The AI Race Begin?
The first big hit was OpenAI's ChatGPT, which now dominates the space. Its success drew other tech giants into the race to develop and launch their own AI tools, such as Microsoft's Bing AI chatbot and Alphabet's Apprentice Bard.
Presently, a majority of AI tools are running in their beta version, the version in which an application or program is initially launched. A beta version is not the completed product.
Bill Gates was among the first to come forward in defense of AI tools. He said he does not believe that asking one particular group to halt will solve the challenge, adding that a pause would be difficult to enforce across a global industry.
Gates conceded, however, that the industry needs more research to identify the tricky areas.
What Do The AI Leaders Think?
Pioneers in the field of AI development believe that current AI technologies have not yet grown to a level that poses an imminent threat. Anthropic, an American AI safety and research company, said in a blog post that AI systems could become far more powerful over the next ten years.
The post added that it would be ideal to build guardrails as soon as possible to reduce future risks.
In the same post, Anthropic acknowledged that safety measures are necessary but noted that no one yet has a clear idea of what exactly those measures should be. Anthropic has not said whether it supports the six-month pause.
Industry leader ChatGPT has its own security and privacy concerns. Italy has already imposed a temporary ban on ChatGPT over privacy issues stemming from an OpenAI data leak.
Sam Altman, the CEO of OpenAI, has proposed some measures for a good AGI future.
Sam Altman’s Ideas For A Better AGI (Artificial General Intelligence) Future:
- An effective international regulatory framework that includes democratic governance
- The technical ability to align a superintelligence
- Coordination among the leading AGI efforts
Altman mentioned these in a Twitter post.
What Has The Government Done Regarding the AI Privacy Concerns?
The U.S. Federal Trade Commission (FTC) recently released guidance for labs developing AI chatbots. In addition, several American states have passed privacy laws addressing the same concerns.
These laws prohibit companies from requiring users to hand over personal details to an application or program. They also mandate an opt-out option so that a user's personal data is not used for AI-automated decisions. States with such laws include Colorado, California, Connecticut, Virginia, and Utah.