Less than a year after OpenAI quietly signaled it wanted to do business with the Department of Defense, procurement documents obtained by The Intercept show that U.S. Africa Command (AFRICOM) believes access to OpenAI’s technology is “essential” to its mission.
The Sept. 30 document lays out AFRICOM’s rationale for purchasing cloud computing services directly from Microsoft as part of the $9 billion Joint Warfighting Cloud Capability contract, rather than seeking another provider on the open market. “USAFRICOM operates in a dynamic and evolving environment where IT plays a critical role in achieving mission objectives,” the document states, including “the vital mission of supporting African Mission Partners [and] USAFRICOM joint exercises.”
The document, labeled Controlled Unclassified Information and marked “FEDCON” to indicate that it is not to be distributed beyond government employees or contractors, shows that AFRICOM’s request was approved by the Defense Information Systems Agency. The price of the purchase is redacted, but an approval document states its value is less than $15 million.
Like the rest of the Pentagon, AFRICOM, which oversees the U.S. military’s activities across Africa, including local military cooperation with U.S. allies, has a growing appetite for cloud computing. The Defense Department already purchases cloud computing access from Microsoft through the Joint Warfighting Cloud Capability project. The new document reflects AFRICOM’s desire to immediately buy Microsoft Azure cloud services, including OpenAI software, without the contractual red tape of considering other vendors. AFRICOM notes that “the ability to support advanced AI/ML workloads is crucial, including services for search, natural language processing, [machine learning,] and unified analytics for data processing.” According to AFRICOM, Microsoft’s Azure cloud platform, which includes a suite of tools provided by OpenAI, is the only cloud provider that can meet its needs.
Microsoft began selling OpenAI’s GPT-4 large language model to defense customers in June 2023. Following revelations earlier this year that OpenAI had changed its mind about working with the military, the company announced a cybersecurity collaboration with DARPA in January and said its tools would be used for an unspecified veteran suicide prevention initiative. In April, Microsoft pitched the Defense Department on using OpenAI’s image generation tool DALL-E in its command and control software. But the AFRICOM document marks the first confirmed purchase of OpenAI’s products by a U.S. combatant command whose mission includes killing.
OpenAI’s stated corporate mission remains “to ensure that artificial general intelligence benefits all humanity.”
The document states that “OpenAI tools” are among the “unique capabilities” provided by Microsoft and are “essential in ensuring the cloud services provided align with USAFRICOM’s mission and operational needs. … Without access to Microsoft’s integrated suite of AI tools and services, USAFRICOM would face significant challenges in analyzing vast amounts of data and extracting actionable insights. … This could lead to delays in decision-making, reduced situational awareness, and decreased agility in responding to dynamic and evolving threats across the African continent.” Defense and intelligence agencies around the world have expressed keen interest in using large language models to sift through troves of information and to quickly transcribe and analyze audio data from interrogations.
Microsoft invested $10 billion in OpenAI last year and now, in addition to reselling its technology, wields significant influence over the company. In February, The Intercept and other digital news outlets sued Microsoft and OpenAI for using their journalism without permission or credit.
“OpenAI does not have a partnership with U.S. Africa Command,” an OpenAI spokesperson told The Intercept, referring questions to Microsoft. Microsoft did not immediately respond to a request for comment, nor did a spokesperson for AFRICOM.
“It is deeply alarming that OpenAI tools are explicitly stated to be used for ‘unified analytics for data processing’ in line with USAFRICOM’s mission objectives,” said Heidy Khlaaf, an AI safety expert who has previously worked with OpenAI. “Specifically, the document claims these tools will enhance efficiency, accuracy, and scalability, when in reality these tools have been shown to consistently produce highly inaccurate outputs. These claims demonstrate a worrying lack of awareness by those procuring these technologies of the high risks these tools pose in mission-critical environments.”
Since OpenAI quietly removed the portion of its terms of service banning military work in January, the company has steadily made inroads with U.S. national security agencies eager to integrate impressive but frequently inaccurate tools like ChatGPT. In June, OpenAI added former National Security Agency Director Paul Nakasone, a Trump appointee, to its board of directors. The company’s current head of national security partnerships is Katrina Mulligan, a Defense Department alumna who previously worked on “special operations and irregular warfare,” according to her LinkedIn profile.
On Thursday, the company published a post outlining its “approach to AI and national security,” timed to a White House directive ordering the Pentagon to accelerate its adoption of tools like those made by OpenAI. According to the post, the “values that guide our national security work” include “democratic values,” “human rights,” and “accountability,” and “all AI applications, especially those involving government and national security, should be subject to oversight, clear usage policies, and ethical standards.” OpenAI’s language clearly echoes the White House order, which, as the Washington Post reported, prohibits security and intelligence agencies from using artificial intelligence in ways that are not “aligned with democratic values.”
AFRICOM’s document offers few details on how exactly it will use OpenAI’s tools, but the command’s regular entanglement with coups, civilian killings, torture, and covert warfare in Africa would seem at odds with OpenAI’s professed national security framework. Last year, AFRICOM commander Gen. Michael Langley told the House Armed Services Committee that his command shares “core values” with Col. Mamadi Doumbouya, an AFRICOM trainee who overthrew the government of Guinea and declared himself its leader in 2021.
Although U.S. military activities in Africa have received relatively little attention compared to those of U.S. Central Command, which oversees U.S. forces in the Middle East, AFRICOM’s presence is significant and frequently controversial. Despite claims of a “light footprint” on the continent, The Intercept reported in 2020 on a formerly secret AFRICOM map showing “a network of 29 U.S. military bases stretching from one side of Africa to the other.” Since its founding in 2007, much of AFRICOM’s purpose has centered on countering extremist groups in the Sahel, the Lake Chad Basin, and the Horn of Africa, through training and advising African forces, low-profile special operations missions, and drone bases, all in the name of safety and stability on the continent. The results have been dire. The State Department counted a total of just nine terrorist attacks across Africa in 2002 and 2003, the first years of U.S. counterterrorism assistance on the continent. Attacks by Islamist militant groups in Africa now exceed 6,700 per year, an increase of 74,344 percent, according to the Africa Center for Strategic Studies, a Pentagon research institution.
As the violence has spiraled, at least 15 officers who benefited from U.S. security assistance in West Africa and the greater Sahel during the war on terror have been involved in 12 coups, including in Niger last year. (At least five leaders of the July 2023 coup there were backed by the United States, according to U.S. officials.) U.S. allies have also been implicated in numerous alleged human rights abuses. In 2017, The Intercept reported that a military base in Cameroon used by AFRICOM for surveillance drone flights was also being used by Cameroonian forces to torture prisoners.
Data handling has long been a challenge for AFRICOM. When The Intercept tallied the number of U.S.-trained coup leaders on the continent, for example, the command admitted that it did not know how many coups its charges had carried out and did not even keep a list of such takeovers. “AFRICOM does not maintain a database with this information,” spokesperson Kelly Cahalan told The Intercept last year.
AFRICOM’s mismanagement of information has also proved deadly. Following a 2018 drone strike in Somalia, AFRICOM said it had killed “five terrorists” and destroyed one vehicle, and that “no civilians were killed in this airstrike.” But despite months of “target development,” the attack on a pickup truck killed at least three, and possibly five, civilians, including Luul Dahir Mohamed and her 4-year-old daughter, Mariam Shilow Muse, according to a confidential U.S. military investigation obtained by The Intercept through the Freedom of Information Act.