The CIA and other US intelligence agencies will soon have an AI chatbot similar to ChatGPT. The program, revealed on Tuesday by Bloomberg, will train on publicly available data and provide sources alongside its answers so agents can confirm their validity. The aim is for US spies to more easily sift through ever-growing troves of information, although the exact nature of what constitutes “public data” could spark some thorny privacy issues.
“We’ve gone from newspapers and radio, to newspapers and television, to newspapers and cable television, to basic internet, to big data, and it just keeps going,” Randy Nixon, the CIA’s director of Open Source Enterprise, said in an interview with Bloomberg. “We have to find the needles in the needle field.” Nixon’s division plans to distribute the AI tool to US intelligence agencies “soon.”
Nixon said the tool will allow agents to look up information, ask follow-up questions and summarize daunting masses of data. “Then you can take it to the next level and start chatting and asking questions of the machines to give you answers, also sourced,” he said. “Our collection can just continue to grow and grow with no limitations other than how much things cost.”
The CIA hasn’t specified which AI tool (if any) it’s using as the foundation for its chatbot. Once the tool is available, the entire 18-agency US intelligence community will have access to it. However, lawmakers and the public won’t be able to use it.
Nixon said the tool would follow US privacy laws. However, he didn’t say how the government would prevent it from leaking onto the internet, or keep it from drawing on information that’s sketchily acquired but technically “public.” Federal agencies (including the Secret Service) and police forces have been caught bypassing warrants by buying troves of data on commercial marketplaces. That data has included phone location records, which the government can technically describe as open-source.
“The scale of how much we collect and what we collect on has grown astronomically over the last 80-plus years, so much so that this could be daunting and at times unusable for our consumers,” Nixon said. He envisions the tool allowing a scenario “where the machines are pushing you the right information, one where the machine can auto-summarize, group things together.”
The US government’s decision to move forward with the tool could be influenced by competition with China, which has stated that it wants to surpass its rivals and become the world’s de facto AI leader by 2030.
The US has taken steps to counter China’s influence while examining AI’s domestic and economic risks. Last year, the Biden administration published a Blueprint for an AI Bill of Rights, outlining the White House’s principles for AI development and use. It has also pushed for an AI risk management framework and invested $140 million in creating new AI and machine learning research institutes. In July, President Biden met with leaders from AI companies, who made non-binding commitments to develop their products ethically.