- The Air Force has asked for $5.8 billion in its budget to build AI-powered XQ-58A Valkyrie aircraft.
- The fully autonomous aircraft are suited to flying suicide missions and protecting human pilots, the Air Force said.
The Air Force is requesting a multibillion-dollar budget allocation to build 1,000 to 2,000 unmanned aircraft flown by AI pilots.
According to The New York Times, the XQ-58A Valkyrie is designed to act as a robotic wingman for human airmen, providing cover and handling situations where a human pilot might struggle. It is particularly useful on suicide missions that a human would be unlikely to survive.
The craft will undergo testing later this year in a simulation where it will develop its own plan to pursue and eliminate a target over the Gulf of Mexico, according to the Times.
The Valkyrie can travel at 550 mph, with a range of 3,000 nautical miles and an operational altitude of 45,000 feet. Earlier Valkyries, such as the XB-70 bomber that first flew in 1964, needed pilots in the cockpit, and only a few were ever built.
The budget estimate, which Congress has not yet approved, puts the cost of building the craft at $5.8 billion over five years. The Air Force has flown the vehicle in test flights for several years, including as a datalink for F-22s and F-35s and under its Skyborg program, which uses artificial intelligence to operate unmanned aircraft like the Valkyrie.
Each Valkyrie, according to The Times, will cost between $3 million and $25 million, significantly cheaper than a piloted plane.
Representatives from the Air Force and the Department of Defense did not immediately reply to Insider's request for comment.
Although the Air Force's "Next Generation Air Dominance" program has won broad approval within the military, human rights campaigners worry that the unmanned war machines could lead to a nightmarish, "Terminator"-like future.
Mary Wareham, the advocacy director of Human Rights Watch's arms division and a supporter of international restrictions on autonomous lethal weapons, told the Times that outsourcing killing to machines and allowing computer sensors to take human life crossed a moral line.
Other opponents of AI weapons, such as the nonprofit Future of Life Institute, call these systems "slaughterbots": algorithmic decision-making in weapons enables faster combat, which can raise the risks of rapid conflict escalation and unpredictability, as well as the risk of developing weapons of mass destruction.
At least as far back as 2019, UN Secretary-General António Guterres stated that "machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant, and should be prohibited by international law."