Nevada’s senators have their eyes on the future, and it is quickly becoming the present.
Sen. Catherine Cortez Masto introduced a bill this week calling for a National Security Commission on Artificial Intelligence, and Sen. Dean Heller announced the selection of Reno as one of 10 U.S. sites chosen “to help accelerate drone integration into the national airspace system.”
Both of these developments will radically reshape the world we live in. The really big change will happen when they merge and we find ourselves surrounded by drones that think and act of their own accord.
Science fiction? No, the technology is already here.
We’ve come a long way from the push-button necklace people wore to notify a medical service that they had “fallen and can’t get up.” Today, people are equipping themselves with high-tech monitoring devices – some so small that they fit into a ring instead of a bracelet – and drones can speed defibrillators or other medical aid to the scene when they receive a trouble signal.
Sure, we are still a long way from the Skynet apocalypse from the “Terminator” movie franchise. But the prospect of flying robots sniffing out crime and enforcing law and order is not far away.
“As AI grows into an important tool for our national security, and a driver of economic growth, comprehensive public awareness and oversight is increasingly important,” said Cortez Masto. “The commission proposed in this bill will provide guidance on how we cultivate AI to help ensure we stay ahead of countries like China ... while also building guardrails to make certain the U.S. government responsibly uses AI.”
Among the 10 drone sites approved by U.S. Secretary of Transportation Elaine Chao, Grand Forks Air Force Base in North Dakota already has an unmanned aircraft mission that is looking for new uses. North Dakota Lt. Gov. Brent Sanford envisioned drones helping with oil field, flood and weather monitoring, as well as finding missing persons.
Steven Bradbury, a lawyer for the federal Transportation Department, said drones have caused some “apprehension” with the public but one of the initiative’s biggest goals will be increased “community awareness and acceptance” of unmanned aircraft.
Reno’s project includes the private company Flirtey, which has already developed the use of unmanned aircraft to deliver emergency medical supplies such as automated external defibrillators to jump-start the hearts of cardiac arrest victims. Imagine being stung by a bee and your ring monitor indicates you are going into anaphylactic shock. A Flirtey drone could be automatically launched to deliver an EpiPen to treat your allergic reaction.
Drones can fly quickly to places that are hard to reach, just like a helicopter but without the expense or danger to emergency flight crews. There seems to be no limit to their potential uses in remote areas like rural Nevada. According to Flirtey, deploying defibrillators at drone speed could increase the cardiac arrest survival rate from 10 percent today to approximately 47 percent tomorrow.
FOX News reported in March that a Chinese tech company is partnering with Microsoft “to give drones A.I. superpowers.”
“Windows developers will soon be able to employ drones, A.I. and machine learning technologies to create intelligent flying robots ...” DJI President Roger Luo said in a statement.
All technology comes with a dark side, of course, and Cortez Masto is wise to see the need for oversight of the artificial intelligence industry. Her bill defines an artificial system as one “that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to data sets.”
We wouldn’t want – for example – a cyberbot taking over our identity. Or is it too late for that? The Verge reported this week on a “widespread outcry over the ethical dilemmas raised by Google’s new Duplex system, which lets artificial intelligence mimic a human voice to make appointments.”
Google Assistant will soon be speaking in a convincingly human manner that includes verbal tics like “uh” and “um” and other colloquial phrases, a development one tech critic called “horrifying.”
Google raised eyebrows among its own employees earlier this year when the Department of Defense started using its TensorFlow AI system in its Project Maven.
According to an article in The Guardian, the program was established in July 2017 “to use machine learning and artificial intelligence to analyse the vast amount of footage shot by U.S. drones. The initial intention is to have AI analyse the video, detect objects of interest and flag them for a human analyst to review.”
Similar technology is used to analyze YouTube videos in order to weed out terrorist propaganda. The software deleted about 5 million videos for policy violations in last year’s fourth quarter, according to an article in Technology News.
Humans aren’t entirely out of the picture, however.
“YouTube said it still needed an in-house team of humans to verify automated findings on an additional 1.6 million videos that were removed only after some users watched the clips,” the article said.
Putting a damper on violent extremism is a good thing, but the technology will not stop there. A CNNMoney report this week was headlined “Google wants artificial intelligence to choose your news.”
“Google News says it will draw content from ‘trusted news sources,’ though it’s not exactly clear what qualifies as a trusted news source,” said the report. “The service will not use human editors, nor will it partner with specific news organizations.”
Does this mean we can look forward to “fake news” eventually being replaced by “artificial news”? I wonder what cyberbots will consider newsworthy.