DOD adopts 'ethical principles' for artificial intelligence

The Pentagon is adopting new ethical principles as it prepares to accelerate its use of artificial intelligence technology on the battlefield.

The new principles call for people to "exercise appropriate levels of judgment and care" when deploying and using AI systems, such as those that scan aerial imagery to look for targets.

Defense Department officials outlined the new approach Monday.

"The United States, together with our allies and partners, must accelerate the adoption of AI and lead in its national security applications to maintain our strategic position, prevail on future battlefields, and safeguard the rules-based international order," said Defense Secretary Mark Esper.

The adoption follows recommendations made last year by the Defense Innovation Board, an advisory group led by former Google CEO Eric Schmidt.

The department’s AI ethical principles encompass five major areas:

- Responsible. DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.

- Equitable. The Department will take deliberate steps to minimize unintended bias in AI capabilities.

- Traceable. The Department's AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.

- Reliable. The Department's AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life-cycles.

- Governable. The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

An existing 2012 military directive, DoD Directive 3000.09, requires humans to be in control of autonomous weapons but doesn't address broader uses of AI.