In Project Maven’s Wake, the Pentagon Seeks AI Tech Talent

The American military is desperately trying to get a leg up in the field of artificial intelligence, which top officials are convinced will deliver victory in future warfare. But internal Pentagon documents and interviews with senior officials make clear that the Defense Department is reeling from being spurned by a tech giant and struggling to develop a plan that might work in a new sort of battle—for hearts and minds in Silicon Valley.

The battle began with an unexpected loss. In June, Google announced it was pulling out of a Pentagon program—the much-discussed Project Maven—that used the tech giant’s artificial intelligence software. Thousands of the company’s employees had signed a petition two months earlier calling for an end to its work on the project, an effort to create algorithms that could help intelligence analysts pick out military targets from video footage.

Inside the Pentagon, Google’s withdrawal brought a combination of frustration and distress—even anger—that has percolated ever since, according to five sources familiar with internal discussions on Maven, the military’s first big effort to utilize AI in warfare.

About This Story

This article was produced in partnership with the Center for Public Integrity, a nonprofit, nonpartisan news organization.

“We have stumbled unprepared into a contest over the strategic narrative,” said an internal Pentagon memo circulated to roughly 50 defense officials on June 28. The memo depicted a department caught flat-footed and newly at risk of alienating experts critical to the military’s artificial intelligence development plans.

“We will not compete effectively against our adversaries if we do not win the ‘hearts and minds’ of the key supporters,” it warned.

Maven was actually far from complete and cost only about $70 million in 2017, a molecule of water in the Pentagon’s oceanic $600 billion budget that year. But Google’s announcement exemplified a larger public relations and scientific challenge the department is still wrestling with. It has responded so far by trying to create a new public image for its AI work and by seeking a review of the department’s AI policy by an advisory board of top executives from tech companies.

The reason for the Pentagon’s anxiety is clear: It wants a smooth path to use artificial intelligence in weaponry of the future, a desire already backed by the promise of several billion dollars to try to ensure such systems are trusted and accepted by military commanders, plus billions more in expenditures on the technologies themselves.

The exact role that AI will wind up playing in warfare remains unclear. Many weapons with AI will not involve decision-making by machine algorithms, but the potential for them to do so will exist. As a Pentagon strategy document said in August: “Technologies underpinning unmanned systems would make it possible to develop and deploy autonomous systems that could independently select and attack targets with lethal force.”

Developing artificial intelligence, officials say, is unlike creating other military technologies. While the military can easily turn to big defense contractors for cutting-edge work on fighter jets and bombs, the heart of innovation in AI and machine learning resides among the non-defense tech giants of Silicon Valley. Without their help, officials worry, they could lose an escalating global arms race in which AI will play an increasingly important role, something top officials say they are unwilling to accept.

“If you decide not to work on Maven, you’re not actually having a discussion on if artificial intelligence or machine learning are going to be used for military operations,” Chris Lynch, a former tech entrepreneur who now runs the Pentagon’s Defense Digital Service, said in an interview. AI is coming to warfare, he says, so the question is, which American technologists are going to engineer it?

Lynch, who recruits technical experts to spend several years working on Pentagon problems before returning to the private sector, said that AI technology is too important, and that the agency will proceed even if it has to rely on lesser experts. But without the help of the industry’s best minds, Lynch added, “we’re going to pay somebody who is far less capable to go build a far less capable product that may put young men and women in dangerous positions, and there may be mistakes because of it.”

Google isn’t likely to shift gears soon. Less than a week after announcing in June that it would not seek to renew the Maven contract, Google released a set of AI principles specifying that the company would not use AI for “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.”

Some defense officials have complained since then that Google was being unpatriotic, noting that the company was still pursuing work with the Chinese government, the top US competitor in artificial intelligence technology.

“I have a hard time with companies that are working very hard to engage in the market inside of China, and engaging in projects where intellectual property is shared with the Chinese, which is synonymous with sharing it with the Chinese military, and then don’t want to work for the US military,” General Joe Dunford, chairman of the Joint Chiefs of Staff, commented while speaking at a conference in November.

In December testimony before Congress, Google CEO Sundar Pichai acknowledged that Google had experimented with a program involving China, Project Dragonfly, aimed at developing a model of what government-censored search results would look like in China. However, Pichai testified that Google currently “has no plans to launch in China.”

Project Maven’s aim was to simplify work for intelligence analysts by tagging object types in video footage from drones and other platforms, helping analysts gather information and narrow their focus on potential targets, according to sources familiar with the partly classified program. But the algorithms did not select the targets or order strikes, a longtime fear of those worried about the intersection of advanced computing and new forms of lethal violence.

Many at Google nonetheless saw the program in alarming terms.

“They immediately heard drones and then they thought machine learning and automatic target recognition, and I think it escalated for them pretty quickly about enabling targeted killing, enabling targeted warfare,” said a former Google employee familiar with the internal discussions.

Google is just one of the tech giants that the Pentagon has sought to enlist in its effort to inject AI into modern warfare technology. Among the others: Microsoft and Amazon. After Google’s announcement in June, more than a dozen large defense firms approached defense officials, offering to take over the work, according to current and former Pentagon officials.

But Silicon Valley activists also say the industry cannot easily ignore the ethical qualms of tech workers. “There’s a division between those who answer to shareholders, who want to get access to Defense Department contracts worth multimillions of dollars, and the rank and file who have to build the things and who feel morally complicit for things they don’t agree with,” the former Google employee said.

In an effort to bridge this gulf and dampen hard-edged opposition from AI engineers, the Defense Department has so far undertaken two initiatives.

The first, formally begun in late June, was to create a Joint Artificial Intelligence Center meant to oversee and manage all of the military’s AI efforts, with an initial focus on PR-friendly humanitarian missions. It’s set to be run by Lieutenant General Jack Shanahan, whose last major assignment was running Project Maven. In a politically shrewd decision, its first major initiative is to figure out a way to use AI to help organize the military’s search and rescue response to natural disasters.

“Our goal is to save lives,” Brendan McCord, one of the chief architects of the Pentagon’s AI strategy, said while speaking at a technical conference in October. “Our military’s fundamental role, its mission, is to keep the peace. It is to deter war and protect our country. It is to improve global stability, and it’s to ultimately protect the set of values that came out of the Enlightenment.”

The second initiative is to order a new review of AI ethics by an advisory panel of tech experts, the Defense Innovation Board, which includes former Google CEO Eric Schmidt and LinkedIn cofounder Reid Hoffman.

That review, designed to develop principles for the use of AI by the military, is being managed by Joshua Marcuse, a former adviser to the secretary of defense on innovation issues who is now executive director of the board. The review is expected to take about nine months, during which the advisory panel will hold public meetings with AI experts while an internal Pentagon group considers the same questions. The board will then forward recommendations to Secretary of Defense James Mattis about the ways that AI should or should not be injected into weapons programs.

“This has got to be about actually looking in the mirror and being willing to impose some constraints on what we will do, on what we won’t do, knowing what the boundaries are,” Marcuse said in an interview.

To make sure the debate is robust, Marcuse said that the board is seeking out critics of the military’s role in AI.

“They have a set of concerns, I think really valid and legitimate concerns, about how the Department of Defense is going to apply these technologies, because we have legal authority to invade people’s privacy in certain circumstances, we have legal authority to commit violence, we have legal authority to wage war,” he said.

Resolving those concerns is critical, officials say, because of the difference in how Washington and Beijing manage AI talent. China can conscript experts to work on military problems, whereas the United States has to find a way to interest and attract outside experts.

“They have to choose to work with us, so we need to offer them a meaningful, verifiable commitment that there are real opportunities to work with us where they can feel confident that they’re the good guys,” Marcuse said.

Despite his willingness to discuss potential future constraints on AI usage, Marcuse said he didn’t think the board would try to change the Pentagon’s existing policy on autonomous weapons that depend on AI, which was put in place by the Obama administration in 2012.

That policy, which underwent a minor technical revision by the Trump administration in May 2017, doesn’t prevent the military from using artificial intelligence in any of its weapons systems. It mandates that commanders have “appropriate levels of human judgment” over any AI-infused weapons systems, although the phrase isn’t further defined and remains a source of confusion within the Pentagon, according to multiple officials there.

It does, however, require that before a computer can be programmed to initiate deadly action, the weapons system that contains it must undergo a special review by three senior Pentagon officials—in advance of its purchase. To date, that special review hasn’t been undertaken.

In late 2016, during the waning days of the Obama administration, the Pentagon took a new look at the 2012 policy and decided in a classified report that no major change was needed, according to a former defense official familiar with the details. “There was nothing that was held up, there was no one who thought, ‘Oh we have to update the directives,’” the former official said.

The Trump administration nonetheless has internally discussed making it clearer to weapons engineers within the military—who it fears have been reluctant to inject AI into their designs—that the policy doesn’t ban the use of autonomy in weapons systems. The contretemps in Silicon Valley over Project Maven at least temporarily halted that discussion, prompting the department’s leaders to try first to win the support of the Defense Innovation Board.

But one way or another, the Pentagon intends to integrate more AI into its weaponry. “We’re not going to sit on the sidelines as a new technology revolutionizes the battlefield,” Marcuse said. “It’s not fair to the American people, it’s not fair to our service members who we send into harm’s way, and it’s not fair to our allies who depend on us.”


The Center for Public Integrity is a nonprofit, nonpartisan, investigative newsroom in Washington, DC. More of its national security reporting can be found here.


Introducing Project Mu – Windows Developer Blog

The Microsoft Devices Team is excited to announce Project Mu, the open-source release of the Unified Extensible Firmware Interface (UEFI) core leveraged by Microsoft products including both Surface and the latest releases of Hyper-V. UEFI is system software that initializes hardware during the boot process and provides services for the operating system to load. Project Mu contributes numerous UEFI features targeted at modern Windows-based PCs. It also demonstrates a code structure and development process for efficiently building scalable and serviceable firmware. These enhancements allow Project Mu devices to support Firmware as a Service (FaaS). Similar to Windows as a Service, Firmware as a Service optimizes UEFI and other system firmware for timely quality patches that keep firmware up to date and enables efficient development of post-launch features.

When first enabling FaaS on Surface, we learned that the open source UEFI implementation TianoCore was not optimized for rapid servicing across multiple product lines. We spent several product cycles iterating on FaaS, and have now published the result as free, open source Project Mu! We are hopeful that the ecosystem will incorporate these ideas and code, as well as provide us with ongoing feedback to continue improvements.

[Image: Project Mu on-screen keyboard]

Project Mu includes:

  • A code structure & development process optimized for Firmware as a Service
  • An on-screen keyboard
  • Secure management of UEFI settings
  • Improved security by removing unnecessary legacy code, a practice known as attack surface reduction
  • High-performance boot
  • Modern BIOS menu examples
  • Numerous tests & tools to analyze and optimize UEFI quality.

[Image: Project Mu boot configuration]

We look forward to engagements with the ecosystem as we continue to evolve and improve Project Mu to our mutual benefit!

Check out Project Mu Documentation and Code here: https://microsoft.github.io/mu/


Project Capillary: End-to-end encryption for push messaging, simplified

Posted by Giles Hogben, Privacy Engineer and Milinda Perera, Software Engineer

Developers already use HTTPS to communicate with Firebase Cloud Messaging (FCM). The channel between the FCM server endpoint and the device is encrypted with SSL over TCP. However, messages are not encrypted end-to-end (E2E) between the developer server and the user device unless developers take special measures.

To this end, we advise developers to use keys generated on the user’s device to encrypt push messages end-to-end. But implementing such E2E encryption has historically required significant technical knowledge and effort. That is why we are excited to announce the Capillary open source library, which greatly simplifies the implementation of E2E encryption for push messages between developer servers and users’ Android devices.
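As a rough illustration of the pattern the library automates (a minimal sketch using only standard Java crypto classes, not Capillary’s actual API), the device generates a key pair, registers just the public key with the developer server, and the server hybrid-encrypts each payload so that FCM only ever carries ciphertext:

```java
// Conceptual sketch only -- NOT the Capillary API. It shows the underlying idea
// with standard JCA classes: the device generates a key pair, registers the
// public key with the developer server, and the server encrypts each push
// payload so that only that device can read it. FCM merely carries ciphertext.
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class E2ePushSketch {

    // On the device: generate the key pair; only the public key leaves the device.
    static KeyPair generateDeviceKeyPair() throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        return kpg.generateKeyPair();
    }

    // On the developer server: hybrid encryption -- a fresh AES-GCM key encrypts
    // the payload, and the device's RSA public key wraps that AES key.
    static byte[][] encryptForDevice(PublicKey devicePublicKey, byte[] payload) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey aesKey = kg.generateKey();

        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher aes = Cipher.getInstance("AES/GCM/NoPadding");
        aes.init(Cipher.ENCRYPT_MODE, aesKey, new GCMParameterSpec(128, iv));
        byte[] ciphertext = aes.doFinal(payload);

        Cipher rsa = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        rsa.init(Cipher.ENCRYPT_MODE, devicePublicKey);
        byte[] wrappedKey = rsa.doFinal(aesKey.getEncoded());

        // These three parts travel inside the FCM data message.
        return new byte[][] {wrappedKey, iv, ciphertext};
    }

    // Back on the device: unwrap the AES key with the private key, then decrypt.
    static byte[] decryptOnDevice(PrivateKey devicePrivateKey,
                                  byte[] wrappedKey, byte[] iv, byte[] ciphertext) throws Exception {
        Cipher rsa = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        rsa.init(Cipher.DECRYPT_MODE, devicePrivateKey);
        SecretKey aesKey = new SecretKeySpec(rsa.doFinal(wrappedKey), "AES");

        Cipher aes = Cipher.getInstance("AES/GCM/NoPadding");
        aes.init(Cipher.DECRYPT_MODE, aesKey, new GCMParameterSpec(128, iv));
        return aes.doFinal(ciphertext);
    }
}
```

Capillary wraps this pattern up and additionally handles the key registration, caching, and integrity-protection details listed below.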

We also added functionality for sending messages that can only be decrypted on devices that are unlocked. This includes support for decrypting messages on devices using File-Based Encryption (FBE): encrypted messages are cached in Device Encrypted (DE) storage and message decryption keys are stored in Android Keystore, requiring user authentication. This allows developers to specify messages with sensitive content that remain encrypted in cached form until the user has unlocked and decrypted their device.
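The unlock-gated behavior builds on Android Keystore keys that require user authentication. The following is a minimal sketch of that building block rather than the library’s own key-management code; the key alias and validity window are arbitrary example values:

```java
// Illustrative Android Keystore setup, not Capillary's internal code. A key
// generated this way cannot be used for decryption until the user has
// authenticated (e.g., unlocked the device), which is what lets sensitive
// messages stay encrypted in Device Encrypted storage until unlock.
import android.security.keystore.KeyGenParameterSpec;
import android.security.keystore.KeyProperties;
import java.security.KeyPairGenerator;

public final class AuthBoundKeySketch {

    static void generateAuthBoundRsaKey() throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance(
                KeyProperties.KEY_ALGORITHM_RSA, "AndroidKeyStore");

        KeyGenParameterSpec spec = new KeyGenParameterSpec.Builder(
                "example_push_decryption_key",            // hypothetical alias
                KeyProperties.PURPOSE_DECRYPT)
                .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_RSA_OAEP)
                .setDigests(KeyProperties.DIGEST_SHA256)
                // Decryption only works after the user authenticates; until then,
                // received ciphertexts can be cached and decrypted post-unlock.
                .setUserAuthenticationRequired(true)
                .setUserAuthenticationValidityDurationSeconds(60)
                .build();

        kpg.initialize(spec);
        kpg.generateKeyPair();   // the private key never leaves the Keystore
    }
}
```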

The library handles:

  • Crypto functionality and key management across all versions of Android back to KitKat (API level 19).
  • Key generation and registration workflows.
  • Message encryption (on the server) and decryption (on the client).
  • Integrity protection to prevent message modification.
  • Caching of messages received in unauthenticated contexts to be decrypted and displayed upon device unlock.
  • Edge-cases, such as users adding/resetting device lock after installing the app, users resetting app storage, etc.

The library supports both RSA encryption with ECDSA authentication and Web Push encryption, allowing developers to re-use existing server-side code developed for sending E2E-encrypted Web Push messages to browser-based clients.

Along with the library, we are also publishing a demo app (at last, the Google privacy team has its own messaging app!) that uses the library to send E2E-encrypted FCM payloads from a gRPC-based server implementation.

What it’s not

  • The open source library and demo app are not designed to support peer-to-peer messaging and key exchange. They are designed for developers to send E2E-encrypted push messages from a server to one or more devices. You can protect messages between the developer’s server and the destination device, but not directly between devices.
  • It is not a comprehensive server-side solution. While core crypto functionality is provided, developers will need to adapt parts of the sample server-side code that are specific to their architecture (for example, message composition, database storage for public keys, etc.).

You can find more technical details describing how we’ve architected and implemented the library and demo here.
