An interesting story made the rounds back in August. While conducting a search for “python lambda function list comprehension,” programmer Max Rosett was suddenly invited to attempt a cryptic coding challenge. After completing it, he was contacted by Google — where he now works. The challenge was part of Google’s novel recruiting strategy, which allowed the company to identify talent by analyzing search habits.
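For readers unfamiliar with the query that triggered Rosett's challenge: it refers to a common Python idiom, applying an anonymous `lambda` function to each element inside a list comprehension. A minimal illustration (the variable names here are just for the example):

```python
# A lambda is an anonymous function; assigning it to a name lets us
# apply it to each element inside a list comprehension.
square = lambda x: x * x
squares = [square(n) for n in range(5)]
print(squares)  # [0, 1, 4, 9, 16]
```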
In the wake of the Paris killings, Western leaders have begun emphasizing a need to ratchet up intelligence efforts. Much of what they propose would involve personnel. There is a widespread feeling that too much is slipping through the cracks, and what seem like small lapses are leading to unacceptable catastrophes.
Hillary Clinton has called for an “intelligence surge.” David Cameron pledged that the U.K. would hire 1,900 new intelligence officers, and the French enacted sweeping emergency powers, many of them designed to strengthen and broaden intelligence-gathering operations.
Even before the terrible recent events, Washington was well aware of how badly it needs Silicon Valley. Prominent officials have made repeated visits to Palo Alto, and the Defense Department has invested a great deal of money in tech. These efforts go well beyond recruiting, seeking innovations and tools that are considered “vital to the future of national defense.” Information technology and intelligence are now inextricably linked.
Despite these overtures, the relationship between government agencies and tech is at a low point, and encryption has been the primary source of contention. Law enforcement seems to feel that tech firms prioritize customer retention and privacy over national defense, while security experts feel the agencies fail to grasp the technical difficulties underlying their requests, and that, by making systems generally more vulnerable to attack, the agencies are undermining the very security they hope to achieve.
Things have gotten so bad that the tone among technical experts has shifted toward open contempt, which is understandable when the officials criticizing them have openly admitted to lacking even basic knowledge of the technologies they opine upon. The gridlock is particularly concerning given the increasing urgency of preventing these attacks, especially when tech could be doing a great deal to help.
Finding good intelligence hires is difficult. It requires identifying individuals who can absorb large quantities of information while quickly and accurately gauging significance and risk. Information technologists are uniquely capable of facilitating these recruiting efforts with unconventional methods. By creatively employing underutilized data sources (such as search habits), tech companies can locate these skills more easily among the civilian population, effectively increasing the size and quality of the available talent pool for intelligence.
But where tech could be most useful is in building software tools. One of the biggest issues intelligence agencies face is akin to the “big data” problem so often talked about in tech: They need a way to effectively analyze and act upon the huge stores of data at their fingertips. Many of the information sharing programs in place today are concerned more with reporting and providing access to disparate bits of information than with gleaning significance and communicating it rapidly.
A recent piece in BankInfoSecurity does a nice job of distinguishing between intelligence sharing and information sharing. We need far more of the former. The signal-to-noise problem already appears to overwhelm even our most sophisticated agencies. Terrorism benefits from the more general trend now emerging in which small groups can cause asymmetric damage from afar, and such threats are increasingly difficult to detect among the noise. When defense efforts are broad and untargeted, they will inevitably fail.
One area where tech could be extremely useful would be in analyzing volumes of financial data for anomalies. Indeed, this has been an area of increasing focus for governments since the attacks, and with good reason: It quickly became evident that financial information concerning the attackers and their networks was already on hand at a variety of financial institutions. Bank intelligence has already been used to strategically target ISIS oil assets.
I suspect a useful defensive device for small-scale attacks would be something akin to a “financial wiretap,” which might alert intelligence analysts to suspicious activity in real time, as opposed to the slower reporting system in place today. Had such a tool existed prior to Paris, one or many intelligence services might have been instantly alerted to the prepaid debit card transactions that the attackers used to move funds, and said services could have quickly prioritized them as targets.
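To make the idea concrete, here is a toy sketch of such a “financial wiretap”: a simple rule that scans a transaction stream and flags large prepaid-card loads by accounts already on a watchlist. Everything here is hypothetical — the account IDs, thresholds and field names are invented for illustration, and a real system would be vastly more sophisticated:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    account: str   # hypothetical account identifier
    kind: str      # e.g. "prepaid_load", "wire", "purchase"
    amount: float

# Hypothetical watchlist of accounts already under reasonable suspicion.
WATCHLIST = {"acct-1041", "acct-2210"}

def should_alert(tx: Transaction) -> bool:
    """Flag sizable prepaid-card loads by watchlisted accounts."""
    return (tx.account in WATCHLIST
            and tx.kind == "prepaid_load"
            and tx.amount >= 500)

# A real deployment would consume a live feed; here, a small sample stream.
stream = [
    Transaction("acct-9999", "purchase", 40.0),
    Transaction("acct-1041", "prepaid_load", 750.0),
    Transaction("acct-2210", "wire", 120.0),
]
alerts = [tx for tx in stream if should_alert(tx)]
```

The key design point is the same one the article makes: the rule fires only for accounts already under suspicion, which is what keeps the alert volume manageable.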
But such tools will only be useful if they are aimed at suspects reasonably under suspicion already (or else they will produce huge numbers of false positives from the broader population). And if those in the tech industry are to be expected to produce them, they must trust that they will be used responsibly in the national interest, with clear and constitutional bases for their use.
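The false-positive problem is simple base-rate arithmetic. A quick back-of-the-envelope calculation, with entirely made-up numbers, shows why screening the whole population drowns analysts even with a very accurate detector:

```python
# Hypothetical numbers: screen a population of 300M for ~1,000 real
# threats with a detector that is "99% accurate" in both directions.
population = 300_000_000
true_threats = 1_000
true_positive_rate = 0.99    # detector catches 99% of real threats
false_positive_rate = 0.01   # but wrongly flags 1% of innocents

caught = true_threats * true_positive_rate
false_alarms = (population - true_threats) * false_positive_rate

# Fraction of alerts that point at an actual threat.
precision = caught / (caught + false_alarms)
print(f"False alarms: {false_alarms:,.0f}; precision: {precision:.4%}")
```

With these numbers the detector generates roughly three million false alarms, and only about 0.03% of its alerts concern a real threat — which is why such tools must be aimed at an already-narrowed suspect pool.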
We cannot simply press-gang the industry into this work, nor expect it to blindly trust that agencies that have abused their power before will not do so again. But if law enforcement makes these overtures in good faith, one can hope the industry will become more welcoming and contribute to the important work that needs to be done.
During an iconic scene in the show Mad Men, Don Draper says, “If you don’t like what’s being said, change the conversation.” Neither American national security nor the tech industry seems to be benefiting much from the bitter fight over encryption. They should follow Don Draper’s advice … we urgently need them to.