Matthew Connelly’s essay on state secrets and archival negligence offers a fascinating new lens on an old problem. The literature on national security secrecy tends to focus on the threat it poses to democratic accountability and participation. I, for one, have spent years thinking and writing about excessive government secrecy, and yet I’ve never considered the issue from the perspective of an archivist or historian.

Connelly describes numerous challenges to modern recordkeeping, ranging from the problem of “bit rot” (loss of data resulting from outmoded software or hardware) to the inadequacy of keyword searches as a substitute for finding aids. But one challenge that particularly concerns him is the government’s generation of a massive amount of classified material coupled with an ineffective system for declassifying it. I will limit myself to commenting on this aspect of his essay, since the rest of it goes beyond my expertise.

Connelly aptly conveys the scope of the problem — both how long it has existed and how massive it is. He outlines the skewed incentives that lead officials to classify documents unnecessarily and in ever-increasing numbers. He shows that the pace of declassification has fallen far behind the pace of classification, creating a growing and potentially insurmountable backlog. He makes a convincing case that our system of historical recordkeeping is in critical condition, on account of classification-related problems as well as other causes, and that “[i]f this system collapses, America’s commitment to learning from its history will become a thing of the past, because the past itself will be impossible to recover.”

Although they are understandably not Connelly’s focus, it’s worth highlighting some of the more commonly cited and immediate harms of overclassification. Depriving people of information about the government’s policies or activities impedes their ability to engage in informed debate and to cast informed votes. Overclassification thus damages the most basic mechanisms of democracy. It also undermines the rule of law, as the government cannot be held accountable for violations that are concealed from the public. And it subverts the constitutional system of checks and balances, making it more difficult in myriad ways for Congress and the courts to provide meaningful oversight.

Connelly proposes several solutions, all of which have merit. He leads with the issue of resources, exhorting Congress to increase funding for the National Archives and Records Administration. Funding is a dry subject, and legal and policy experts usually stick to sexier remedies—things like curbing executive privilege or beefing up judicial review. But as Joe Biden has said, “Don’t tell me what you value. Show me your budget, and I’ll tell you what you value.” Connelly is to be commended for understanding the critical importance of funding to this issue. I would expand on his recommendation and urge Congress to earmark funding for agencies’ own declassification efforts, or require agencies to spend a certain percentage of their information security budgets on declassification. As Connelly points out, the ratio of declassification spending to classification spending has decreased dramatically in recent years.

Connelly also recommends that the executive branch use data-science techniques, such as machine learning, to build a “declassification engine”—in lay terms, computer programs that could help identify sensitive information for purposes of speeding up declassification review. A CIA pilot project in this area showed promising results, but Congress hasn’t allocated the necessary funding to follow up. I agree with Connelly’s recommendation, with one caveat. It is critical that automated technology not be used to make decisions about whether to classify information in the first instance (as some have proposed). When “trained” with information that has already been classified, computers can identify documents that contain roughly the same information—a convenient aid to declassification. But initial determinations of whether national security would be harmed by the disclosure of information are inherently subjective and require careful thought and judgment.

As for “derivative” classification — the practice of classifying documents on the ground that they contain previously classified information — automation might seem useful in theory, but it could be disastrous in practice. Computer programs would occasionally make mistakes, especially in the beginning. Given that officials always err on the side of classification, they would likely correct a computer program’s false negatives, but not its false positives. The “machine learning” function would then internalize, perpetuate, and magnify the human tendency toward overclassification.
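The feedback loop described above can be illustrated with a toy simulation. Everything here is an illustrative assumption rather than a model of any real system: a hypothetical program scores documents, reviewers correct its missed secrets (false negatives) but leave its over-classifications (false positives) in place, and each retraining round therefore sees a label set skewed toward “classified” and lowers the bar accordingly.

```python
import random

random.seed(0)

def simulate(rounds=10, n_docs=1000):
    """Toy model of the asymmetric-correction feedback loop.
    A hypothetical model scores documents in [0, 1]; anything above
    `threshold` is marked classified. Only false negatives are
    corrected by reviewers, so retraining drifts toward classifying
    everything."""
    threshold = 0.5
    history = [threshold]
    for _ in range(rounds):
        labels = []
        for _ in range(n_docs):
            truly_sensitive = random.random() < 0.2   # 20% base rate (assumed)
            score = random.random()                   # model's score (assumed uniform)
            predicted = score > threshold
            if truly_sensitive and not predicted:
                labels.append(True)    # false negative: reviewer corrects it
            elif predicted:
                labels.append(True)    # false positive: left uncorrected
            else:
                labels.append(False)
        # "Retrain": pick a threshold that classifies the same fraction
        # of documents that the skewed labels call classified.
        frac_classified = sum(labels) / len(labels)
        threshold = 1.0 - frac_classified
        history.append(threshold)
    return history

hist = simulate()
print([round(t, 2) for t in hist])  # threshold ratchets steadily downward
```

In this toy setup the threshold decays geometrically toward zero: because uncorrected false positives are folded back into the training labels, each round classifies more than the last, which is the magnification effect the paragraph above describes.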

I would add two other recommendations in the area of declassification. Connelly mentions so-called “automatic,” systematic, and discretionary declassification, but he leaves out the fourth way in which records may be declassified: mandatory declassification review (MDR). Under MDR, members of the public can submit requests to agencies to declassify particular documents. They can appeal denials, first within the agency and then to the Interagency Security Classification Appeals Panel. The rate of declassification under MDR is orders of magnitude higher than under the Freedom of Information Act: More than 90% of requested documents are declassified either in whole or in part. But MDR is underfunded, understaffed, and notoriously slow. Again, Congress should dedicate more funding in this area. In addition, agencies should create an expedited review track — similar to the expedited review track that exists in FOIA — when the requested records address a matter of significant public interest.

To the four existing declassification mechanisms, I would add a fifth. All classified documents must be marked with a declassification date. In 2017, most documents were classified for periods of 10 years or less. Yet incredibly, there is no regular system for performing declassification reviews before “automatic” declassification kicks in at 25 years. Unless someone happens to request a document through FOIA or MDR, it is likely to remain classified even though its declassification date came and went years ago. Going forward, classified documents should be electronically tagged to generate a prompt when the document reaches its declassification date, triggering a requirement to review. This would not only make more information available to the public sooner but also reduce the burden on the automatic review process that is triggered at the 25-year mark.
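The trigger proposed above is technically simple. A minimal sketch, assuming a hypothetical record format in which each document carries the declassification date it was marked with at creation (the IDs and dates below are invented for illustration):

```python
from datetime import date

# Hypothetical records: each classified document carries the
# declassification date assigned when it was created.
DOCUMENTS = [
    {"id": "DOC-001", "declassify_on": date(2016, 3, 1)},
    {"id": "DOC-002", "declassify_on": date(2030, 1, 15)},
    {"id": "DOC-003", "declassify_on": date(2018, 7, 4)},
]

def due_for_review(docs, today):
    """Return IDs of documents whose marked declassification date has
    passed, i.e. those that should prompt a mandatory review rather
    than sit classified until the 25-year automatic review."""
    return [d["id"] for d in docs if d["declassify_on"] <= today]

print(due_for_review(DOCUMENTS, date(2018, 9, 13)))
# ['DOC-001', 'DOC-003']
```

Run periodically against an agency’s document index, a check like this would surface overdue documents automatically instead of leaving them to chance requests under FOIA or MDR.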

Even with these reforms, though, declassification will never be able to keep up unless we reduce the amount of classified information pouring into the system. Connelly alludes to this, and notes the need to adopt “a more rational, risk-management approach to protecting sensitive information.” He is correct, but the executive branch needs more than an attitude adjustment: It needs narrower and more specific criteria for classification. The executive order that currently governs classification contains no definition of “national security,” nor any examples of harm that would justify classification. Moreover, the categories of classifiable information listed in the order are far too broad. For instance, they include “intelligence sources or methods” writ large, even though intelligence agencies often rely on open sources and many of their methods are well known. The categories also include “foreign relations or foreign activities of the United States,” which encompasses much of what we read in the newspaper every day. While it’s true that officials need a fair amount of discretion in making assessments of national security harm, that discretion should not be entirely unfettered. I would like to see a White House–led commission of senior agency officials charged with tightening the criteria for classification and providing a definition of “damage to the national security” that sets an appropriately high bar.

The new criteria also should expand on the categories of information that may not be classified. In recent years, the government has advanced a dangerous new argument that information may be classified if our enemies could use it as anti-U.S. “propaganda.” The government has used this argument to shield photos and videos of Guantánamo detainees, for instance. Of course, the worse the U.S. government’s conduct, the more likely our enemies could use it to generate anti-U.S. sentiment. Indeed, the government’s theory provides a convenient end-run around the current prohibition on classifying information to “conceal violations of law”: The government can simply argue that the purpose of classification was to deny our enemies a propaganda opportunity. The resulting ability to classify the government’s worst abuses endangers core accountability and rule-of-law principles. New classification criteria should make clear that concerns about propaganda are not a legitimate basis for classification.

They should also rein in the practice of secret law. Increasingly, the executive branch is classifying rules and legal interpretations that set binding standards for government conduct. This secret body of law includes not only Office of Legal Counsel opinions—perhaps the best-known source of secret law—but also presidential directives, intelligence agencies’ rules and regulations, and unpublished international agreements that have the force of treaties. As I explained in a 2016 Brennan Center report, classifying legal authorities raises serious constitutional questions and leads to distinct democratic harms. Executive branch officials should not have the option of classifying pure legal analysis or legal standards.

Whatever the applicable limits on classification, officials will continue to exceed them until the incentives change. In his conclusion, Connelly suggests that in the longer run, it might become necessary for agencies to treat the wrongful withholding of information as seriously as they treat unauthorized disclosures. I agree, except that I wouldn’t wait. Agencies should act now to implement systems for holding officials accountable for overclassification. Classifiers should be required to document their reasoning (the National Geospatial-Intelligence Agency already does this, according to a recent report of the government office that oversees classification policy). Agencies should then conduct periodic spot audits, reviewing classification decisions and the supporting documentation. Officials found to engage in intentional, negligent, or routine overclassification should be subject to mandatory penalties. On the flip side, officials should be granted “safe harbor” for good-faith decisions not to classify information. Agencies might also consider giving small cash awards to employees who bring successful challenges to classification decisions.

Finally, the entire classification system rests on the premise that shielding information is an effective method of protecting national security. Given the practical difficulties with securing data in the digital era, it might be time to rethink this premise. Indeed, experts have already begun to question the utility of secrecy. In a 2005 memo, Defense Secretary Donald Rumsfeld concluded that “[t]he United States Government is incapable of keeping a secret. If one accepts that, and I do, that means that the U.S. Government will have to craft policies that reflect that reality.” In 2011, a distinguished group of national security officials and experts convened at a workshop to discuss how the United States might revamp its national security strategies for a world where data hacks and “insider threats” arguably make secrecy unsustainable, if not impossible. Reversing the secrecy-obsessed mindset that has permeated the national security state since its inception is easier said than done. But it might prove crucial to our security. We would never place our faith in, say, a missile defense system that performed as poorly as the secrecy system does today.

Of course, there is zero chance of the Trump administration taking up any of these proposals and little chance that a future one will pursue more than incremental change. Executive branch officials are far too committed to the secrecy system and unlikely to embrace limits on their own authority. Accordingly, my final recommendation picks up on a point that Connelly touches on in his conclusion: Congress must end its decades of abdication in this area. There are constitutional dimensions to the president’s power to classify information, but it does not follow that Congress is powerless. It has acted boldly in the past. FOIA, for instance, authorizes judges to overturn presidential classification decisions, although they almost never do so. Many of the reforms sketched above could be mandated or incentivized by Congress. The next Congress, or a future one, should flex its constitutional muscle and exert some control over the runaway classification regime. That would be a win for democracy, the rule of law, and — as Connelly has shown — history.

I proposed this measure, along with others discussed here, in a short white paper solicited by the Public Interest Declassification Board, a presidential advisory group.

These proposals are explained in more detail in another Brennan Center report.

© 2018, Elizabeth Goitein. 


Cite as: Elizabeth Goitein, Rescuing History (and Accountability) from Secrecy, 18-05.b Knight First Amend. Inst. (Sept. 13, 2018), [].